Exhaustingly, AI hasn't yet gone away as a topic in law (not that it has made much impact either), and so I'm forced (admittedly by my own impulse-control issues) to write about it again.
This time I have been triggered by an Alex Smith tweet:
> So can “legaltech” or GPT3 or other magic write a legal template/standard doc/precedent? If not now, by when?
>
> By template I mean something lawyers agreed as ready to be used as a start point/automated system or dare I say for the public to use.
>
> By what year is this possible?
(This is an old tweet he retweeted today, but the same conversation started up again like clockwork.)
Alex is being naughty, but people are still biting, and so the sheep and the goats are separating themselves along one line: do they get that current approaches to AI are still not (and have no plausible pathway to being) capable of understanding or meaningful reasoning?
Of course, lots of people don't get this, because if they did then we would have long ago given up discussing this stuff, but here we are. It's not only AI that has issues with understanding and reasoning.
Anyway, I spent a few minutes pondering a suitably scathing response to Mr Smith’s tweet and it occurred to me that the concept of satisficing might be useful as we try to reason about when AI might conceivably help.
You probably know the concept by now, because it's hard to be oblivious to the destruction of the homo economicus school of thought by the biases gang, but for those at the back: satisficing is when humans choose a good-enough solution rather than seeking the perfect, rational one. It turns out that finding something good enough and running with it is a better strategy for choosing pretty much anything, for example if you actually want to have fun rather than debate which bar to go to (missing Legal Geek already).
The reason I raise satisficing is that I think this may be a helpful test: for any given activity, will AI produce a good enough outcome to meet the stakeholders’ objectives?
I think this is helpful because (a) it forces us to define the stakeholders and their objectives, and (b) it recognises that we aren't in the perfection business.
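If it helps to make the idea concrete before the examples, here's a minimal sketch in Python (entirely my own toy illustration; the bar names, scores, and threshold are invented) of the difference between satisficing and optimising:

```python
# Toy illustration of satisficing vs. optimising.
# The options, scores, and "good enough" threshold are all made up.

bars = {"The Anchor": 6, "Dive Bar": 7, "Speakeasy": 9, "Rooftop": 8}

def satisfice(options, good_enough=7):
    """Take the first option that clears the bar (pun intended)."""
    for name, score in options.items():
        if score >= good_enough:
            return name
    return None  # nothing good enough; keep looking or lower the bar

def optimise(options):
    """Exhaustively compare every option to find the single best one."""
    return max(options, key=options.get)

print(satisfice(bars))  # "Dive Bar": good enough, and you're out the door
print(optimise(bars))   # "Speakeasy": the "best", after debating all evening
```

The satisficer stops at the first acceptable answer; the optimiser pays the full cost of comparing everything. The question for AI in law is whether its output clears the relevant stakeholders' threshold, not whether it is the best imaginable text.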
Let's try some examples:
- I want a nice picture with a particular theme and style: Why yes, DALL-E may well be satisfactory!
- I want a nice picture that changes how I think (you know, art): Hmmm
- I want a picture of my dog in a particular painting style: Why yes, a photo and Fotor might be enough!
- I want an original Rembrandt: No, an AI take on Rembrandt won't work because I really want status. The clue is in the “original”.
- I want to write an essay that gets a passing grade from my teacher: Why yes, AI may tick the box (but watch out for the anti-cheating tech).
- I want to learn more about that topic through writing an essay: Nope, AI ain't going to help.
- I want to convince someone about my theory on the topic: AI will not do this either, as it would have to understand your theory and figure out how best to make the case.
Is this helping? Can you see the difference between filling space and making a point?
So let's come back to Alex Smith's naughty tweet. If all you wanted to do was generate some text on a page, then sure, use AI. But don't you want to make sure the standard covers the objectives of the parties using the contract? How can you, without understanding those objectives and how they might be balanced, as a standard requires? How can you ensure that the contract is internally consistent? Non-repetitive? Doesn't say the same thing twice? Clear? Reasonably complete? Relevant?
Anyone biting on this idea is revealing their motivations. AI can plausibly satisfy you on this task? Well, then you appear not to care whether the parties achieve their objectives. Ho hum.
Another point here: I can create a standard contract in a day or two with reference to a few examples. If you really, really want to get humans out of the loop, consider your motivations. Are you saving man-years of grunt work (good for you) or chasing novelty? There are more fun vices available.
Anyway, the issue with standards is not the time taken to produce the text, it's adoption. And adoption comes from aligned incentives and a sense of ownership in the standard by enough people. Can you see another problem here?
All the same, nice one, Alex Smith. You got a bite (a year later)!