Last week I gave a talk at the Legal Futures conference and this morning it was written up. I thought it might help to give more context and colour to the argument I was making.
The talk was to law firms and investors and I started with Charlie Munger’s quote: “Never, ever, think about anything else when you should be thinking about the power of incentives.”
The partnership model is an excellent way of preventing innovation because leaders are always at the mercy of their constituency, the partners, the most powerful of whom have done the best under the current way of working. Partners are given wide discretion over how they work individually and can always vote to remove the leaders if they disagree with strategy. Why should they support real change? After all, they have by definition made the current model work so far, and they are only going to be around for a finite period of time - they have the most to lose.
This means that it is hard to drive changes in ways of working and it is hard to lead radical firm moves. Oh, and the partnership structure is brilliant at preventing changes to the partnership model itself, for the same reasons. What is the incentive for any partner to agree to give up power voluntarily?
The partnership model in turn prevents most law firms from making the key radical change needed to make meaningful progress - getting rid of the time sheet. The time sheet and the billable hour incentivise maximising time spent on delivering services, or at the very least disincentivise the radical removal of time. All innovations will therefore be judged against that test (whether explicitly or implicitly). What incentive does an associate have to come up with a real hours-removing innovation? They would have to spend time (not billable time) creating it, and then the next year they would risk missing their bonus because they wouldn't meet their hourly target. Even firms that have introduced "innovation time" haven't addressed that second part. You get what you measure (because of incentives), and time sheets create hours.
Ah, but what about clients' demands, you ask? Well, clients haven't been as demanding as many like to think and have often failed to use their purchasing power to drive real change. The clients have been insourcing a lot of work as crude labour arbitrage, and the ALSPs/NewLaw firms in turn have started to pick up some of this work from the more sophisticated clients. But the law firms have mentally written off what used to be bread-and-butter work, instead focusing on the higher end. Not all will be successful - there's only so much bet-the-company work, with surveys putting it at between 3% and 7% of spend. But personal relationships go a long way in law, and many partners continue to hang in there.
The question those partners are asking is what do I have to do to keep my clients happy? And there, the clients have really failed to convince the law firms that they have to change (beyond debates over hourly rates). The firms believed they had to do something for a bit, but then they figured the clients weren't really serious. This story is told in this incredible slide produced by Jae Um, now at Baker & McKenzie. It shows the progress on a number of measures between 2014 and 2017 by US firms in two Altman Weil surveys. And the progress is… backwards:
Which takes us to AI. Because my core argument is that AI is the perfect topic to convince clients that you are doing something "innovative" while actually not doing anything much at all. It's so perfect: it sounds exciting, it only threatens the jobs of paralegals doing document review (and partners often don't have the same personal loyalties to the teams of paralegals as they do to their associate team members), and nothing else has to change. Reduce the price on one part of the project to protect the rest. Perhaps I'm cynical to claim that the main purpose of buying an AI licence is to put out a press release. Perhaps.
Looking a little further into what AI actually does, consider this map we produced of the lawtech available for companies to support their commercial contracts:
The systems are highly document-centric (rather than data-centric) and the vast majority of them are basically made up of workflow, databases and document transformation. Most companies have yet to introduce the majority of these systems (we have the data to prove this from the Radiant Benchmark). And AI only makes up two sections of this map:
Contract review covers tools like LegalSifter, ThoughtRiver and LawGeex, which are used to review third party paper to spot key issues. These can be helpful where you get a lot of contracts of a particular type (we use LegalSifter), but the question I would be asking GCs is why are you not contracting on your paper in these situations? There may be perennial leverage issues, but you could probably get the most impact by rewriting your standard terms so that they are so darned short, reasonable, relevant and clear that you use yours far more often. We have data on this too.
The other use is Extraction, which covers tools like Kira, Diligen, Seal, Luminance and many, many, many others. They are generally used in two use cases. The first is extracting data as part of getting contracts into contract management systems (alert: the bigger issue that companies have is that they can't find their contracts in the first place, and that could be solved, at least initially, by just setting up an internal email address to send them to and following through, or even better by using digital signatures). The second is where the law firms come in - document reviews as part of major one-off events (legislative changes, acquisitions etc). These tools are helpful for this, but let us be clear what is going on: an army of paralegals has been replaced by a tool which is faster and cheaper. But it's still just that one step in a big project, and the bigger issue is the usability and judgement calls associated with the report itself, not how the contracts were checked.
Perhaps these limited use cases sound disappointing, not the silver bullet you were looking for? Well, that's because of the hype associated with what "AI" actually does. And the key point is that you can automatically translate a clause into another language, break down the parts of a sentence, spot entities and, most famously, classify it as a type of clause (with about 70-something percent accuracy in the wild, sometimes up to 90%+), but the system cannot tell you what the clause means. It just can't, and we are nowhere close to solving that. So it's not really surprising that there are limited use cases.
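To make the classification point concrete, here is a minimal sketch in Python. The patterns and labels are invented for illustration - no vendor works exactly like this, and real systems use trained statistical models rather than keyword rules - but the output is the same kind of shallow label: a clause type, not an understanding of the clause.

```python
import re

# Toy keyword patterns per clause type - purely illustrative,
# not any product's actual approach.
CLAUSE_PATTERNS = {
    "governing_law": [r"\bgoverned by\b", r"\blaws of\b"],
    "limitation_of_liability": [r"\bliab(le|ility)\b", r"\bin no event\b"],
    "confidentiality": [r"\bconfidential\b", r"\bnon-disclosure\b"],
}

def classify_clause(text: str) -> str:
    """Label a clause by counting keyword hits; knows nothing of meaning."""
    text = text.lower()
    scores = {
        label: sum(bool(re.search(p, text)) for p in patterns)
        for label, patterns in CLAUSE_PATTERNS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

clause = "This Agreement shall be governed by the laws of England and Wales."
print(classify_clause(clause))  # prints "governing_law"
```

The label can be right while the system has no idea whether that governing-law clause is favourable, enforceable or even internally consistent - which is why "telling you what the clause means" remains out of reach.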
I was asked by an audience member how to approach introducing AI at a firm. I suggested that they start by fixing their incentives (ho, ho), then take process seriously and introduce lean, and slowly, over many years of continuous improvement, introduce simple systems (those databases and workflow and even document automation)... and then, after 10 years, look at AI. Perhaps it will understand clauses by then, though I doubt it. But no one really wants to hear that, and so the circus continues.