10 ways AI is changing law firm management
The rise of AI in the public consciousness has been nothing short of meteoric.
While mass market products such as ChatGPT are increasingly popular, professional service industries are also finding ways to engage with AI to optimise their businesses.
Law firms are no exception.
Outsiders have traditionally perceived law firms as maintaining a rather stuffy, conservative approach to running their businesses (see, for example, the billable hour).
However, many firms have taken a growing interest in legal technology for some time now, and in recent years that interest has skyrocketed.
Over half of lawyers now use some form of AI, according to University of Oxford research by John Armour and Mari Sako.
Here, we set out 10 areas to consider when engaging with AI – from its plentiful potential uses to its serious potential risks.
1. Training staff on AI technology
Discussing AI usage is one thing – getting training in place to make it feasible is quite another.
The Law Society’s horizon scanning report on AI and the legal profession asks: “Are managers receiving relevant training to prepare for intelligent machines?”
Staff need to know how to use AI tools, but they also need to understand why they should use them.
Demonstrate, with tangible examples, how AI tools will cut workloads, or simplify tedious processes.
Crucially, the training process needs to start at the top level – getting senior staff on board rather than expecting a bottom-up approach.
2. The impact on human lawyers
Lawyers need a wide range of skills for day-to-day work – from attention to detail and a dedicated work ethic to interpersonal skills.
The introduction of AI models is changing the skills required. Naturally, this includes basic technological skills (although AI systems should be as user-friendly as possible).
You also need to consider how your employee skills overlap with the abilities of AI models.
The horizon scanning report describes how AI systems are best used to “augment human judgement” rather than replace it entirely.
Routine, basic tasks are often the best place to employ AI models, since more complex tasks (requiring a personal understanding of the client’s motivations) remain far better suited to human judgement.
Other horizon scanning reports have suggested:
- the automation of menial tasks could contribute to the World Economic Forum’s prediction that 5 million jobs will be lost to computers in the next five years
- the increasing need for lawyers to develop flexible ‘T-shaped’ skills, providing depth of knowledge and the adaptability to collaborate with other specialists
3. Automating recruitment
The recruitment process is often complex at the junior level, with applications for training contracts and vacation schemes regularly involving rounds of CVs, cover letters, psychometric or situational judgement tests, and interviews.
AI systems can reduce this workload by automating parts of the process, such as comparing test scores over time or analysing the language of applications against desired characteristics.
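As a purely hypothetical sketch of one of these tasks – comparing test scores over time – the short Python example below works through invented candidate data. None of the names or figures come from a real system, and genuine recruitment tools combine far more signals and require rigorous fairness checks.

```python
# Purely illustrative: comparing invented candidates' test scores across rounds.
from statistics import mean

# Hypothetical situational judgement test scores per candidate, by round
scores = {
    "Candidate A": [62, 71, 78],
    "Candidate B": [80, 76, 74],
    "Candidate C": [55, 69, 83],
}

for candidate, rounds in scores.items():
    trend = rounds[-1] - rounds[0]  # change from first round to last
    print(f"{candidate}: average {mean(rounds):.0f}, trend {trend:+d}")
```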
4. Simplifying document review
A number of AI-based tools exist to help firms with menial processes – for instance, document review. One such tool, Luminance, has been used by Slaughter and May for some time.
These tools can scan large quantities of text and gather relevant findings rapidly. Consider the implications for large discovery tasks.
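To make the idea concrete, here is a minimal sketch in Python of the very simplest form of text triage – flagging documents that contain given terms. It is purely illustrative: the folder name and search terms are hypothetical, and commercial tools such as Luminance rely on far more sophisticated machine learning than keyword matching.

```python
# Purely illustrative: naive keyword-based triage of a folder of text files.
# Commercial review tools use far more sophisticated machine learning.
from pathlib import Path

SEARCH_TERMS = ["indemnity", "termination", "change of control"]  # hypothetical

def flag_documents(folder: str) -> dict[str, list[str]]:
    """Return a mapping of file name -> matched terms for quick triage."""
    flagged = {}
    for doc in Path(folder).glob("*.txt"):
        text = doc.read_text(encoding="utf-8", errors="ignore").lower()
        hits = [term for term in SEARCH_TERMS if term in text]
        if hits:
            flagged[doc.name] = hits
    return flagged

if __name__ == "__main__":
    for name, terms in flag_documents("contracts").items():  # hypothetical folder
        print(f"{name}: review for {', '.join(terms)}")
```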
5. The impact on fee structures
From a client perspective, simple answers will be increasingly within reach via free (or low-cost) legal-focused AI chatbots.
Firms should focus on selling the comprehensive skill package that qualified lawyers offer, engaging with AI for assistance where logical (and being open with clients about that).
6. Predicting case outcomes
University College London (UCL) researchers developed an AI model that predicts the outcome of a case by analysing the language in its documents – with a 79% success rate.
While not to be relied upon fully, this can be useful for generating a rough statistical estimate of success, which you can present to clients at whatever level of detail you deem appropriate.
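Purely as an illustration of the general approach – and emphatically not the UCL researchers’ model – the sketch below trains a tiny text classifier (using the scikit-learn library) on invented case summaries and outputs a rough probability of success. A real system would need large volumes of genuine case documents and careful validation before any figure was shared with a client.

```python
# Purely illustrative: a tiny bag-of-words classifier for case outcomes.
# The case summaries and labels below are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: summaries labelled 1 (claim upheld) or 0 (dismissed)
texts = [
    "The applicant was detained without judicial review for an extended period",
    "The authority followed the prescribed procedure and gave full reasons",
    "No effective remedy was available to challenge the decision",
    "The interference was prescribed by law and proportionate to its aim",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Estimate a rough probability of success for a new (invented) case summary
new_case = ["The applicant had no opportunity to challenge the detention"]
print(f"Estimated probability of success: {model.predict_proba(new_case)[0][1]:.0%}")
```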
7. Training AI models
One point often neglected in discussions around AI is that machine learning models must be ‘trained’ on large quantities of data.
A number of ethical considerations arise – for example, the use of patient data to train AI tools for drug discovery in the pharmaceutical industry.
Even within a law firm, you need to question how the model you will be using has been trained.
Is a document review tool offering your high street firm outcomes based on patterns it learned from documents passing through a Magic Circle firm? Are the two firms always trying to reach similar goals?
8. In-house vs external AI systems
Larger firms often maintain in-house departments working on legal tech products – for example, the Fuse incubator at Allen & Overy, which made headlines for its introduction of AI chatbot Harvey.
Armour and Sako note that, previously, “law firms tended to work in partnership with third party providers, who develop AI-lawtech solutions on their behalf”.
For many smaller firms, this is still an attractive prospect.
Sourcing AI tools externally is far cheaper and often still offers excellent results, even if the benefits are sometimes limited by a lack of tailoring to your business’s needs.
9. Liability-related risks
Jingchen Zhao discusses the importance of developing clear regulatory frameworks to address the issue of liability when an AI model makes a mistake.
A New York firm was fined for using fake case law generated by ChatGPT in court, highlighting the importance of qualified lawyers checking over any AI outputs.
Firms also need to see the bigger picture, ensuring that AI assistants produce outputs representative of a range of people and perspectives.
Liability issues are likely to arise where AI tools reproduce biases reflecting a narrow perspective (often a product of their inputs) – a problem seen with ChatGPT, and one that models such as Claude are explicitly attempting to address.
10. Avoiding technological determinism
American sociologist Thorstein Veblen coined the term ‘technological determinism’ to describe the idea that technology is the leading force driving civilisation’s advance from one era to the next.
Modern critiques often point out that human agency remains at the core of technology such as AI models.
For this reason, you should remain mindful of the benefits and limitations of AI tools in an industry where people are still at the heart of decision-making processes.
For now, AI tools are still best used as a powerful assistant.
I want to know more
The Lawyer Portal is a Law Society partner that provides resources for aspiring lawyers. It helps organisations and law firms to build brand recognition, widen participation into law and communicate directly with future lawyers.
Learn more about attitudes to lawtech adoption in the legal sector
Are you considering using legal technology in your practice?
The introduction to lawtech guide contains information about the types of products available and shares tips on successfully implementing new technology.