Navigating the benefits and risks of generative AI
Artificial intelligence is rapidly weaving itself into our personal and professional lives – and in applications that can leave us feeling empowered and overwhelmed in equal measure.
As the release of ChatGPT in November 2022 made clear, generative artificial intelligence (AI) has impressive power to synthesize large amounts of information in seconds.
Data-heavy organisations are seeing its potential to transform their work – even as they acknowledge the need to refine the technology and verify the accuracy of the results it produces.
Many law firms have been fine-tuning their own generative AI tools in the wake of ChatGPT’s launch, aiming to tap into the benefits of the technology while managing its potential risks.
However, the adoption of technology to support legal work is nothing new.
For years, firms have been using technology to automate tasks ranging from e-discovery to conflict checks. The adoption of generative AI is simply the next step in that process – but it does potentially introduce new complexities for firms, their clients and employees.
“Law firms have to consider a range of interests as they use generative AI in their work,” said Sharon Glynn, director and underwriter in the Bond & Specialty department at Travelers Europe.
“Some clients may insist that firms use it to drive cost savings, while others may insist on firms not using it because they prefer the benefit of working with a human, or are concerned about the risks to their data.
“Firms must consider their workforce too. They must balance the needs of current and future employees who are eager to embrace generative AI with those employees who are less enthusiastic.
“In the process, they must also be mindful of the evolving tools they need to compete in the marketplace.”
Firms forge ahead with generative AI
Many firms have already been navigating these opportunities and challenges in real time with their own generative AI tools.
Allen & Overy, for example, released its Harvey model in early 2023. Travers Smith developed a ChatGPT-like tool that it is using internally and has made available to other firms.
Specifically, the legal technology team at Travers Smith built a chatbot, YCNBot, as an alternative to ChatGPT. While the firm isn’t using YCNBot for client work yet, it is encouraging its lawyers to experiment with it internally and provide feedback.
The firm made the code downloadable via GitHub and free through an open-source licence. This allows other firms to use the code to plug into the application programming interface of Microsoft and OpenAI, then tailor it to their own needs – including enhanced controls around compliance, security and data privacy.
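To illustrate what "plugging into the API and tailoring it with compliance and data-privacy controls" can look like in practice, here is a minimal, hypothetical sketch – not YCNBot's actual code. It assumes an OpenAI-style chat-completions request body; the model name, system prompt and client-name redaction rule are all placeholder assumptions added for illustration.

```python
import re

# Hypothetical compliance layer: redact known client identifiers
# before any prompt text leaves the firm's own network.
CLIENT_NAME_PATTERN = re.compile(r"\b(Acme Corp|Globex Ltd)\b")  # placeholder matter names

def redact(text: str) -> str:
    """Replace known client identifiers with a neutral marker."""
    return CLIENT_NAME_PATTERN.sub("[REDACTED CLIENT]", text)

def build_chat_request(user_prompt: str, model: str = "gpt-4") -> dict:
    """Assemble a chat-completions-style request body after redaction."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are an internal legal research assistant."},
            {"role": "user", "content": redact(user_prompt)},
        ],
        "temperature": 0.2,  # a lower temperature keeps answers more conservative
    }

request = build_chat_request("Summarise the indemnity clause in the Acme Corp contract.")
print(request["messages"][1]["content"])
```

The point of the sketch is the order of operations: the firm's own controls run first, so sensitive text is transformed before it ever reaches a third-party API.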
Generative AI tools are triggering the creation of new roles within firms as well – and signalling a new commitment to the ongoing development and refinement of the technology.
Think of AI as amplified intelligence
Firms are treating generative AI as a powerful tool that serves them best when combined with human oversight and contributions. But it’s difficult to regulate a technology that is evolving faster than humans can understand it.
Law firms, which are in the business of defining rights and responsibilities and protecting against risk, are in an important position when it comes to identifying the pitfalls of overreliance on AI.
AI impacts more functions of organisations than we may even realise, so firms will need to continuously recalibrate their efforts to monitor it.
The regulation of AI applications isn’t simply about getting legal facts right. There are wider-ranging consequences that may impact how well a firm keeps its commitments to stakeholders and how the firm’s leadership and workforce evolve over time.
For instance, Amelia Kallman, a futurist who addressed Travelers legal clients at a conference earlier this year, says many of her client firms have set goals to be carbon neutral by 2030, but they haven’t figured AI’s carbon footprint into those goals – and its footprint is substantial.
"The more powerful AI is, the more energy it requires,” she said. “While data is still being collected, research indicates a generative AI search requires four to five times more energy than a regular search-engine query.
“The energy it took to train GPT-3 with 175 billion parameters was equivalent to driving 123 gas vehicles for one year, generating 552 tons of CO2. Comparatively, GPT-4 was trained with over a trillion parameters.
“I believe that while it will inevitably contribute to our footprint, it is also the same technology that will help us to solve some of our greatest climate challenges. It’s up to us to be strategic and responsible today to ensure sustainability for tomorrow."
Within firms, AI use also has the power to impact the makeup of the organisation’s leadership and overall workforce.
A study by Revelio Labs has found that AI is set to disproportionately replace jobs held by women – largely because women are currently less likely to hold leadership and decision-making positions.
If women or other people in underrepresented groups aren’t at the table when firms are discussing the potential applications of AI across the business, their concerns are more likely to be overlooked.
But this isn’t inevitable. Kallman said: “Significant evidence shows diversity is an advantage, and ultimately the people at the end of these decisions can choose not to let this technology set us even further back in terms of equality and diversity in the workplace."
Maximise value, minimise risk
So how can firms best step into this new territory and embrace the opportunities it holds – while appreciating the risks it can introduce to legal work?
Kallman suggests firms start by identifying use cases where AI can accelerate, enhance and improve one aspect of their business.
Then they should assign a team to manage the planning, implementation and monitoring of the technology’s use throughout a specified life cycle.
The next step is establishing an internal governance strategy that gives the firm a means of auditing the internal use of AI and retaining human oversight. This could be an IT strategy that builds a portal or an internal management system.
It should be consciously aligned with the firm’s brand values so it accurately reflects and preserves the organisation’s culture, while also considering the needs of clients and employees and managing the risks posed by third-party suppliers.
Having such a framework in place for monitoring AI applications can help a firm to more confidently embrace the opportunities generative AI presents for increasing profits and reducing expenses, while also helping it keep an eye on the evolving risks it may pose.
“Generative AI is ushering in new ways of working for law firms and we’re interested in how firms are adapting to these changes and preparing for the future,” Glynn said.
“We’re not necessarily expecting them to be at the front of the pack in adopting new technology, but we’re also not expecting them to turn off the lights and shut the doors in response to these changes, because that’s not a good approach to risk either.
“It’s about proceeding with change but maintaining risk awareness – much like we would approach any new way of working.”