Risk Outlook report: The use of artificial intelligence in the legal market

Artificial intelligence (AI) is not a new subject in the Risk Outlook. We have discussed its use and potential in multiple reports, including the 2018 Outlook on technology and legal services. However, the systems available to firms are currently developing and spreading rapidly. This has been particularly notable with the new 'generative models', such as ChatGPT. These use machine learning and huge amounts of data to predict, summarise and generate content.

Dedicated AI for legal work was once accessible only to the largest firms. This is no longer the case. There is an increasing range of commercial products, making it easier for smaller firms to benefit. And several systems are available online for anyone to use, at very low cost. Staff might use these occasionally and casually, even without the firm formally adopting AI as part of its work.

The use of AI is rising rapidly. At the end of last year, reportedly:

  • three quarters of the largest solicitors’ firms were using AI, nearly twice the number from just three years ago
  • over 60% of large law firms were at least exploring the potential of the new generative systems, as were a third of small firms
  • 72% of financial services firms were using AI.

Anecdotal evidence suggests that the use of AI in small and medium firms is also rising.

This growth is predicted to increase annual global gross domestic product by 7% over the next ten years, as well as driving radical innovation. Legal services is likely to be one of the most affected sectors. The government expects its proposals on AI regulation to encourage investment into the UK, supporting growth.

In most systems already in use which employ AI, the technology acts to support and improve the work of humans. Where firms are using such technology, there are signs that increasing familiarity is not only helping them to use it effectively, but also easing concerns that AI could in some way replace humans. Most surveyed workers say that AI has improved both their performance and their working conditions.

The cost and speed benefits these systems bring could be a major advantage to firms that adopt them. This may particularly be the case in the current hard economic times.

Ever more technologically savvy consumers will increasingly expect firms to use technological tools that improve services. This will include AI tools, especially as many of those consumers already use such tools in their own lives.

Many firms might be unsure about which systems could benefit them, and about how to use them safely. We also know some are uncertain about how to make sure they are complying with their regulatory obligations when automating work. That reasonable uncertainty may be holding some firms back from gaining the benefits that new systems can bring.

Our regulation focuses on the outcomes that firms achieve, and not necessarily on the specific systems they use to achieve them. We do not specify the technologies that firms should or should not adopt, nor are we able to recommend individual providers or software. It is not part of our role as a regulator to do so.

We do though seek to create an environment within which firms can confidently take advantage of any technologies that will help them to deliver improved services to the public. At the same time, we work to ensure that public protection is maintained during the development and deployment of any new advances.

What do we mean by AI?

When we refer to AI, we are not talking about computers that ‘think’ in the way that people do. Such 'artificial general intelligence' does not yet exist, and may never do so. Instead, we are talking about a range of systems that use statistical models to make automated decisions or predictions.

The Information Commissioner’s Office (ICO) defines AI as algorithmic systems that solve complex tasks by carrying out functions that previously required human thinking. The proposed national AI strategy describes two ways AI differs from ordinary IT systems:

  • Adaptivity: AI can make inferences that it was not explicitly programmed to produce. This means that it can be harder to explain the system’s decisions.
  • Autonomy: some AI systems can make decisions without the express intent or ongoing control of a human. This can make it harder to say who is responsible for the system's outputs.

AI is already in everyday use. The predictive text on a smartphone keyboard, for example, is the result of an AI language model.
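
To illustrate the principle, the short sketch below (our own illustration, with invented training text) shows the simplest possible 'next word' predictor: it counts which word most often follows another. This is the statistical idea behind predictive text, scaled down enormously.

```python
from collections import Counter, defaultdict

# A toy bigram model: for each word, count which words follow it in the
# training text, then suggest the most frequent follower. Real keyboard
# models are far larger, but the statistical principle is the same.
training_text = (
    "the client signed the contract and the client returned the contract "
    "to the firm and the firm filed the contract"
)

follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Suggest the most likely next word, as a predictive keyboard would."""
    followers = follow_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "(no prediction)"

print(predict_next("the"))     # 'contract' - the most common word after 'the'
print(predict_next("client"))  # 'signed' (ties broken by first occurrence)
```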

At our 2023 SRA Innovate event, our Lead Data Scientist delivered a short 'What is Artificial Intelligence?' presentation. In it, he explains in plain English how AI works and what it does, and gives examples of how it is already being deployed in the legal sector.


There are many potential uses for AI in the legal sector. Some of these directly engage with customers, such as chatbots. Others support the internal function and operation of a law firm, for example in contract generation, financial management or business planning.

LawtechUK uses the following categories when describing the types of AI on offer in the legal sector. It is worth remembering that individual systems can include aspects of several, or all, of these at the same time.

Risk identification and prediction

This is probably the most common current use of legal AI. It automates routine compliance tasks such as those that can support part of the anti-money-laundering (AML) checks. More advanced uses include analysing cases to predict the chance of success.

For example:

  • One provider claims to have a 90% accuracy in predicting case outcomes.
  • A firm has introduced a system to automate insurance claim decision making. Built so users can understand its reasoning, it takes a third less time than a human to reach decisions. It is said to have reduced the cost of processing claims by 77%.
  • A global bank introduced AI to review compliance issues in sales of its products. Before introducing the system, their team of 120 staff could sample at most 15% of their transactions. With the system, they could review all files in real time.

Administration

These systems automate routine tasks, such as those which involve gathering and reviewing information from existing or potential clients.

For example, there is an increasing range of legal chatbots. They can respond quickly to enquiries from potential clients and potentially offer a 24/7 service for answering common legal questions, gathering and sorting information and triaging cases for seriousness and urgency.

Profiling

This involves tasks such as identifying consumers’ understanding, classifying documents or prioritising cases.

Examples include:

  • A commercially available system profiles legal documents for clarity, suggesting how firms could modify them to communicate better with their intended audience, improving both efficiency and the client experience.
  • The healthcare sector has introduced patient experience platforms that automatically analyse the results of surveys. This helps providers to see patterns in people’s experiences and identify areas for improvement.

Search

These systems automate work such as document discovery or identifying precedents for litigation, providing an efficient first step for law firms when preparing cases or considering their approach.

A growing range of products is coming to market that can help automate the review and management of contracts. As an example, one program allows commercial clients to manage their own contracts, automatically reviewing them against market norms and summarising their contents. Its designers claim that it can save three or more hours per contract signing/renewal for the client.

Text generation

This involves directly producing content, such as contract drafting, client letters or any other form of written communication or document. Generative systems such as ChatGPT can also automatically summarise online information and draft basic documents on request.

Examples of use in the legal sector include:

  • A system used by one firm can follow instructions in natural language for comparing legislative requirements, drafting documents or preparing case descriptions.
  • The Copilot system that Microsoft is building into many of its products will carry out tasks such as automatically taking minutes from videoconferencing or producing presentations. It will also allow firms to set bespoke ‘house styles’ and drafting principles, which it applies whenever anyone drafts documents on their systems.
  • Other large IT providers such as Google and Apple are introducing similar systems, and these are already being adopted by UK law firms.

AI has very high potential to help firms, consumers and the wider justice system. Its increasing availability makes it easier for firms to use it to provide services more affordably, effectively and efficiently than before. Used creatively in ways that suit individual firms’ needs, it might also help them to develop new business models that could not exist without it, and even contribute to career satisfaction.

As AI develops and offers more practical and commercial benefits, and as consumers become increasingly comfortable with its use, the advantages of using it will continue to grow. The risk to firms might not come from adopting AI, but from failing to do so.

Speed and capacity

If used effectively, AI has the potential to let you deliver certain tasks, typically administrative ones, more efficiently. In doing so, it can also free up resources to focus on more challenging, labour-intensive and potentially complicated work.

Systems such as automated know-your-client tests and AI document searching can greatly speed up work. In large document discovery exercises, for instance, they can reduce the time needed from weeks to seconds while also delivering better accuracy. Firms using these systems are reported to benefit from the increased productivity.

By automating time consuming routine tasks and basic document drafting, solicitors should be able to apply their experience and knowledge where it is really needed. This could be particularly beneficial for small firms where senior practitioners do not have support from an administrative team.

Cost savings

Automating administrative tasks or certain aspects of cases can lower the time spent, and therefore lower the cost of the work, on each case. This could help both firms and consumers. It could also make legal services more accessible for those with lower incomes and small businesses.

As the use of AI case prediction spreads, firms are likely to be able to use it to demonstrate the likely result of claims, particularly where systems are capable of showing their reasoning. This may help firms to plan and prioritise more effectively, for example by better identifying in advance cases where parties should be encouraged to settle earlier, reducing cost liabilities for all involved.

Transparency

Although it is not always easy, it can be possible to show how an AI algorithm is reaching its decisions. This is not possible with a human mind. As such, wider use of AI could help to make legal reasoning clearer.

Firms that use well-audited AI built around transparency might be able to help support public understanding of legal services. This could also reassure consumers.

Automated translation has potential to make legal processes more transparent to people whose first language is not English. There is an increasing range of these systems. As these become more available and capable, they will help clients and firms to better understand and communicate with each other.

Skill development

Solicitors do not have to be programmers to work with AI. However, they do need some understanding of how their systems work. Firms that use AI are more likely to have multi-disciplinary teams that include IT specialists.

This exposure gives solicitors the opportunity to develop their skills and experience, working directly with a wider range of professional experts. This will help to increase individuals’ confidence in using advanced technologies, and might help with career satisfaction and professional development.

New business models

Adopting AI can improve firms’ existing operations. However, it can also help them structure their business in new ways. For example, AI chatbots can help firms provide services to clients at times when staff would not otherwise be available. An AI system developed internally for a firm’s own use could prove to have additional value as a service provided to others under the firm’s oversight.

The cost savings noted above, and in other business processes, can allow firms to reorganise in ways that better serve their clients’ needs and support their staff. These ways might otherwise have been unaffordable.

Just as with humans, AI can have biases. If not spotted and corrected, these could lead to unfair or incorrect outcomes. For example, any AI bias in:

  • criminal litigation could lead to miscarriages of justice
  • recruitment systems could harm efforts to encourage diversity.

Problems can appear when AI follows the wrong patterns in data. For example, the information used to train an AML algorithm might feature geographic areas with concentrations of higher risk businesses. If some of those locations also have different demographic patterns to the average for the country, then the algorithm might take that demographic, rather than the types of business present, as the risk factor to apply. This could lead to inadvertent discrimination.
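
The deliberately simplified sketch below, using invented data rather than any real AML system, shows how this can happen: when the true risk factor (business type) clusters in one area, a model that only sees location learns the area itself, sweeping in low-risk clients.

```python
# Toy illustration of proxy bias (invented data, not a real AML model).
# The true risk driver is business type, but risky businesses cluster in
# area "A". A naive model keyed on location learns the area instead, and
# then flags every client there, including the low-risk ones.
clients = [
    # (area, business_type, actually_high_risk)
    ("A", "cash-intensive", True),
    ("A", "cash-intensive", True),
    ("A", "retail", False),
    ("B", "retail", False),
    ("B", "retail", False),
    ("B", "cash-intensive", True),
]

def risk_rate(key_index: int) -> dict:
    """Observed high-risk rate grouped by one feature (area or business type)."""
    totals, hits = {}, {}
    for row in clients:
        key = row[key_index]
        totals[key] = totals.get(key, 0) + 1
        hits[key] = hits.get(key, 0) + int(row[2])
    return {k: hits[k] / totals[k] for k in totals}

print("Rate by area:         ", risk_rate(0))  # ~{'A': 0.67, 'B': 0.33}
print("Rate by business type:", risk_rate(1))  # {'cash-intensive': 1.0, 'retail': 0.0}
# A model using only 'area' would wrongly treat the retail client in area A
# as high risk, and miss the cash-intensive client in area B.
```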

Another cause is social biases in the reasoning behind precedents and legislation. This could be harder to control. In the US, mortgage approval algorithms were 40% more likely to deny applications from minority ethnic borrowers. That was not because of any current risk in those applications, but because the data used to train the system was based on historical, racially biased decisions.

Errors

All computers can make mistakes. AI language models such as ChatGPT, however, can be more prone to this. That is because they work by anticipating the text that should follow the input they are given, but do not have a concept of ‘reality’. The result is known as ‘hallucination’, where a system produces highly plausible but incorrect results.

There have already been incidents where AI drafted legal arguments have included non-existent cases. This might have happened because the system had learned that legal arguments include case and statute references in specific formats, but not that those references needed to be genuine.
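
One practical safeguard is to check every citation in AI-drafted text against an authoritative source before relying on it. The sketch below is a minimal illustration of the idea, using invented case names and a deliberately rough citation pattern; a real check would query a recognised law report database.

```python
import re

# Extract citation-like strings from AI-drafted text and flag any that do
# not appear in a verified list. Everything here is invented for
# illustration; only the general approach is the point.
verified_citations = {
    "[2020] UKSC 1",
    "[2019] EWCA Civ 100",
}

draft = (
    "As held in Smith v Jones [2020] UKSC 1 and confirmed in "
    "Brown v Green [2023] EWHC 999, the duty applies."
)

# Matches strings like "[2020] UKSC 1" - a rough sketch, not a full parser.
citation_pattern = re.compile(r"\[\d{4}\]\s+[A-Z]+(?:\s+[A-Za-z]+)?\s+\d+")

for citation in citation_pattern.findall(draft):
    status = "verified" if citation in verified_citations else "NOT FOUND - check manually"
    print(f"{citation}: {status}")
```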

These errors could lead to:

  • consumers paying for legal products that are inaccurate or do not achieve the results that they intended.
  • firms inadvertently misleading the courts.

And the impact on affected people can be severe. Statistical errors by human witnesses have led to miscarriages of justice in the past, and there is evidence that people may place more trust in computers than in humans. 

Scale

The speed that makes AI so useful can also make problems worse. Errors by an AI system could affect many more people than errors by a single person could. This could lead to widespread harm to large numbers of clients, as has already been seen in group actions. The high output of an AI system also makes it harder to supervise effectively. Firms will need to maintain processes that let them manage this issue.

Confidentiality and privacy

Firms must use AI in ways that protect sensitive information.

The ICO is clear that consumers’ data protection rights remain when their information is used for training or operating an AI. Firms adopting AI will need to make sure that their use of it protects confidentiality and legal privilege. Particular threats include staff entering confidential or privileged information into public online systems, and client data passing to third-party providers without adequate safeguards.

Accountability

As with any other technology or system in your firm, you will remain responsible and accountable for the outputs from AI you are using. For example, if you use a third party chatbot to provide initial legal advice, you remain responsible for any errors in that advice.

Firms will need to make sure they have systems that allow them to meet these responsibilities. They will also need to make sure that clients are suitably informed of how AI is involved in their cases.

Regulatory divergence

Many jurisdictions are exploring how to regulate AI. The UK’s proposed approach to this is fairly light-touch, and does not impose different rules on different types of systems. This is not the case everywhere. The EU’s AI Act, for example, will impose stricter restrictions on some AI uses than on others. At the time of writing, the US has just begun considering its own regulatory approach.

Firms that operate internationally will need to manage these regulatory differences and advise their clients appropriately, noting that regulations might develop quickly.

Crime

Firms need to be aware that AI has the potential to help criminals as well as legitimate users.

AI can be used to create highly realistic ‘deepfake’ images, and even videos. Combined with AI-assisted imitation of voices from short samples, this is likely to make phishing scams harder to recognise.

In the same way, AI might be used to create false evidence. There has already been at least one case of a respondent casting doubt on the other side’s case by suggesting that evidence against them was falsified in this way.

We will continue to track developments and warn firms and consumers about scams that come to our attention.

There is a range of issues that could potentially interfere with firms’, and the market’s, ability to adopt effective legal AI. However, there are also remedies that can help firms of all sizes overcome these, ranging from developing a greater technological or regulatory understanding to exploring collaborative or creative approaches.

Uncertainty

Many firms and individual solicitors are uncertain about whether they should adopt AI, or about how to manage the risks. Others are unsure whether their clients would accept them using these technologies. That doubt could hold firms back from taking up systems that could help them. As many new AI systems have very broad potential uses, it might be harder for firms to identify specific ways in which they could apply them to their own work.

As with any other new system, reading published reviews could help in seeing what other firms are already using and in identifying which products might be worth considering.

Our Risk Outlook reports on this topic show how firms are safely benefiting from new technology. And the end of this report lists further sources of help.

Cost

For many small firms in particular, cost is a significant barrier to adopting new technologies. The expense of purchasing a new system is only part of this. Acquiring the skills and expertise needed to fully understand and use any products can also carry costs, as can the ongoing support and maintenance of the system.

There are many sophisticated and expensive AI products on the market which are often aimed at, and in many cases designed specifically for, large corporate businesses. But increasingly we hear from small firms who are using a combination of ‘off the shelf’ and generic AI technologies to help their business and clients.

Firms can help to control their own costs in adopting AI by understanding their own business needs first, and then considering if AI may play a role in addressing these. That knowledge can help in choosing an appropriate system that provides the capabilities that are needed.

We also increasingly hear of firms of all sizes collaborating with partners within and outside the legal sector to trial and develop the use of AI and technology. For any firms interested in understanding or making greater use of technology, such collaborative environments can prove a highly useful and productive place to start.

Data skills

Training and using AI effectively requires careful and sensitive use of data from both public and private sources. If firms do not have access to the right skills to do this, they will not be able to gain the best results from the systems they use.

The growing range of commercial systems will increasingly help with this, as it will be less necessary for firms themselves to have the skills needed. Firms will still need to be able to understand how data is being used and to be able to supervise the operation of systems.

It is important to ask the right questions to get the best result from generative AI systems such as ChatGPT. Such ‘prompt engineering’ is a skill that can be learned and there are a range of online resources that could help.
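
As a minimal illustration of the idea, the sketch below builds a structured prompt from labelled parts. The labels and wording are our own suggestions rather than any standard, but separating the role, context, task, constraints and output format generally produces more reliable answers than a single vague question.

```python
# A sketch of structured prompting. The section labels are our own
# suggestions, not a standard; the point is that a clearly structured
# request gives a generative system far more to work with.
def build_prompt(role: str, context: str, task: str,
                 constraints: str, output_format: str) -> str:
    """Assemble a prompt from clearly labelled sections."""
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Constraints: {constraints}\n"
        f"Output format: {output_format}"
    )

print(build_prompt(
    role="You are assisting a solicitor with a plain-English summary.",
    context="The attached clause governs termination of a commercial lease.",
    task="Summarise the clause for a client with no legal training.",
    constraints="Do not give advice; flag anything you are unsure about; "
                "cite nothing you cannot verify.",
    output_format="Three short bullet points.",
))
```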

Data availability

AI needs data to function. But much legal information is still not available digitally. Ongoing efforts to digitise information such as court findings and statutes will increasingly help to overcome this barrier.

Only the largest firms will hold enough data to train their own systems. Again, the increasing availability of commercial AI will make it easier for smaller firms to adopt new technologies. Controlling the data that a system accesses can be a useful tool in avoiding hallucinations and other problems. However well controlled and understood a system is, firms will still need to check its outputs for accuracy.

Insurance

As a relatively new and rapidly developing tool, the insurance market’s approach to covering the use of legal AI is still emerging. Firms will need to engage closely with their insurers to make sure that misunderstandings do not cause unnecessary costs.

AI must operate effectively, accurately and within the law. The proposed national AI strategy sets out five principles for this. The following tips, grouped around those principles, should help you decide how to introduce and supervise systems. There are links to more detailed guidance at the end of this report.

Safety, security and robustness

  • Choose systems carefully to make sure that they will meet your needs:
    • As with any form of IT, it is important to choose the right system for your firm.
    • Reading published reviews can help with this choice.
    • Make sure that it is clear when technical errors in the system are the responsibility of the provider or of the user.
  • Test all systems thoroughly before bringing them into use.
  • Train and supervise your staff in what is and is not a safe and acceptable use of your systems and other uses of AI.
    • Even if you have not adopted AI as part of your business, staff might still want to use ChatGPT or similar systems. You will need clear rules covering this.
    • Make sure your supervision and guidance cover the difference between casual use of online AI such as ChatGPT and using systems you have formally adopted.
    • Some firms prohibit all staff use of online systems beyond their direct control. Others have separate rules regarding the use of confidential information with different types of AI.
  • Encourage staff to practise asking effective questions of AI, and if possible provide training, to gain the best results.

Transparency and explainability

  • Make sure that you, and staff who can access the system, can understand how it operates and makes decisions, and that you can explain this to clients.
  • Tell clients when you will be using AI with their case, and how it will operate.
    • How you do this is a decision for your firm to make, based on your specific circumstances.
    • The ICO recommend considering in advance the different types of explanation you will need to give, based on the type of use and impact on the individual.
  • Select and process data in a way that supports your ability to explain how and why you are using it.
  • Document each stage of the design and running of your system so that you can fully explain how it works.

Fairness

  • Be sure that any AI you use is only processing personal data in ways that people can reasonably expect.
  • Monitor the outputs of an AI carefully to make sure it is not producing biased or inaccurate outcomes.
    • Be aware that AI bias can be subtle, and that bias that did not exist when the system was first introduced might develop as the system’s models evolve.
  • Make sure that you introduce, train and operate AI in a way that protects confidentiality.
    • Be particularly careful when moving data to an online system.
    • Consider using ‘synthetic data’ – data that follows the same patterns as your actual client data but without their exact information – when training a system (see the sketch after this list).
    • Take care to avoid any breaches of client confidentiality when moving information between your firm and the provider.
  • Follow all data protection principles when operating any AI, remembering that all normal rules still apply.
    • This includes the right to have personal data removed, including from training models.
    • Remember that data protection law applies when you use the AI model to make decisions or predictions about an individual, even if their personal data was not part of the model’s training information.
    • Be careful when transferring data to a cloud based system if you are not certain in which jurisdiction the data will be held.
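
As a minimal illustration of the synthetic data point above (with invented fields and records), the sketch below draws new records that follow the same per-field patterns as the real ones without copying any client's details. Note that this simple approach preserves each field's distribution but not the relationships between fields; dedicated synthetic data tools address that.

```python
import random

# A minimal 'synthetic data' sketch with invented example records: sample
# new records that follow the same per-field patterns as the real ones,
# without reproducing any actual client's details.
random.seed(42)  # fixed seed so the example is reproducible

real_records = [
    {"matter_type": "conveyancing", "days_to_complete": 60},
    {"matter_type": "conveyancing", "days_to_complete": 75},
    {"matter_type": "probate", "days_to_complete": 120},
    {"matter_type": "litigation", "days_to_complete": 200},
]

matter_types = [r["matter_type"] for r in real_records]
durations = [r["days_to_complete"] for r in real_records]
mean_duration = sum(durations) / len(durations)

def synthetic_record() -> dict:
    """Draw each field independently from the patterns in the real data."""
    return {
        "matter_type": random.choice(matter_types),
        "days_to_complete": max(1, round(random.gauss(mean_duration, 30))),
    }

print([synthetic_record() for _ in range(3)])
```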

Accountability and governance

  • Supervise AI systems, and staff use of them, to make sure that they are working as expected and providing accurate results.
    • Do not trust an AI system to judge its own accuracy, remembering that current AI does not have a concept of truth.
    • One firm suggests thinking of such systems as ‘bright teenagers, eager to help, who do not quite understand that their knowledge has limits’.
    • If you ask a system to summarise online information, you can ask it to give references. This should make it easier to check it is not hallucinating information.
    • Use systems to speed up and automate routine tasks, supporting rather than replacing human judgment.
  • Have supervision systems that can cope with the increased speed of AI.
  • Remember that you cannot delegate accountability to an IT team or external provider: you must remain responsible for your firm’s activities.
    • As part of this, you need to decide what information you need to give clients about how you are using AI in connection with their cases.
    • It is important that, before deploying any AI system, you fully understand what the technology does, and does not do.
    • You should also consider what processes you need to put in place to check and verify any information or content being produced through the AI.

Contestability and redress

  • Provide routes for people to contest AI decisions they disagree with.
    • Make sure that your records show where information about an individual is based on AI inferences, so that you can respond to requests to counter incorrect conclusions.
  • Make sure that your complaints procedures can handle questions about AI use.
  • You might need to decide whether to tell clients that they can speak to a human before making decisions based on AI information.

Our regulation focuses on the outcomes firms’ actions produce, not necessarily the tools they use to reach them. However, the use of advanced technologies can help firms meet their consumers’ needs more effectively and affordably. As such, supporting innovation and technology is one of our strategic priorities. We want to help firms and consumers safely gain the benefits that AI can bring.

SRA Innovate highlights the work we are doing to support innovation in the legal sector. We already help law firms and lawtech businesses interested in developing or using innovative technologies through a range of support services.

We have also produced a short video on AI to introduce the basic concepts.

In our business plan for 2023-2024, we set out our plan to develop a 'sandbox' for firms to test new technologies, including AI.

The legal market’s use of AI is growing rapidly, and this is likely to continue. As systems become ever more available, firms that could not previously have used these tools will be able to do so. Indeed, it is likely that these systems will become a normal part of everyday life, automating routine tasks. Used well:

  • they will free people to apply their knowledge and skills to tasks that truly need them.
  • they will improve outcomes both for firms and for consumers.
  • the speed, cost and productivity benefits that they can provide could help to improve access to justice.

We intend to retain an open mind on the systems used, balancing protections for consumers with support for innovation. We will produce guidance on specific issues as they come up.

We would like to hear about your views and experiences with AI, through a short online questionnaire. This will help us to make sure our regulation remains proportionate and effective in the face of rapid change.

Various resources give regulatory, legal or policy information on AI. The AI Standards Hub, run by the Alan Turing Institute, hosts a database of AI related standards and policy guidance as well as discussions. It aims to make it easier for organisations to find relevant information in a single place.

The ICO has substantial guidance on AI and data protection. This includes information on:

  • transparency and explainability
  • statistical accuracy
  • fairness
  • the application of individual data rights to AI training.

In addition, the ICO and Alan Turing Institute’s guidance helps organisations to explain AI use to consumers and other affected individuals.

The Bank of England’s discussion paper on AI and machine learning discusses the risks and opportunities of AI for financial services. The points it raises are equally valid for law firms. 

The Digital Regulation Cooperation Forum provides a range of resources, many of which will be helpful for firms. For example, one report discusses the barriers that buyers face in choosing and assessing systems, and how to overcome those.

The International Organization for Standardization (ISO) has published a standard for managing risks from AI. This aims to help users of AI to identify risks, and to integrate risk management into their AI implementation.

The National Cyber Security Centre has produced guidance for firms on current cyber threats.

The government’s report on the capabilities and risks from frontier AI discusses issues such as cybercrime and bias, as well as the benefits that wider use of advanced AI might bring.

Many AI producers such as Google issue their own information and guidance on their products. These can often be generally applicable too.