AI bot

I have been off on maternity leave since November 2024 and returned to the office last month. I am ready to get my teeth into things and have found it interesting how much has changed in the legal world, not only in the cases that have been reported and the changes to processes, but also in technology.

At the end of 2022, three quarters of the largest solicitors’ firms were using AI, over 60% of large firms were exploring the potential of generative AI (as were a third of small firms), and 72% of financial services firms were using AI (SRA | Risk Outlook report: The use of artificial intelligence in the legal market | Solicitors Regulation Authority).

This trend has not stopped. AI appears to be rearing its head wherever you look. It is sitting in my inbox, at the bottom of my computer screen and even around the edge of my document whilst I type this. Litigants in person are using generative AI models (like ChatGPT, Copilot and Gemini) to create statements for hearings and legal correspondence and to undertake legal research, with some odd outcomes. For example, in December 2023, a litigant in person in a tax tribunal was found to have used a generative AI tool and to have unknowingly submitted fictional case citations.

We are currently working on a project to keep our IT systems as up to date and efficient as possible across the firm, so AI is again a hot topic. However, the SRA principles are of course fundamental to our work. We must ensure that we comply with those principles, and in a firm like Timms, where we pride ourselves on our client care, our professionalism and our legal expertise, does AI fit?

The types of generative AI that we often see are of course not sentient. They don’t understand the meaning of their responses like a human would. They cannot communicate with a client in the way that a human lawyer would to ensure that principles are fully explained and understood. They can only generate responses based on the data used to train them. Lawyers’ responses are also based on the data used to train them at law school but have the added element of human perception, comprehension and experience in the world and in the legal sector itself.

A 2024 study from Stanford University (Hallucinating Law: Legal Mistakes with Large Language Models are Pervasive | Stanford HAI) found that AI struggles significantly in the legal sector. The AI models ‘hallucinate’ between 69% and 88% of the time when faced with specific legal queries. The study defines a hallucination in this area as ‘the tendency of LLMs [large language models such as ChatGPT] to produce content that deviates from actual legal facts or well-established legal principles and precedents.’ The study also found that case law from lower courts (in the UK, those would be our local court centres such as Derby, Stafford and Nottingham) is subject to more frequent hallucination. These AI models do not have the local knowledge and experience of the day-to-day running of our local courts. They also struggle with the most recent and the oldest case law; hallucination was less common amongst the case law of the latter half of the 20th century. This possibly demonstrates an issue with AI’s ability to keep up to date with the law as it develops and changes.

We also need to be cautious about the information put into an AI tool. We cannot and would not put confidential information into such a tool, as it is not secure.

Whilst AI looks really exciting and powerful at first glance, human judgment and professional advice cannot be replaced. A nice thing to know when returning to work after a long time away!

If you have an issue relating to family law that you would like a human to help you with, please do not hesitate to get in touch with us! We are happy to help make some real, accessible, sense of what can be a complex time both legally and personally.

Telephone: Freephone 0800 011 6666
Email: legal@timms-law.com