Problems with Generative Artificial Intelligence
The Bar Council has recently released guidance titled ‘Considerations when using ChatGPT and generative artificial intelligence (AI) software based on large language models’. The guidance has been produced to assist legal professionals with matters of professional conduct and ethics. It is important to note that the Bar Council has issued a disclaimer stating that the guidance is intended only as assistance, does not bind the BSB or the Legal Ombudsman, and does not constitute legal advice.
This article explores some of the issues with large language models (LLMs) in the legal field, adopting the considerations listed in the Bar Council guidance whilst incorporating some of my own views.
Generative AI is a broad term for systems that can produce text, images, video or audio. An LLM is a type of generative AI that uses deep learning techniques to generate text rapidly. LLMs have broad applicability and are therefore prominent across a wide variety of domains.
ChatGPT is a prominent and ‘advanced’ LLM developed recently by OpenAI. It attracted one million users within five days of launching and amassed 100 million monthly users within two months, making it the fastest-growing consumer application in history.[1]
Issues with AI
Despite being widely used and growing rapidly, LLMs have flaws. King Charles III recently stated at the AI Safety Summit that the risks of AI needed to be approached with ‘a sense of urgency, unity and collective strength’. Below, I explore what I consider to be the main issues for the legal profession.
Hallucinations
Hallucinations occur when an LLM produces an inaccurate response that is not grounded in its training data. LLMs can resort to generating fabricated responses when faced with ambiguous input because of their limited contextual understanding and inability to focus on crucial details.[2]
This is an issue for the legal sector as it reduces the reliability and transparency of LLMs and undermines confidence in anything else the LLM has produced.
Unvetted training data
As mentioned above, LLMs are deep learning systems, which means they must be trained in order to function. The systems are initially trained on huge data sets, but they do not stop developing: anything typed into the system by any user is used to generate new learning content and to ‘teach’ the system.
The issue is that, with millions of people using LLMs such as ChatGPT daily, there is no way to vet what information is processed by the system and then re-presented to another user. All information entered is used to develop and refine the system,[3] which means the system may be trained on inaccurate, biased or confidential information, and inaccurate data could even be entered maliciously.
This lack of control over the training data ultimately calls the data’s validity into question.
Misinformation
LLMs are not systems “concerned with concepts like ‘truth’ or accuracy”;[4] they regularly produce content containing deliberate or unintentional errors.
The Bar Council guidance explores how, in Mata v Avianca, Inc [Civil Action No: 22 Civ 1461], an American law firm was found to have cited six fictitious cases in its submissions, which it had found via ChatGPT. This led to serious financial sanctions for those involved and significantly damaged the firm’s reputation.
Such regular misinformation makes LLM systems problematic for the legal profession, which owes a duty not to mislead the court.[5] The data presented by LLMs cannot be taken at face value; if it is used in any submissions, professionals must ensure that it is accurate and can be substantiated.
Lack of empathy
Oral and written advocacy, common elements of a legal professional’s workload, are skills that require practice. Legal professionals need to focus on presenting persuasive, concise and sharp submissions, which involves varying one’s technique, delivery and emotion. They also need to display empathy towards clients when providing advice and guiding them through the legal process.
LLMs are not human and lack high levels of judgment, creativity and empathy.[6] They do not have a deep understanding of complex legal issues or case law, and it is unlikely that they will be able to replicate such knowledge. This lack of knowledge and empathy is problematic, as it risks clients receiving inadequate guidance and may weaken the persuasiveness of submissions.
Current impact on the profession
It is undeniable that AI systems currently have a greater impact on solicitors’ firms than on barristers’ chambers, especially in relation to tasks such as generating legal documents, extracting data, conducting research, and reviewing contracts and wills. However, the ability of AI to automate certain processes is certainly appealing to all legal professionals.
A study of the LawGeex AI algorithm found that the AI could review non-disclosure agreements in an average of 26 seconds with 94% accuracy, whilst legal professionals took an average of 92 minutes and achieved only 85% accuracy.[7]
Nevertheless, LLMs in their current form present significant risks for legal professionals in upholding their professional duties. Whilst LLMs will undoubtedly speed up many processes and tasks, their output will almost always need to be checked thoroughly for accuracy, and confidentiality must be maintained.
The Bar Council itself has recognised that there is nothing ‘inherently improper’ in using LLMs to assist in legal work.[8] However, the right precautions must be taken. It is too early to predict future developments in LLMs, but an LLM designed specifically to protect confidentiality would substantially reduce the risks for the profession.
LLMs are developing constantly, so it may be wise for legal professionals to begin automating some processes now and to familiarise themselves with the technology. It would be naïve to think that AI will not ultimately become intertwined with the profession.
Conclusion
Overall, the Bar Council guidance is very welcome, providing interesting and relevant ethical and practical considerations for legal professionals to take on board.
Future LLM developments could have serious implications for the legal profession, but legal professionals can be certain of one thing: AI will not be able to replace the emotion and persuasive technique required for submissions any time soon. Further, the human dimension is irreplaceable and is necessary to ensure trust, empathy and understanding with clients.[9]
Legal professionals should aim to use LLMs where it is efficient to do so, but they must develop the skills needed to identify when and where the tools may make mistakes, rather than simply blindly following the output.[10] Further, they must ensure that they maintain control and integrity whilst doing so.[11]
*This article was produced on 5 February 2024. It is important to note that AI is a rapidly growing sector and is constantly evolving.
[1] Ryan Browne, ‘All you need to know about ChatGPT, the A.I. chatbot that’s got the world talking and tech giants clashing’ (CNBC, April 2023) <https://www.cnbc.com/2023/02/08/what-is-chatgpt-viral-ai-chatbot-at-heart-of-microsoft-google-fight.html> last accessed 2 February 2024
[2] Maryna Bilan, ‘Hallucinations in LLMs: What You Need to Know Before Integration’ (MasterofCode, Nov 2023) <https://masterofcode.com/blog/hallucinations-in-llms-what-you-need-to-know-before-integration#:~:text=In%20summary%2C%20LLM%20hallucination%20arises,patterns%20rather%20than%20factual%20accuracy> last accessed 2 February 2024
[3] The Bar Council, ‘Considerations when using ChatGPT and generative artificial intelligence software based on large language models’ (The Bar Council, Jan 2024) <https://www.barcouncilethics.co.uk/documents/considerations-when-using-chatgpt-and-generative-ai-software-based-on-large-language-models/?utm_campaign=2689665_AI%20guidance&utm_medium=email&utm_source=DotDigital&dm_i=4CGD,1LNCX,7BKR9H,7IBO4,1> last accessed 1 February 2024
[4] ibid
[6] Alex Pinsent, ‘AI and the future of the legal function’ (Hedley May) <https://hedleymay.com/ai-and-the-future-of-the-legal-function/> last accessed 4 February 2024
[7] LawGeex, ‘Comparing the Performance of Artificial Intelligence to Human Lawyers in the Review of Standard Business Contracts’ (LawGeex, Feb 2018) <https://images.law.com/contrib/content/uploads/documents/397/5408/lawgeex.pdf> last accessed 1 February 2024
[8] The Bar Council (n 3)
[9] Giulia Gentile, ‘LawGPT? How AI is Reshaping the Legal Profession’ (LSE, June 2023) <https://blogs.lse.ac.uk/impactofsocialsciences/2023/06/08/lawgpt-how-ai-is-reshaping-the-legal-profession/> last accessed 30 January 2024
[10] Pinsent (n 6)
[11] The Bar Council (n 3)