The allure of using artificial intelligence and generative content tools to grapple with legal documents for court proceedings is strong, but there are significant risks in relying on the information that programs like ChatGPT spit out, writes Jade Carlson, Commercial Litigation Senior Associate at Attwood Marshall Lawyers.
Introduction
ChatGPT and other generative AI tools like Google's Bard, Bing AI and ChatSonic have been the buzz of the tech world for quite some time. But they have now entered the domain of the wider public, hogging headlines and shaking up the content industry in the short time since they became available.
The programs are trained on vast amounts of data and excel at language and sentence prediction, adapting to whatever style you tell them to mimic. As chatbots, they do not think like humans, but they can carry on convincing conversations, giving prompt replies to any question a user puts in.
AI is a powerful tool that can save time and act as an educator of sorts through the summaries it provides. But to fully reap the benefits, users need to be aware of its limitations. Using a program like ChatGPT to generate text for a potential legal dispute, for example, requires extremely careful scrutiny.
Limitations and inaccuracies – ChatGPT
ChatGPT was released to the public in November 2022, and the information it can provide is limited to what was published up to September 2021 (if you are using ChatGPT 3.5). So it will not reflect any legal developments or legislative changes that have occurred in the years since.
It also cannot open or analyse PDF files or other external documents. Instead, ChatGPT generates text-based responses based on the prompts a user puts into the chat box.
Unsurprisingly, ChatGPT’s website notes that it:
- May occasionally generate incorrect information
- May occasionally produce harmful instructions or biased content
- Has limited knowledge of the world and events after 2021
In much smaller text at the bottom of the page, near the chat box where users input their prompts and questions, there is another warning that “ChatGPT may produce incorrect information about people, places or facts.”
Interestingly, it also responded to our dummy question about what types of court-related documents it can help with by adding a disclaimer that while it can “assist with the drafting process, it’s crucial to consult with a licensed attorney to review and finalize any legal documents to ensure they are appropriate for your specific case and jurisdiction.”
Taking heed of these warnings is incredibly important, as ChatGPT has a very public history of filling in the gaps in its knowledge with fabrications that are then presented as fact.
Case study: The ChatGPT lawyer
At the end of May, news broke of a lawyer in Manhattan who had used ChatGPT to draft legal briefs for a personal injury court case, not realising that the generative AI tool had cited entirely fake cases.
The man reportedly thought that ChatGPT operated like a large search engine, and never doubted the citations he received.
Both he and another lawyer at the firm were fined $5,000 for filing the briefs and could face disciplinary action from New York’s bar association, according to reports.
The fallout – and the massive media attention it received – is a warning call to lawyers, certainly, but also to individuals who may be tempted to present the information they receive from ChatGPT as fact.
It has also prompted some judges to take a stand on the use of AI in their courtrooms. In early June, a federal judge in Texas announced that he would require any lawyers who use artificial intelligence in their filings to certify that a human had checked the content for accuracy.
Case study: First defamation case against OpenAI
In the U.S., a man is suing OpenAI, the company behind ChatGPT, for defamation. The suit appears to be the first of its kind.
The man, a radio host named Mark Walters, alleges he is owed damages after ChatGPT falsely accused him of defrauding the Second Amendment Foundation, a pro-gun non-profit. The AI program reportedly gave a journalist the fabricated information as research for a news story about a Georgia court case brought by the foundation.
The generated text reportedly said Walters committed fraud while serving as the foundation’s treasurer and chief financial officer, even though he had never been employed there. When the journalist asked for the paragraphs in the complaint that specifically named Walters, ChatGPT invented a series of them.
Whether Walters is successful will depend on whether his lawyers can show precedent that supports a company being held responsible for its system’s actions – or convince the court to set a new precedent entirely.
Privacy issues
When we asked it about its abilities, ChatGPT boasted that it can assist with the drafting of legal documents such as complaints, motions, statutory declarations, notices of intent to sue or appeal, disclosure requests and settlement agreements. We didn’t test the extent to which it could actually assist with drafting these documents; it may simply have been affirming the question based on what it thought we wanted to hear.
But someone who does not understand the platform’s limitations may, after asking the same question, start feeding the chat box information related to their case. And documents like those are likely to include personal and sensitive information.
Yet, there are significant privacy concerns when it comes to using AI content generators. And those concerns should not be underestimated or ignored.
Any personal information that users put into ChatGPT, for example, may be processed and used by AI models.
Google’s AI tool, Bard, updated its privacy notice on 1 June 2023, telling users not to include confidential or sensitive information in their conversations with the platform.
Bard says that the queries it receives are used to “improve and develop Google products, services and machine-learning technologies.”
Lessons
Because the technology is so new and has spread so rapidly as a hot topic, lawsuits and disputes are bound to keep emerging. Some of them are likely to end in landmark rulings that courts in other countries will look to.
One big drawcard for users is the technology’s ability to mimic different styles for different audiences. It can be very convincing, despite its inability to distinguish fact from fiction. Even when a user asks for clarification, it can respond with further fabricated dates, facts and figures, as seen in the defamation claim above.
With many of these companies now putting disclaimers on their websites that shift responsibility for any content onto the user, the onus is on the user to fact-check and proofread whatever they receive. More importantly, anyone facing a dispute should first seek advice on their legal matter from a qualified and experienced lawyer.
The use of AI has also been rolled out to the drafting of Wills, with some companies that specialise in e-Wills providing each user with a mass-produced questionnaire that AI then uses to structure a Will. However, the product is limited in its ability to deal with all the variations that must be considered as part of the estate planning process. The Will is also not valid until the document is printed and signed before witnesses in accordance with state or territory laws – a fact that many people may overlook.
The disclaimer that appears on most online e-Will websites should also be a red flag. Usually, a disclaimer like the following is stated at the bottom of these types of websites:
“Disclaimer: XXX is a technology platform that allows you to create your own estate planning solutions using our forms and other information. XXX is not a law firm and does not provide legal, financial, taxation or other advice. If you are unsure whether our estate planning solutions are suitable for your personal circumstances, legal advice should be sought from a law firm.”
Attwood Marshall Lawyers – There’s no substitute for expert legal advice, even in the ever-changing digital world
This article is one part of a two-part special on the legal implications of AI. The next will focus on the rise of the Metaverse and the challenges likely to arise in the virtual world around intellectual property and the regulation of virtual assets.
The use of AI is relatively new, and much of the caution so far can be linked to its early problems with accuracy – which is expected to improve over time – and to privacy concerns.
If you are involved in a dispute, our team can help you understand your rights and the best path to take to resolve your matter.
Our experienced Commercial Litigation lawyers can help determine if you may be eligible for compensation for any financial loss you have suffered. To discuss your specific matter, please contact our Commercial Litigation Department Manager, Amanda Heather, on direct line 07 5506 8245, email aheather@attwoodmarshall.com.au or free call 1800 621 071 at any time.