Not Using AI Could Become a Matter of Negligence

An AI judge's hammer.

In episode 403 of our For Immediate Release (FIR) podcast, the long-form monthly edition for April, Shel Holtz and I discussed a range of compelling subjects over the course of an 82-minute conversation.

I’d like to talk about one of our topics in particular: the remarkable keynote address that Sir Geoffrey Vos, Master of the Rolls, Head of Civil Justice, and the second-most senior judge in the UK, gave to the Manchester Law Society’s AI Conference in March. The conference centred around the theme “AI: Transforming the Work of Lawyers and Judges,” highlighting the increasing importance of artificial intelligence in the legal sector.

Sir Geoffrey Vos

Known for his advocacy for digital justice and technological advancement in the legal field, Sir Geoffrey emphasized the urgent need for legal professionals to embrace AI technologies. He stated starkly how important he considers it that lawyers and judges get to grips with new technologies, especially AI, underscoring that the legal profession is not immune to the transformations AI is bringing to sectors across the globe.

He used several images as visual support for his speech, all of which he created using ChatGPT Plus and DALL-E 3. Unfortunately, none of those images appears to have been made publicly available. I was particularly keen to see one he referenced – a vivid AI-generated image depicting what an AI-enabled courtroom might look like, symbolizing his forward-thinking approach.

During his speech, Sir Geoffrey outlined several ways AI can assist legal professionals, from legal research and drafting documents to more complex tasks like legal analysis and practice management.

His arguments will resonate far outside the legal profession as much of what he says is also applicable in, say, public relations, advertising, or accounting. For example:

First, clients will not want to pay for what they can get more cheaply elsewhere. If generative AI can draft a perfectly serviceable contract that can be quickly amended, checked and used, clients will not want to pay a lawyer to draft one instead.

Secondly, in a similar vein, if AI can summarise the salient points contained in thousands of pages of documents in seconds, clients will not want to pay for lawyers to do so manually.

Thirdly, and perhaps more importantly, AI is not only quicker, but may do some tasks more comprehensively than a human adviser or operator can do.

It’s not far-fetched to suggest that someone providing professional services to clients could be held to account for negligence for not using AI, a point Sir Geoffrey makes clear:

Use [of AI] may rapidly become necessary in order to perform workplace duties. One may ask rhetorically whether lawyers and others in a range of professional services will be able to show that they have used reasonable skill, care and diligence to protect their clients’ interests if they fail to use available AI programmes that would be better, quicker and cheaper.

He also addressed potential risks, such as the misuse of AI and the importance of maintaining confidentiality, especially when using large language models like ChatGPT. His perspective is clear: while AI is a powerful tool, it requires careful and responsible handling to ensure it benefits the legal system without compromising ethical standards or client trust.

Another critical point Sir Geoffrey raised was the potential for AI to necessitate changes in common law principles due to its profound impact on legal processes. He speculated that in the future, AI might even be involved in certain judicial decision-making processes, albeit under strict human supervision and with safeguards like the right to appeal to a human judge.

In other words, an AI could one day play a role in court cases, including acting as a judge. Far-fetched? I don’t think so. But you decide!

The implications of AI in the legal sector are vast and multifaceted. Sir Geoffrey Vos’s speech serves as a clarion call for the legal profession to not only adopt AI – “get with the programme”, as Sir Geoffrey noted – but to do so thoughtfully and ethically, ensuring that its integration supports justice and adheres to the highest standards of the profession. His vision for a technologically advanced legal system aligns with broader movements towards digital transformation in various other sectors.

The transcript of Sir Geoffrey’s speech is on the Judiciary website, and you can download a PDF copy.

FIR podcast episode 403

This episode also explored other significant topics, such as whether marketing requires its own ethics standard for AI, reflecting the broader implications of AI across various professional fields. The conversation about ethics in AI is crucial, paralleling the concerns Sir Geoffrey Vos raised about AI’s integration into the legal profession. We talked about the rise of a phoenix as Peter Shankman’s HERO made its entrance. We shook our heads at the notion of a beauty pageant featuring AI-generated babes, and lauded Unilever for its pledge not to use AI models in lieu of real women in its advertising for Dove.

And there’s more, including Dan York‘s technology report.

You can listen to the overall discussion right here. If you don’t see the embedded audio player below, listen on the FIR website or via your favourite podcast app.

(Image at top by Conny Schneider on Unsplash.)


Update, 2 May 2024: Here’s a salutary lesson to learn about verifying the legitimacy of AI-generated content before you share it with anyone. Especially if you’re a lawyer who’s about to use AI-generated content in a court case.

Reported in the Daily Mail, and amplified widely by media in the U.S., this is about a Colorado attorney who was suspended from the bar last year and fired from his firm for using ChatGPT in court after the AI cited fake cases.

As the Mail reports, much of this was clearly on the attorney’s mind when he spoke about his reasons for using ChatGPT.

But his egregious error was his utter failure to read or verify any of the cases the AI turned up to ensure the accuracy of the AI’s work. Saying he was “overwhelmed with his caseload” just does not cut it as an excuse for not doing either of those things. And so he suffered extreme consequences.

Add this to your to-do list when considering what your AI has produced for you:

  • Always read the content it has created. Duh!
  • Always carry out due diligence on the accuracy of what the content is saying.
  • Always review every source cited by the AI (very easy if you use Perplexity, which always includes source links) or conduct more research on the content to find impeccable sources you can cite. Your AI can help you with this.

While the Mail’s report focuses on the lawyer being fired because he used ChatGPT, I think the actual reason is that he did not verify any of the AI-generated material and what he submitted included false information. Negligence, at least. That was his downfall.

Don’t make mistakes like the Colorado attorney.

(I first published an earlier version of this update on LinkedIn.)

Neville Hobson

Social Strategist, Communicator, Writer, and Podcaster with a curiosity for tech and how people use it. Believer in an Internet for everyone. Early adopter (and leaver) and experimenter with social media. Occasional test pilot of shiny new objects. Avid tea drinker.
