Hooooo boy. You gotta read this concurrence by Judge Newsom about the use of AI in legal opinions. It starts this way:
I concur in the Court’s judgment and join its opinion in full. I write separately (and I’ll confess this is a little unusual1) simply to pull back the curtain on the process by which I thought through one of the issues in this case—and using my own experience here as backdrop, to make a modest proposal regarding courts’ interpretations of the words and phrases used in legal instruments. Here’s the proposal, which I suspect many will reflexively condemn as heresy, but which I promise to unpack if given the chance: Those, like me, who believe that “ordinary meaning” is the foundational rule for the evaluation of legal texts should consider—consider—whether and how AI-powered large language models like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude might—might—inform the interpretive analysis. There, having thought the unthinkable, I’ve said the unsayable. Now let me explain myself.
1. Even for me.
Wowsa. You gotta read the whole opinion. He goes through the pros and cons of using AI and also shows how he used it in this particular case. It's a very interesting read.
Meantime, I asked Co-Pilot (Microsoft's AI program) to write a blog post about Judge Newsom's use of AI. This is what I got back:
Circuit Judge Proposes Using AI-Powered LLMs to Interpret Legal Texts
In a recent concurring opinion during an insurance dispute, U.S. Circuit Judge Kevin Newsom suggested that courts might benefit from using artificial intelligence programs to interpret legal texts. This idea, which he described as “unthinkable,” could revolutionize the way we understand and apply the law [1].
The underlying case involved a dispute between a landscaper and his insurance company over whether his installation of an in-ground trampoline fell under the term “landscaping” in his insurance policy. The district court noted that because the term “landscaping” was not defined in the policy, whether or not the landscaper’s activities were covered by the insurance policy hinged on whether the installation of the trampoline fell within the common, everyday meaning of the word “landscaping.”
Judge Newsom took the opportunity to express his thoughts about artificial intelligence’s possible role in future disputes after “hours and hours (and hours) laboring over the question [of the ordinary meaning of ‘landscaping’ in the context].” He recounted that querying ChatGPT (an AI-powered LLM) about the ordinary meaning of “landscaping” resulted in an explanation that “squared with [his] own impression.” ChatGPT’s definition included “the process of altering the visible features of an area of land, typically a yard, garden, or outdoor space, for aesthetic or practical purposes.” It also listed activities such as planting trees and installing paths and water features as part of “landscaping” [2].
Judge Newsom evaluated the pros and cons of using LLMs to interpret legal texts. Some of the pros included the fact that LLMs are trained on a vast range of ordinary-language data, from literature to online discussions. However, he also noted that LLMs cannot capture “pure offline” usages—those that do not occur online or are not eventually digitized. Despite this limitation, LLMs can understand context, are accessible, and have advantages over other empirical interpretive methods, like dictionary research [2].
In conclusion, Judge Newsom’s proposal opens up exciting possibilities for the legal field. While he never reached a definitive conclusion due to the panel’s “off-ramp” approach, his experimentation with generative AI suggests that LLMs might indeed be useful when interpreting legal texts [3]. It’s a fascinating development that could shape the future of legal analysis.
Disclaimer: The views expressed in this blog post are based on Judge Newsom’s opinion and do not constitute legal advice. Consult a qualified attorney for specific legal matters.
Sources:
[1] Federal Judge Proposes AI to Decode Legal Texts: A Revolutionary Approach or Risky Gamble?
[2] Circuit Judge Proposes Using AI-Powered LLMs to Interpret Legal Texts
[3] Eleventh Circuit Judge Experiments With ChatGPT
One (insurmountable?) weakness of any AI—let’s call it a lack of humanity—is demonstrated in the very title of the AI-generated section of DOM’s post.
As I think many lawyers might, when reading the headline "Circuit Judge Proposes Using AI-Powered LLMs to Interpret Legal Texts," I immediately conjured an image of attorneys with graduate law degrees wielding AI to interpret texts.
Of course, Co-Pilot understands LLMs to be Large Language Models. You have to be a human to grok that, if you're writing for a legal audience like the readers of DOM's blog, “LLM” is going to land very differently and, in this case, confusingly.
To be frank, I find AI stumbles like this a bit reassuring. Not to be TOO Boomerish—and accepting that integrating AI to strengthen and streamline the practice of law is likely beneficial and certainly inevitable—I am grateful for anything that might perhaps keep us from slipping down the slopes of the uncanny valley.
Robert, AI is in its infancy right now. Think about what a kindergartener would write if asked to push out an article.
Wait till Skynet goes live.
From: Chief Judge William Pryor, Jr.
To: The Unwashed
Re: Nothing To See Here
This afternoon I have issued a preemptive ruling, based upon reporting that the Hon. Aileen Cannon issued a paperless order on Tuesday allowing lawyers Josh Blackman, Gene Schaerr, and Matthew Seligman to make oral arguments at a June 21 hearing on a motion to dismiss Trump's 40 federal felony charges. All three lawyers are experts in constitutional law who filed amicus curiae, or "friend of the court," briefs regarding the motion to dismiss. Schaerr and Blackman are representing groups that want the case against the former president to be dismissed, while Seligman is representing groups that say the case should continue.
Because Judge Cannon's order is highly unusual, I anticipate that there will be an "orchestrated campaign" to complain about her actions. Like I implied in my earlier order, we do not care. Therefore, the Court will refuse to accept any complaints on this issue and will assume that the public believes the judge is rigging the system for President Trump. While I am writing, I also will assume that she will take other highly unusual actions that will cause additional orchestrated campaigns against her. Accordingly, the Court will not permit the filing of any complaint against Judge Cannon on any issue related to President Trump.
The flaw behind all this is the implicit belief that there is some sort of objective truth about what the law is, and that cases can be decided in truly objective ways. Once people accept that the realists had it right, and that some layer of subjectivity is always involved in resolving disputes (even while respecting the ideal that judges should act objectively and that we should strive for objective outcomes, however unattainable in a purely objective sense), then we can give up on the fiction that textualism produces true, singularly correct answers.
The point is proven by the dispute between Gorsuch and Kavanaugh in Bostock, and by the other cases that produce disagreements among the self-appointed textualist savants who seek the title of the new Scalia.
I’ve been blogging on autopilot with AI for over a year and no one has noticed.
Next up: AI opening, cross, and closing in cases.
If I could just get AI to answer the 20-plus client texts that come in every weekend, starting with “hey I was just wondering about my case …” and invariably including an attack on the 2B1.1 financial guidelines because “this guy shot 39 people and got two years and this other guy killed 11 people and got three years so why are my guidelines 70-90 months? Can’t you tell the judge that?”