
Court Flags False Claims in AI-Assisted Brief

Written by: Victoria Xikis

In February, the U.S. Court of Appeals for the Fifth Circuit ordered sanctions against attorney Heather Hersh after finding that she used artificial intelligence to craft the majority, if not all, of her appellate brief. The brief was submitted to the court and found to contain AI hallucinations. Ms. Hersh failed to verify the accuracy of the brief’s content and has yet to comment publicly on the matter.[1]

Artificial intelligence (AI) use has expanded beyond private settings and become commonplace in professional ones, including the legal world. AI tools consume data that is already accessible online and convert that information into a concise overview of the prompted topic. Because of the vast amounts of information available and the unreliability of AI tools, anything an AI tool provides must be double-checked and verified for accuracy. That verification is needed to avoid “AI hallucinations,” which “occur when an AI model produces a response that sounds plausible but is factually incorrect, logically flawed, or completely invented.”[2] This is exactly what happened to Ms. Hersh.

Here, Ms. Hersh filed an appellate brief challenging sanctions a district court had awarded to plaintiff’s counsel, Shawn Jaffer of the firm Jaffer & Associates.[3] The underlying lawsuit involved potential Fair Credit Reporting Act violations. The court issued a written “show-cause order”[4] against Ms. Hersh, flagging 16 fabricated quotes and 5 serious misrepresentations of law and fact in her brief.[3]

When asked by U.S. Circuit Judge Jennifer Walker Elrod to explain the inaccuracies, Ms. Hersh claimed that she gathered the information for her brief from publicly accessible cases she believed to be accurate, found through well-known legal databases such as Google Scholar, CourtListener, Justia, and FindLaw, and that she became aware of the inaccuracies only after the court brought them to her attention. Judge Elrod found this response not credible, misleading, and an attempt to evade responsibility, particularly after the court searched the authorities Ms. Hersh allegedly relied on and could not find the information referenced in her brief.[3]

Furthermore, Ms. Hersh admitted to using AI tools for her brief only after being expressly asked, further harming her case. Judge Elrod indicated that had Ms. Hersh been more forthcoming from the beginning about the contents of her brief, reduced sanctions would likely have been imposed and some flexibility granted.[1] Given her evasiveness and refusal to take responsibility, however, stricter punishment was warranted. Under Federal Rule of Appellate Procedure 46(c), a court may discipline an attorney whose conduct is “unbecoming” or who fails to comply with court rules. Ms. Hersh was ordered to pay a $2,500 sanction within 30 days.[3]

Judge Elrod noted that AI hallucinations have become increasingly apparent in court proceedings over the last three years, following the first high-profile case in 2023, in which two New York attorneys were sanctioned for using ChatGPT in a legal brief. In that case, Manhattan U.S. District Judge P. Kevin Castel found that both attorneys acted in bad faith when they used ChatGPT to research a client’s personal injury case against an airline and included false citations the AI tool had fabricated. Both attorneys were questioned, much as Ms. Hersh was, about the reliability of their materials, and neither was forthright about their use of AI. The court ordered both attorneys and their firm to pay a total of $5,000 in sanctions.[5]

To be sure, Ms. Hersh is not the first attorney, nor will she be the last, to submit legal documents with the assistance of AI tools. Nevertheless, much can be done, whether by the states or the federal courts, to implement policies regulating the use of AI tools in court filings. Pursuant to 28 U.S.C. § 2071, the Fifth Circuit had already proposed amending 5th Cir. R. 32.3 and Form 6 to require notice of any AI use in appellate court filings.[6] The proposal was not adopted, but had it been, it would have been the first rule at the appellate level to regulate attorneys’ use of AI in court filings. It would have required attorneys to certify that they either (1) did not use AI, or (2) used AI but reviewed the material and verified its accuracy to avoid AI hallucinations. The rule failed to pass largely because bar members believed existing rules were sufficient to oversee AI technology.[7] Although current policies govern the formalities of court filings, rules specifically addressing AI would undoubtedly be beneficial in this area.

AI has gained a negative reputation because professionals often use it in place of formal research, drastically shortening research time; it is unclear whether this shift is driven by efficiency, widespread acceptance, or simple lack of diligence. Yet AI can genuinely be a useful tool for attorneys juggling numerous projects who would benefit from a source that efficiently surfaces the materials they need to focus on. To address this ongoing issue, the circuit courts should revisit the proposal described above and find ways it could be improved and agreed upon. We already have policies to discipline attorneys for certain conduct; we should likewise have policies that ensure the orderly use of these systems.

AI hallucinations vary in their potential for harm. While some have negligible effects, others pose significant risks. This is especially true in the legal sector, where inaccuracies in cited sources, cases, or laws can undermine constitutionally protected rights such as the Sixth Amendment’s guarantee of a fair trial. Misrepresentation by legal professionals can result in wrongful imprisonment or enable guilty parties to escape accountability. This underscores the importance of treating AI as a tool requiring rigorous oversight: unreviewed AI-generated content can lead to serious, even fatal, outcomes depending on the legal context. In light of rapidly advancing technology, it is imperative to proactively develop laws and regulations that safeguard justice, public order, and human safety.

 

References:

 

[1]Nate Raymond, US Appeals Court Orders Lawyer to Pay $2,500 Over AI Hallucinations in Brief, Reuters (February 18, 2026), https://www.reuters.com/legal/government/us-appeals-court-orders-lawyer-pay-2500-over-ai-hallucinations-brief-2026-02-18/

[2]Muhammad Tuhin, What Are AI Hallucinations and Why Do They Happen?, Science News Today (April 24, 2025), https://www.sciencenewstoday.org/what-are-ai-hallucinations-and-why-do-they-happen

[3]Robert Fletcher v. Experian Info. Sols.; Bridgecrest Credit Co., LLC, 168 F.4th 231 (5th Cir. 2026), https://www.ca5.uscourts.gov/opinions/pub/25/25-20086-CV0.pdf

[4]LegalClarity Team, Show Cause Order: What It Means and How to Respond, LegalClarity (April 5, 2026), https://legalclarity.org/show-cause-order-what-it-is-and-how-to-respond/
a. Definition: “A show-cause order is a formal directive issued by a judge that compels a party in a lawsuit to appear before the court and provide a justification for a specific action or inaction”

[5]Sara Merken, New York Lawyers Sanctioned for Using Fake ChatGPT Cases in Legal Brief, Reuters (June 26, 2023), https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/

[6]Proposed Amendment to 5th Cir. R. 32.3 (proposed Jan. 4, 2024) https://www.ca5.uscourts.gov/docs/default-source/default-document-library/public-comment-local-rule-32-3-and-form-6.pdf?sfvrsn=fe96c92d_0

[7]Nate Raymond, 5th Circuit Scraps Plans to Adopt AI Rule After Lawyers Object, Reuters (June 11, 2024), https://www.reuters.com/legal/transactional/5th-circuit-scraps-plans-adopt-ai-rule-after-lawyers-object-2024-06-10/
