Increasingly, attorneys use generative artificial intelligence (AI) tools in the practice of law. These tools purport to provide targeted answers to specific legal questions, and they can be used to facilitate the review and drafting of legal documents, to aid in due diligence assignments, and to perform various other legal tasks. In response to the rapid rise of generative AI in the legal profession, state bar associations have published recommendations on the issue. In 2023, for instance, the State Bar of California issued practical guidance to attorneys on the use of generative AI in the legal profession. Florida followed suit by issuing an advisory opinion on the topic. Similarly, the American Bar Association released a formal opinion on generative AI tools in 2024.
In her article, Rule 11 is No Match for Generative AI, Professor Jessica R. Gunder offers an impressive contribution to both the law-and-technology and civil procedure fields by exposing the limits of Federal Rule of Civil Procedure 11 in addressing “fictitious cases and false statements of law” that arise from attorneys’ use of generative AI. Gunder convincingly argues that although courts have used Rule 11 to sanction attorneys who fail to conduct sufficient legal research, Rule 11 cannot adequately regulate this behavior in the generative AI context. She goes on to contend that Rule 11’s inadequacies have likely led a growing number of courts to issue standing orders to directly address attorneys’ misuse of generative AI in legal proceedings.
Gunder begins the article with a valuable description of the features associated with generative AI in the legal profession. She documents and critiques well-known cases in which attorneys improperly used generative AI tools. Gunder offers possible explanations for lawyers’ unprofessional use of generative AI, including their failure to understand the technology and “evaluate the work product” produced by the technology.
Gunder then turns her attention to Rule 11. After providing a brief overview of Rule 11’s history, scope, and objective, Gunder contends that attorneys and litigants can violate Rule 11 by filing legal documents that contain an inaccurate representation of law or that “do[] not contain key cases.” However, she argues that even before the advent of generative AI, courts encountered significant hurdles in determining whether to impose sanctions for failure to perform sufficient legal research, including the difficulty of deciding “how much research is enough?” Due to these difficulties, she posits that courts are reluctant to sanction attorneys for inadequate research. Given attorneys’ ethical obligations, courts are more likely to impose sanctions when there is “an intentional failure to disclose controlling legal authority” or when the conduct “is repeated, particularly after a court has informed the attorney of their error” or “involves misrepresenting or changing the holding of a case.”
Gunder goes on to argue that, due to Rule 11’s sanction requirements, the rule will largely be ineffective when a litigant or attorney erroneously relies on generative AI and submits legal documents to a federal district court that contain “fictitious cases and false statements of law.” Gunder posits that Rule 11 is not intended to cover all bad faith conduct in a lawsuit and that the rule “cannot be used to sanction oral misrepresentation and testimony.” She argues that cases involving litigant or attorney misuse of generative AI often stem from a “lack of knowledge of how generative AI works and its propensity to hallucinate,” and that while such conduct is perhaps negligent, it does not rise to the level of “contempt or subjective bad faith” for purposes of imposing Rule 11 sanctions.
The well-written article concludes with an examination of standing orders dealing with attorney misuse of generative AI and recommendations for courts moving forward. Gunder argues that standing orders may encourage litigants to refrain from filing legal documents in court that contain inaccurate statements of law or fabricated cases. Moreover, “they may make it easier for a court to find that a litigant violated Rule 11 and impose sanctions.” Despite these potential benefits, she contends that poorly drafted standing orders may discourage litigants and attorneys from adopting and implementing new technology and that “a patchwork of standing orders” issued by different courts may lead to inconsistencies.
Gunder suggests that courts should carefully balance the benefits and risks she associates with generative AI standing orders. She also posits that courts should be reluctant to adopt “an anti-technology tone” in standing orders, both to avoid deterring parties from adopting generative AI and to avoid the appearance of “judicial bias.” She advocates for the use of the federal district court local rules process authorized by Federal Rule of Civil Procedure 83 to remedy concerns associated with inconsistent standing orders. Gunder’s insightful description of the current use of generative AI in civil litigation in federal courts should be of particular interest to courts, practitioners, and scholars in both the law-and-technology and civil procedure fields.