Over the past few years, attorneys have been incorporating AI into day-to-day business operations in an effort to improve efficiency and provide enhanced client services. But as the United States Court of Appeals for the Sixth Circuit recently reiterated in its decision in United States v. Farris,[1] AI tools, even those offered by trusted legal technology providers, must be used in a manner consistent with an attorney's ethical obligations. Indeed, as AI reaches new frontiers and becomes more readily accessible, attorneys must take proactive measures to ensure that adherence to ethical obligations remains at the forefront of their practice.
The Farris Decision
In Farris, the Sixth Circuit considered whether an attorney, appointed to represent a criminal defendant, committed misconduct by using AI to draft two briefs without properly verifying the legal authorities cited in them.
During the appellate phase of the case, the attorney directed a staff member to upload district court documents to CoCounsel — Westlaw’s internal AI platform — to create a first draft of the principal and reply briefs. The attorney then supplemented the initial drafts over the course of several hours but did not adequately review and verify the contents of the initial drafts. Additionally, the attorney noted that he had not previously used CoCounsel and was unfamiliar with the AI tool.
In its decision, the Sixth Circuit determined that even though the attorney acknowledged his use of AI and the misconduct appeared to be a first-time offense, the attorney nonetheless "committed inexcusable transgressions" by failing to verify the citations and propositions submitted to the Court. While recognizing that new technologies hold significant promise for the legal field, the Court emphasized that attorneys have an obligation to be "clear-eyed about technology's potential pitfalls." Importantly, the Court stressed that "attorneys who choose to use artificial-intelligence tools must do so in a manner consistent with their ethical obligations," and that use of such technologies is "no substitute for tried-and-true safeguards managed by practicing attorneys."
While not an exhaustive list, the court provided a roadmap of ethical considerations for attorneys when using AI:
- Reviewing and validating content produced by artificial intelligence;
- Considering whether to disclose the use of artificial intelligence to clients or obtain informed consent;
- Safeguarding confidential client information and preserving attorney-client privilege;
- Implementing firm-wide policies governing the use of artificial intelligence;
- Adhering to ethical billing practices when using artificial-intelligence tools; and
- Keeping current with jurisdiction-specific guidelines.
Noting that the attorney's misconduct had significant consequences, including necessitating a significant use of judicial resources to investigate the suspected AI improprieties, coordinating a response, facilitating additional steps in the appellate proceedings, and delaying the defendant's case, the court determined that the attorney should: (1) not be compensated for his time spent on the appeal; (2) be referred to the chief judge of the Sixth Circuit; and (3) be referred to the federal district court chief judges in Kentucky and the disciplinary clerk of the Kentucky Bar Association.
Best Practices
The decision in Farris comes on the heels of numerous decisions from courts across the country finding that attorneys are misusing generative AI.[2] Nevertheless, Farris, as one of the few published circuit court decisions on the misuse of generative AI,[3] provides important guidance to attorneys attempting to navigate this evolving landscape. Farris offers reminders of how ethical obligations interact with AI use in four key areas:
- Review and validate content. The use of AI technology has the potential to improve an attorney's efficiency and overall work product and to help attorneys maintain a competitive edge in the market. However, as Farris demonstrates, even AI tools offered by trusted legal technology providers are imperfect. When using generative AI to assist with drafting legal documents or identifying relevant caselaw, make sure to set aside sufficient time to carefully and thoroughly review AI-produced materials.
- Duty to supervise. Attorneys have an ethical duty to supervise the use of AI by staff, including other attorneys. Moreover, while not addressed in Farris, it seems logical that this duty extends to situations where an attorney is serving as local counsel in a case and may not have direct involvement in the development or drafting of relevant court documents. As Farris teaches, attorneys signing and/or certifying court filings are responsible for issues arising with those filings.
- Safeguarding attorney-client information, privilege, and work product. The use of generative AI may impact attorney-client privilege and/or fall outside the scope of work product protections. While this area of law is still evolving, attorneys should be mindful of the types of generative AI being employed in their practices. The use of publicly available generative AI, as opposed to enterprise AI platforms, may cause problems when asserting claims of privilege or work product protection.[4]
- Understanding jurisdiction-specific guidelines. Attorneys must consult guidance from the state and federal courts in which they practice regarding the use of generative AI. Some states, such as New Jersey, have issued guidelines on attorneys' use of AI, emphasizing that AI does not change ethical obligations involving accuracy, truthfulness, honesty, candor, and communication.[5] Moreover, some individual judges have issued standing orders banning the use of AI in the preparation of any submitted filing or mandating disclosure of AI use in the preparation of filings.
Bottom Line
Generative AI continues to evolve and is becoming an integral part of the legal industry. So long as attorneys stay up to date on both the benefits and the risks of AI, problems such as those outlined in Farris can be avoided.
Please contact the authors if you have questions about this article.
[1] No. 25-5623, 2026 WL 915082 (6th Cir. Apr. 3, 2026) (pending publication).
[2] See, e.g., United States v. McGee, 806 F. Supp. 3d 1264, 1269 (S.D. Ala. 2025) (collecting cases involving the improper use of generative AI).
[3] See also McCarthy v. U.S. Drug Enf't Admin., No. 24-2704, 2026 WL 850354, at *1 (3d Cir. Mar. 27, 2026) (pending publication); Whiting v. City of Athens, Tenn., 170 F.4th 455 (6th Cir. 2026); Fletcher v. Experian Info. Sols., Inc., 168 F.4th 231 (5th Cir. 2026).
[4] See United States v. Heppner, 25 CR. 503, 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026) (pending publication); Warner v. Gilbarco, Inc., Civ. No. 24-12333, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026) (pending publication).
[5] See, e.g., Responsible Use of Artificial Intelligence (AI) and Related Technologies – Benefits of AI Policies (Mar. 30, 2026), https://www.njcourts.gov/sites/default/files/notices/2026/03/n260330c.pdf; Legal Practice: Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers (Jan. 24, 2024), https://www.njcourts.gov/sites/default/files/notices/2024/01/n240125a.pdf?cb=aac0e368.
