Using AI in Legal Matters: What Clients Can Learn from Two Recent Federal Court Decisions
Generative AI tools such as ChatGPT, Claude, and Gemini are now widely used for research, drafting, and analysis. Two recent federal court decisions make one point clear: using AI in connection with legal matters can jeopardize the attorney‑client privilege and work‑product protection if not handled carefully.
United States v. Heppner, No. 25‑cr‑00503‑JSR (S.D.N.Y. Feb. 10, 2026)
On February 10, 2026, the Southern District of New York issued a landmark decision in United States v. Heppner. The court held that a criminal defendant’s written exchanges with a public, consumer AI platform (Anthropic’s Claude) were not protected by attorney‑client privilege or the work‑product doctrine – even though the defendant later shared the AI‑generated materials with his lawyers.
The court identified several independent reasons why the privilege did not apply:
- AI is not a lawyer. Attorney‑client privilege protects communications between a client and a lawyer (or the lawyer’s agent). An AI platform cannot form an attorney‑client relationship.
- No confidentiality. The AI platform’s terms of service allowed the provider to store, use, and disclose prompts and outputs – including to government authorities – defeating any reasonable expectation of confidentiality.
- No counsel direction. The defendant used AI on his own initiative, not at the direction of counsel. Later sharing AI outputs with lawyers could not “retroactively cloak” them with privilege.
For the same reasons, the court also rejected work‑product protection, concluding the materials were not prepared by, or at the direction of, counsel.
Warner v. Gilbarco, Inc., No. 2:24‑cv‑12333‑GAD‑APP (E.D. Mich. Feb. 10, 2026)
On the same day, the Eastern District of Michigan reached the opposite conclusion in Warner v. Gilbarco, Inc., an employment discrimination case.
There, a pro se plaintiff used ChatGPT to research legal questions and draft filings after her attorney withdrew from the case. The defendants moved to compel production of every AI prompt the plaintiff had used, along with the AI’s responses. Magistrate Judge Anthony Patti denied the request on three independent grounds:
- Work‑product protection applied. The court held that the plaintiff’s AI interactions constituted work product of a party¹ under Federal Rule of Civil Procedure 26(b)(3)(A). Citing Upjohn Co. v. United States, the court emphasized that forcing disclosure would reveal a litigant’s mental impressions regardless of whether those impressions were recorded through dictation, a word processor, or an AI tool.
- No waiver. Under Sixth Circuit precedent, work‑product waiver requires disclosure to an adversary or in a manner likely to reach one. The court further explained that generative AI tools are tools, not third parties, noting that treating AI use as waiver would “nullify work‑product protection in nearly every modern drafting environment.”
- Disproportionate discovery. The court characterized the defendants’ request as a “fishing expedition” seeking a litigant’s internal mental impressions reformatted through software and admonished that the defendants’ “preoccupation with Plaintiff’s use of AI needs to abate.”
Practical Guidance: Using AI More Safely
Individuals and organizations using AI tools in connection with legal matters should consider the following guardrails:
- Involve counsel early. AI use in legal contexts should be at the direction of counsel, not performed independently by a client or its employees.
- Favor closed systems. Enterprise or “closed” AI systems with contractual confidentiality protections are far safer than public consumer tools, which may use prompts and inputs from users to “train” their AI models.
- Limit AI to general tasks. Drafting neutral language or summarizing public information poses less risk than analyzing legal exposure or strategy.
- Document purpose. Where appropriate, memorialize that AI use was undertaken at counsel’s direction for litigation or legal advice.
The Bottom Line
AI can be a powerful productivity tool. But when used carelessly in legal matters, it can unintentionally strip away critical legal protections. Users should avoid uploading confidential facts, legal strategy, or attorney communications into public AI tools, and should not treat AI outputs as “notes to counsel.”
If you have questions about how emerging technologies like AI may impact your legal rights, please contact one of Obermayer’s experienced attorneys.
¹While privileged work product is often generated by attorneys, Rule 26(b)(3)(A) applies to documents and tangible things prepared in anticipation of litigation or for trial “by or for another party or its representative.” Accordingly, documents created by a party to litigation (as opposed to that party’s attorney) can be protected from disclosure in some circumstances.
The information contained in this publication should not be construed as legal advice, is not a substitute for legal counsel, and should not be relied on as such. For legal advice or answers to specific questions, please contact one of our attorneys.