A Response to AI with the help of AI - Plan now and don't look back
Three AI Decisions Every Plaintiff’s Firm Needs to Understand Right Now
By Mike Fischer/Opus4.6 | E-Discovered Consulting, LLC | April 2026
We're now four months into 2026, and while LLMs, generative AI, and now agentic AI are nothing new, the courts have until now mostly avoided issuing any real guidance or interpretation about their use. Enter the end of Q1, when three federal magistrate judges issued the first meaningful judicial guidance on what happens when AI meets litigation discovery. The decisions landed in Warner v. Gilbarco, United States v. Heppner, and Morgan v. V2X, Inc. — and together they reshape how every plaintiff’s firm in the country should be thinking about AI, both in office management and, most importantly, as they prepare to begin new complex litigation projects.
I want to break these down from the perspective most of the commentary has ignored: the receiving side of large ESI projects, where teams constantly feel as though they're behind the competition in both understanding and deploying litigation support technology.
The Cases at a Glance
Before diving in, here is the landscape in a single snapshot:
| Case | Court / Date | Key Holding |
| --- | --- | --- |
| Warner v. Gilbarco, Inc. | E.D. Mich., Feb. 10, 2026 | Work product preserved. AI platforms are “tools, not persons”; disclosure to them is not disclosure to an adversary. |
| United States v. Heppner | S.D.N.Y., Feb. 17, 2026 | No privilege, no work product. A publicly available AI platform with disclosure-permissive TOS defeats any confidentiality expectation. |
| Morgan v. V2X, Inc. | D. Colo., Mar. 30, 2026 | Mental impressions protected; tool identity disclosable. Protective order requires a vendor contract prohibiting training, onward disclosure, and retention. |
Warner: The Tool Is Not the Person
Warner gives plaintiff’s counsel something concrete to rely on. Judge Patti’s reasoning is grounded in decades of work product doctrine and anchored to the text of Rule 26(b)(3)(A). The court emphasized that work product protection requires disclosure to an adversary or in a manner likely to reach an adversary — not just disclosure to any third party.
Heppner: The Terms of Service Matter More Than You Think
Heppner is the cautionary tale. The defendant, Bradley Heppner, used the free consumer version of Anthropic’s Claude to research his legal situation and outline defense strategies — without any direction from his attorneys. Judge Rakoff found that the consumer platform’s privacy policy, which permitted data sharing with third parties including potentially the government, defeated any expectation of confidentiality. No privilege. No work product protection. The AI is a tool, not a person or a party.
Morgan: The One That Hits Home Most for Receiving-Side Practitioners
Morgan v. V2X is the decision that changes the most and will likely be cited for the foreseeable future in this early landscape. Judge Dominguez Braswell did what neither the Warner nor the Heppner courts attempted: she translated the abstract privilege analysis into a concrete, enforceable protective order framework.
The amended protective order requires that any AI tool processing confidential discovery materials must satisfy three conditions:
1. No training on inputs. The AI provider must be contractually prohibited from using any submitted data to train or improve its models.
2. No onward disclosure. The provider cannot share inputs with third parties except where essential for service delivery.
3. Deletion on demand. The user must have the right to require the provider to delete all confidential information upon request.
What does that mean? It confirms what we've maintained since the earliest applications of AI in legal tech: free, consumer-level services with no contractual commitment not to use input prompts or uploaded documents and data do not meet the usage requirements for data designated confidential under an agreed protective order.
For those working in complex litigation on the receiving side of most ESI, this decision rings the loudest bell. We need to be prepared not only to probe the use of AI in practice generally, but most certainly to demand provisions addressing, and visibility into, any intent to use AI on produced data and the guardrails around that use.
What This Means for the Receiving Side
Most of the legal commentary on these three decisions has been written from the defense bar’s perspective or from the position of large corporate parties deploying AI to accelerate their own review workflows. That perspective matters, but it is incomplete. Here is what plaintiff’s counsel — and the litigation support teams that actually draft ESI stipulations and protective orders — should be doing right now.
At the Rule 26(f) Conference
Ask directly whether the producing party intends to use generative AI, agentic AI, or any LLM-based tool at any stage of the production workflow — collection, processing, review, privilege screening, or QC. Get a yes or no on the record. Do not accept vague assurances about “analytics” or “technology-assisted review” as a substitute. TAR and agentic AI are fundamentally different in how they make decisions, and the disclosure obligations should reflect that difference.
In the Protective Order
Incorporate language modeled on the Morgan framework. Any party using AI on materials designated as Confidential must be able to demonstrate that the platform provider is contractually bound by the no-training, no-disclosure, and deletion-on-demand requirements. Require that written documentation of those protections be retained and produced on request. That said, locking in particular tools by name at the outset should always be scrutinized. This is a fast-moving target, and tools should be evaluated both at the outset and continually as they improve their models — and, very importantly, whenever partnerships between vendors change and evolve.
When Evaluating a Production
If you receive a production that appears to systematically under-produce a category of documents you have reason to believe exists, the use of AI in the review workflow is now a legitimate line of inquiry. Ask whether an AI tool made the non-responsiveness or privilege determination. Ask whether a human reviewed that determination before the document was withheld. Ask whether the producing party can reconstruct the decision trail.
The Asymmetry Problem
The disclosure asymmetry in these cases cuts against the smaller party — and in most MDL and mass tort matters, that is the plaintiff’s side.
A large corporate defendant deploying an agentic review system has in-house counsel, established vendor relationships, and dedicated staff to draft a defensible ESI clause that checks all the boxes. A plaintiff’s firm or a plaintiff steering committee trying to evaluate whether the production on the other side is complete has limited built-in infrastructure.
All three of the litigants in these first-impression cases — the Morgan plaintiff, the Warner plaintiff, the Heppner defendant — were either pro se or acting without sophisticated litigation support. The courts got their first look at AI in discovery not from BigLaw-on-BigLaw commercial disputes, but from the parties who could least afford the fight. That pattern tells you where the pressure is going to land next, and it is not going to land on the parties with the deepest bench.
Plaintiff’s firms need to start asking these questions now. Not because the rules require it yet, but because the first published order granting a spoliation motion based on undisclosed agentic AI use in a production workflow is coming. And the firm on the wrong side of that order is going to wish it had asked the questions at the 26(f) conference instead of after the production was sealed.
Mike Fischer is the founder of E-Discovered Consulting, LLC, an e-discovery and litigation support consultancy focused exclusively on plaintiff-side discovery in MDL and mass tort litigation. With over 15 years of experience in ESI consulting and managed document hosting, Mike works with plaintiff firms and plaintiff steering committees across the country.
Contact: mike@e-discovered.com | www.e-discovered.com
Case Citations
Warner v. Gilbarco, Inc., No. 2:24-CV-12333, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026)
United States v. Heppner, No. 25-CR-503, 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026)
Morgan v. V2X, Inc., No. 1:25-cv-01991-SKC-MDB (D. Colo. Mar. 30, 2026)
Further Reading
• Perkins Coie, Heppner and Gilbarco: Courts Apply Privilege and Work Product Protection to Generative AI Tools (March 2026)
• Sidley Austin, Generative AI in Discovery: Protective Orders as an Emerging Point of Dispute (April 2026)
• Everlaw, Morgan v. V2X Decision Signals a Turning Point for AI Data Privacy (April 2026)
• Paul Weiss, Federal Courts Reach Different Outcomes on Whether AI-Generated Materials Warrant Work Product Protection (March 2026)
• Clio, Courts Are Starting to Pick AI Tool Winners: Breaking Down Morgan v. V2X Inc. (April 2026)