The Civil Case That Brings Some Sanity to the AI Privilege Debate

 


A federal court recently rejected an attempt to force a litigant to turn over information about her use of ChatGPT in a pending employment lawsuit.

Yes. Information about her AI use.

In a civil case, one side moved to compel “all documents and information concerning [the plaintiff’s] use of third-party AI tools in connection with this lawsuit.” The court said no.


TL;DR: A federal court refused to compel discovery into a pro se plaintiff’s use of generative AI in her lawsuit. The judge held the request was late, not relevant or proportional, and aimed at protected work product. That ruling does not conflict with the recent criminal case denying privilege to AI-generated documents created without lawyer involvement or confidentiality safeguards. Different posture. Same doctrine.

📄 Read the recent decision
📄 Read my prior post


Defendants’ attempt to compel AI use information

The defendants wanted everything related to the plaintiff’s use of third-party AI tools in the case.

The plaintiff was pro se.

The court denied the request on multiple grounds.

First, the motion to compel was filed after the discovery deadline. The court enforced its scheduling order.

Second, even setting timing aside, the judge held the request was improper. The defendants were trying to probe the plaintiff’s internal analysis and mental impressions in preparing her case. That is core work product.

Third, the court rejected the argument that using ChatGPT waived protection. Work-product waiver requires disclosure to an adversary or something close to it. Using a generative AI tool, without more, does not meet that standard.

The judge emphasized that ChatGPT and similar programs are “tools, not persons.”

The request was described as a “fishing expedition” and “a distraction from the merits of this case.”

Nothing in the ruling turned on novelty. The court applied familiar principles about relevance and protection of litigation strategy. AI did not change the analysis.

The criminal case that caused the panic

A few weeks earlier, a criminal decision triggered headlines suggesting AI “destroys privilege.”

There, a criminal defendant — on his own — used a public AI platform to generate written analyses of potential defenses after learning he was under investigation.

No lawyer directed the searches. No lawyer supervised the drafting. The platform’s terms did not guarantee confidentiality.

The government later seized the documents. The defendant claimed privilege and work product.

The court rejected both because the documents were not confidential communications with counsel and were not prepared by or for a lawyer.

That case addressed whether standalone AI-generated documents were privileged in the first place.

The civil case, by contrast, addressed whether the rules of civil procedure allowed a defendant to obtain a pro se plaintiff's internal drafting process simply because AI was used.

What About Employees Using AI Before Hiring Counsel?

Now flip it.

If an employee — still employed — uses ChatGPT to complain to HR about discrimination, there is no attorney-client privilege. No lawyer means no attorney-client privilege.

The work-product question is different. Work product can apply even without a lawyer, but only if someone is preparing for litigation — in plain English, if a lawsuit is realistically on the horizon.

An employee raising a workplace complaint while still employed is not automatically preparing for a lawsuit. At that stage, litigation may be possible, but it is not necessarily expected.

If she sends the AI-generated complaint to HR and later sues her employer, that document is discoverable.

The harder question is the underlying AI prompts.

Substance matters.

There is a difference between “What is a hostile work environment?” and “My supervisor cut my bonus on March 3 after I complained about pay equity. Is that retaliation?”

The first looks like general research. The second is a factual narrative.

If an employee is feeding specific facts — names, dates, statements, compensation decisions — into a consumer AI platform while still employed, those prompts may become relevant later, especially if her version of events shifts.

Now you are not just talking about “research.” You may be looking at a contemporaneous factual account.

That does not mean every AI interaction is fair game. Courts still require relevance and proportionality. But when the prompts contain detailed factual descriptions, and credibility becomes an issue, the discovery argument gets stronger.

Pre-lawsuit workplace complaints are not the same thing as protected litigation strategy developed during an active lawsuit.

Courts are likely to treat them differently.

Employer Takeaways

• Consumer AI is not a confidential sandbox. If managers or HR paste sensitive internal analysis into public AI tools without counsel involved, you may be creating discoverable material.

• Lawyer involvement still matters. Materials prepared by or for counsel in anticipation of litigation are on firmer ground than executive freelancing in ChatGPT.

• Employees’ AI prompts can become relevant if they contain detailed factual accounts.

Bottom Line

The panic is overblown. Courts are not blowing up privilege because someone used ChatGPT. But they are not ignoring common sense either. If you pour factual details into a consumer AI platform, don’t be surprised if those facts come back in discovery.

“Doing What’s Right – Not Just What’s Legal”