Claude, ChatGPT, and Privilege: Proceed With Caution, Employers


A recent Southern District of New York decision is being described as “AI destroys privilege.”

That’s not what the court held. But employers using consumer AI tools in connection with employment decisions should pay attention.


TL;DR: In United States v. Heppner, the court held that documents a criminal defendant generated using Anthropic’s Claude were not protected by attorney-client privilege or work product because the tool was not an attorney, its terms disclaimed confidentiality, and the documents were not prepared by or at the direction of counsel. The ruling applies traditional privilege principles. It does not create a blanket rule against AI.

📄 Read the opinion.


What Happened in Heppner

After receiving a grand jury subpoena and learning he was a target of a federal investigation, the defendant used Claude to generate written reports outlining potential defense arguments.

He did this on his own. Counsel did not direct the use of the tool.

When the government later seized the documents, he claimed attorney-client privilege and work product protection.

The court rejected both.

On privilege, the court emphasized that Claude is not an attorney and that the platform’s terms allowed collection and use of user inputs. Because the tool disclaimed any expectation of privacy, the court found no reasonable expectation of confidentiality.

On work product, the court focused on authorship. The materials were created by the defendant “on his own volition.” They were not prepared by or for counsel and did not reflect counsel’s mental impressions.

The court also noted that the analysis might have been different if counsel had directed the use of the tool or if the platform functioned as a confidential agent. Those facts were not present.

This was a disclosure problem and a lawyer-involvement problem. Not an AI problem.

Now bring this into the workplace.

Scenario 1: HR Uses a Public AI Tool to Draft a Termination

An HR manager pastes performance notes, complaint history, and leave information into ChatGPT or Claude to refine a termination memo.

No lawyer is involved.

There is no privilege. There never was. Attorney-client privilege requires a communication with a lawyer for the purpose of seeking legal advice. Work product requires preparation by or for counsel in anticipation of litigation.

Pasting termination rationales into consumer AI tools makes those materials discoverable and shifts sensitive employee data into a third-party system you don’t control.

Scenario 2: Employer Receives a Demand Letter and Uses Claude to Prepare for a Call With Counsel

A former employee sends a demand letter threatening litigation. The employer schedules a call with outside counsel. Before that call, HR uses Claude to summarize allegations, organize facts, and outline issues to discuss with counsel.

Two doctrines matter here.

Attorney-client privilege.
Privilege depends on whether the communication was made for the purpose of obtaining legal advice from a lawyer. If Claude is used simply to organize information before speaking with counsel, that supports the argument that the purpose relates to legal advice. But if the employer asks Claude to analyze liability, assess exposure, or recommend defenses, it is seeking legal advice from a non-lawyer third party. That communication is not privileged at inception. Sending the output to counsel later does not retroactively create privilege.

Work product.
Materials prepared in anticipation of litigation can qualify for work product protection. Using Claude to prepare for a call with counsel strengthens the anticipation-of-litigation argument.

But disclosure still matters. If the employer uploads litigation-related analysis to a platform operating under non-confidential consumer terms, a court could treat that as voluntary third-party disclosure that waives protection.

Preparing for a call with counsel helps. It does not override the requirement of confidentiality.

Scenario 3: Counsel Uses Enterprise AI to Prepare

Now change the facts.

Outside counsel uses an enterprise AI platform under terms that contractually preserve confidentiality, segregate client data, and prohibit training on client inputs. Counsel uses it to prepare litigation strategy for that same call.

That is a different posture.

Work product protects materials prepared by or for counsel in anticipation of litigation. Courts routinely hold that disclosure to a lawyer’s non-lawyer agent does not waive protection where confidentiality is preserved and the agent assists in the representation.

With those safeguards in place, this becomes a standard work product analysis.

The Takeaway

Privilege and work product rise or fall on confidentiality and lawyer involvement. If you disclose strategy to a consumer AI platform operating under non-confidential terms, a court may treat that as voluntary third-party disclosure. If counsel uses an AI tool under contractual confidentiality and data segregation safeguards, the analysis looks like any other litigation-vendor scenario.

“Doing What’s Right – Not Just What’s Legal”