The collision between AI and attorney-client privilege is no longer a future concern. It is an immediate risk, and the federal court’s ruling in United States v. Heppner should alert lawyers, executives, founders, and anyone considering using a public AI tool for sensitive legal issues. In a case the court called a question of first impression, Judge Jed Rakoff held that a criminal defendant’s written exchanges with Anthropic’s Claude were not protected by attorney-client privilege or the work product doctrine.
That holding matters far beyond a single criminal case in New York. The legal question in Heppner was narrow, but the warning is broad. If someone uses a public-facing AI platform to analyze legal exposure, test defenses, or organize facts before counsel has structured the process, they may be creating discoverable material rather than protected legal work. Early commentary on the case has focused on precisely that concern, and for good reason.
What happened in Heppner
According to the court, the defendant created approximately thirty-one documents reflecting his communications with Claude after receiving a grand jury subpoena and after it was clear he was the investigation’s target. Defense counsel argued that these materials outlined the defense strategy and were produced in anticipation of a possible indictment. They also claimed the documents were intended to facilitate discussions with counsel and were later shared with counsel. Critically, however, counsel conceded that they had not instructed Heppner to perform the Claude searches.
That concession influenced the entire decision. The court began with the usual privilege framework and determined that the AI documents failed at least two elements, and probably all three. The exchanges were not communications between attorney and client because Claude was not an attorney. The court also concluded that the communications were not confidential as privilege requires, citing the platform’s privacy terms and the lack of a reasonable expectation that the information would stay protected from third parties. Lastly, the court rejected the idea that the documents became privileged later just because they were eventually provided to counsel.
That last point is easy to overlook, but it could be the riskiest aspect of the opinion for business users. Many people believe that if they use AI to organize facts and then send the result to their lawyer, the communication automatically falls under privilege. The court dismissed that logic completely. Once the communication was no longer privileged, forwarding it to counsel later did not alter its status.
Why the work product doctrine failed, too
The court did not stop with privilege. It also determined that the documents were not protected work product. Judge Rakoff clarified that even if the materials were created in anticipation of litigation, they still were not prepared by counsel or under counsel’s direction, nor did they reflect counsel’s mental impressions at the time they were created. In the court’s view, this excluded the documents from the core purpose of the doctrine, which is to protect the lawyer’s thought process and case preparation.
That part of the ruling deserves close attention because it highlights a growing trend in the market. Clients, executives, and employees increasingly rely on AI to summarize disputes, create timelines, test arguments, and “get smart” before talking to counsel. While it may seem productive, Heppner warns of the trap. Legal sensitivity alone does not provide legal protection. Factors like direction, structure, confidentiality, and agency remain important. Commentary since the decision has emphasized two questions: whether counsel directed the AI use, and whether the AI environment genuinely offered confidentiality protections.
The bigger risk is not AI. It is undisciplined use of AI
The simple headline is that AI is not your lawyer. That statement is true, but on its own it is too simple to be useful. The bigger takeaway from Heppner is that businesses and individuals are using AI in the wrong order. They go to the tool first and consult counsel second. That is the wrong sequence, especially when the matter involves litigation, internal investigations, employment complaints, regulatory risk, deal conflicts, or crisis response.
Once you view the situation this way, the true business issue becomes clearer. Public AI tools encourage people to act quickly, ask difficult questions, and input sensitive information. They create an illusion of privacy and speed at the very moment when legal discipline is most critical. A founder under pressure might ask AI if investor statements pose securities risks. A manager might paste in employee allegations and seek a credibility assessment. An investigation target might test defenses and timelines in a chat before consulting counsel. In each case, users may believe they are preparing for legal advice, but they are actually creating records that could complicate privilege, confidentiality, discovery, and case strategy. Heppner serves as a warning against falling into this trap.
What lawyers and companies should do now
The first step is to stop treating all AI use as the same. Using AI to clean up a marketing draft is one thing. Using AI to analyze facts tied to a subpoena, threatened claim, internal complaint, or regulator inquiry is something else entirely. Heppner suggests that the second category requires process, not improvisation.
The second step is to bring legal review to the beginning of the workflow. If the matter is sensitive, counsel should decide whether to use AI at all, which platform to choose, which facts can be entered, who will supervise the process, and how to handle the output. The court left open the possibility for future disputes over whether a different setup, especially one more aligned with counsel’s guidance and a more controlled environment, might lead to a different outcome. While this does not guarantee protection, it indicates where the next legal arguments are likely to focus.
The third step is governance. Companies should have a written AI use policy that distinguishes between routine business activities and high-risk legal applications. Executives and employees need straightforward rules. Do not paste sensitive dispute facts. Do not use public AI tools to analyze legal exposure. Do not use AI to prepare summaries for counsel unless counsel has approved the process. Do not assume a tool is private because it feels private. Those are not technical safeguards; they are legal risk controls.
Where this goes next
Heppner will not be the final decision on AI and attorney-client privilege. It is only the first. Other courts will now encounter the same issue in civil litigation, internal investigations, trade secret disputes, regulatory proceedings, and corporate discovery disputes. Reuters has already observed that Heppner’s reasoning might extend beyond privilege and into trade secret law, where confidentiality and reasonable protective measures are also important.
That is why this case requires more than just a quick headline or a narrow criminal law summary. It marks the point at which courts began applying old confidentiality rules to new AI behavior, with little patience for the argument that the technology changes the doctrine. In one sense, that should not surprise anyone. Privilege and work product have always depended on structure, purpose, confidentiality, and legal guidance. What Heppner demonstrates is that courts may apply those principles to AI with very little sympathy for casual or self-directed use.
Final thought
The lesson here is not that lawyers and businesses should avoid AI. Instead, they need to be cautious about using AI casually during legally sensitive moments. While quick action can seem useful, especially in a crisis, rushing without proper structure can lead to serious exposure, and early mistakes are often the hardest to fix later.
That is the real value of Heppner. It prompts a better question. The question isn’t whether AI is useful. It’s whether you are using it in a way that safeguards your position before the fight begins.