A CEO typed his strategy into an AI chatbot. A Delaware court read every word of it. And it cost his company the case.

On March 16, 2026, the Delaware Court of Chancery issued its ruling in Fortis Advisors v. Krafton. If you run a company, lead a team, or have ever used an AI tool to think through a business problem, this case should get your attention. Not someday. Right now.

What Happened

Krafton, a South Korean gaming conglomerate, acquired Unknown Worlds Entertainment, the studio behind the hit franchise Subnautica, in 2021 for $500 million upfront. The deal included up to $250 million in additional earnout payments tied to future revenue. The contract guaranteed the studio’s founders and CEO Ted Gill operational control of the business and protection from being fired without clearly defined cause.

When internal projections showed the upcoming game release would trigger nearly the full earnout payment, Krafton’s CEO CH Kim wanted out of paying it. His own legal team warned him that firing the executives would not eliminate the obligation and would expose the company to a lawsuit. So what did Kim do? He turned to ChatGPT.

The AI chatbot gave him a detailed corporate takeover strategy. Kim followed it step by step. Krafton locked the studio out of its own publishing platform, posted unauthorized messages on the studio’s website, and fired all three executives, claiming the game was not ready for release.

When that justification failed in court, Krafton tried two new arguments: the executives had quietly stepped back from their roles, and they had downloaded company data before being fired.

The court rejected both. It found the role changes had been openly communicated to Krafton leadership for over a year. The data downloads were ruled defensive moves in response to Krafton’s escalating takeover, and the executives returned everything promptly.

The court quoted Kim’s ChatGPT conversations at length. It ruled against Krafton on every count. CEO Ted Gill was reinstated with full operational authority. The earnout period was extended by 258 days to replace the time Krafton’s conduct had consumed.

Kim also deleted some of those AI logs after the lawsuit began. The court noted it. That decision will cost Krafton even more in the next phase of litigation.

Why This Should Concern Every Business Owner

Here is what this case really means for you.

Kim did not use ChatGPT to draft a letter or look something up. He used it to design a strategy for getting out of a legal obligation, then executed that strategy. The court treated those AI conversations as direct evidence of bad faith, pretext, and deliberate intent.

Your AI conversations are not private. They are not protected by attorney-client privilege. They are not the equivalent of something you said out loud in a meeting. They are written records, and they are fully discoverable in litigation.

Think about that the next time you or someone on your leadership team types a sensitive business strategy into an AI chatbot. If a lawsuit ever follows, a court can subpoena those records, enter them into evidence, and quote them in a published ruling. That is exactly what happened to Krafton.

Worse, Kim deleted records after litigation started. Do not do that. Ever. Destroying records after a lawsuit begins is called spoliation, and courts treat it as evidence of wrongdoing. It turns a bad situation into a much worse one. Most business owners learn that principle of legal risk management the hard way.

AI Governance Is Not Just a Big Company Problem

You might be thinking this only applies to companies doing $500 million acquisitions. It does not.

Any time you use an AI tool to strategize around a business dispute, an employment decision, a contract negotiation, or a termination, you are creating a written record. If that situation ever becomes litigation, that record can be used against you.

Most business owners have never thought about this. Most companies have no policy covering it. That gap is now a real legal exposure, and Fortis v. Krafton is the case that proves it.

I wrote about this earlier this year when I discussed AI legal risks in business: AI does not reduce legal risk. It reduces hesitation. The Krafton ruling proves that point in a courtroom.

The Harvard Law School Forum on Corporate Governance analyzed this case and identified the same core warning: AI-generated communications used outside the protection of legal counsel are fully exposed in discovery. That applies to your business at any size.

What Happens When You Delete the Evidence

Courts do not look kindly on record destruction once litigation is underway. The legal term is spoliation, and judges have a range of remedies for it, from instructing juries to draw adverse inferences all the way to monetary sanctions and default judgment.

Kim’s decision to delete his ChatGPT logs after the lawsuit was filed will be addressed in the damages phase of this case. The court has already flagged it. That single decision, made in a moment of panic or bad judgment, will likely cost Krafton more than the original breach.

Your document retention policy needs to cover AI-generated content. If it does not, you are operating with a gap that could define the outcome of your next legal dispute.

Three Things to Do Right Now

First, talk to your attorney about whether your document retention policy covers AI-generated content. Most retention policies were written before AI tools existed. That needs to change.

Second, brief your leadership team. Every person who uses AI tools to discuss strategy, personnel decisions, or business disputes needs to understand that those conversations carry the same legal weight as email. As I have said before, a strong legal foundation is not optional for business owners. It is the difference between controlling a situation and being controlled by it.

Third, consider what AI governance looks like for your organization. That does not mean banning these tools. It means knowing when to involve legal counsel before using them in sensitive situations, and having a clear policy on how those records are handled.

The Krafton case is a nine-figure lesson in what happens when a CEO treats AI as a private sounding board. Trust me, it is not. Courts have made that clear. Your job as a business owner is to make sure your team knows that before a lawsuit teaches them the hard way.

If you want to talk through your AI governance exposure and document retention policy, reach out to me for a confidential consultation.