If you needed a case study in why AI is not a substitute for qualified legal advice, the Delaware Court of Chancery has just delivered one, bound in nice pink legal tape.1
In March 2026, Vice Chancellor Lori Will delivered judgment in Fortis Advisors LLC v Krafton, Inc., finding that South Korean gaming giant Krafton had breached an acquisition agreement by firing the leadership of its subsidiary, Unknown Worlds Entertainment – the studio behind the hit survival game Subnautica.
The strategy was orchestrated not on the advice of Krafton’s lawyers, but on the advice of ChatGPT.
The deal
Krafton acquired Unknown Worlds in 2021 for US$500 million upfront, plus up to US$250 million (about AUD$355 million) in contingent ‘earnout’ payments tied to the revenue performance of the sequel, Subnautica 2. The acquisition agreement guaranteed the studio’s founders operational independence and protection from removal except for cause.
By mid-2025, internal projections showed Subnautica 2 was on track to exceed initial assumptions. The earnout formula was set to cost Krafton dearly: once revenue exceeded US$69.8 million, Krafton owed US$3.12 for every additional dollar of revenue, up to the US$250 million cap. A base-case scenario projected a payout of US$191.8 million.
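The mechanics of that formula can be sketched in a few lines. This is a simplified illustration only, assuming the linear structure described above (a per-dollar multiplier above a revenue threshold, capped at US$250 million); the actual agreement will contain further definitions and adjustments.

```python
# Illustrative sketch of the earnout formula as described in the judgment
# summary above. Figures and structure are assumptions for illustration only.
THRESHOLD = 69.8e6   # revenue floor before any earnout accrues (US$)
RATE = 3.12          # dollars owed per dollar of revenue above the floor
CAP = 250e6          # contractual cap on the total earnout (US$)

def earnout(revenue: float) -> float:
    """Return the earnout payable for a given revenue figure."""
    return min(max(revenue - THRESHOLD, 0.0) * RATE, CAP)
```

On these assumptions, the US$191.8 million base-case payout would correspond to revenue of roughly US$131 million (US$69.8m plus US$191.8m divided by 3.12) — comfortably short of the cap, which is not reached until revenue approaches US$150 million.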
GPT enters the chat
Krafton CEO Changhan Kim, who had personally led the Unknown Worlds acquisition, went looking for an exit strategy. When the pesky folks in legal told him2 that terminating the founders on a pretext would not eliminate the earnout obligation and would expose the company to ‘lawsuit and reputation risk’, Mr Kim did not accept that advice.
Instead, he turned to ChatGPT.3
ChatGPT obliged with a multi-stage strategy that Mr Kim dubbed Project X. The chatbot’s recommendations included locking down Steam publishing rights to prevent the game’s release, framing the conflict publicly as being about “fan trust” and “quality” rather than money, preparing legal materials responding to “breaches” that had not even occurred yet, and ultimately firing the founders on manufactured pretexts.
Over the following month, Krafton implemented most of ChatGPT’s recommendations. The founders were fired and replaced on the board, and the studio was locked out of its own publishing platform.
A public message – also drafted with ChatGPT’s help – was posted on the game’s website, alarming fans rather than winning them over. Mr Kim later admitted at trial that he had deleted specific ChatGPT chat logs relating to the dismissals.
The judgment
The Court rejected all of Krafton’s stated reasons for the terminations, describing them as “newly manufactured justifications”. It ordered the reinstatement of studio CEO Ted Gill with full operational control, restoration of his access to Steam, and an extension of the earnout accrual period by 258 days to account for the disruption Krafton had caused.
The Court had the benefit of the ChatGPT chats to establish Krafton’s objectives, the CEO’s state of mind and the intent to manufacture an excuse to terminate. The close correlation between the AI-generated plan and the company’s actions showed that this went beyond abstract brainstorming. Deleting the chats in an attempt to hide the smoking gun was the final ingredient needed to shred the Krafton CEO’s credibility in the eyes of the Court.
The views of the legal advisors who told the CEO not to do it have not been reported.
Lessons for Queensland practitioners
Several takeaways are worth noting.
- First, AI tools are not legal advisors. ChatGPT will generate a confident, structured, plausible-sounding strategy in response to virtually any prompt. It will not tell you that the strategy is a breach of contract, that it will fail under judicial scrutiny, or that following it will result in an adverse judgment potentially worth hundreds of millions of dollars. It has no professional obligations, no practising certificate, and no insurance.
- Second, AI chats are not privileged. Clients should understand that the protections that may apply to discussions with a human lawyer do not extend to legal questions posed to a chatbot. For a template warning on this point that might be useful for “mum and dad” clients, see: qls.com.au/content-collections/template/qls-warning-to-clients-use-of-ai-tools-template.
- Third, the deletion of ChatGPT logs is a discovery and evidence preservation issue that lawyers should consider. As AI tools become embedded in commercial decision-making, the chat histories behind those decisions will increasingly become discoverable material.
This case is a cautionary tale worth sharing: the most expensive legal advice your client ever receives may be the free kind they get from a chatbot.
Footnotes
1 Fortis Advisors LLC v Krafton Inc CA 2025-0805 (Judgment Jan 9 ’26).
2 The conclusions reached were relayed through the business development team, eroding privilege.
3 How this came to light is not set out in the judgment. Reference is made to deletion and attempted concealment of GPT chats admitted by the CEO at trial.