Summary
Gwinnett County, Georgia, Superior Court Judge Tracie Cason denied OpenAI’s motion to dismiss a defamation lawsuit. Companies should keep in mind that AI “hallucinations” may expose providers of generative AI to liability.
In June 2023, radio host Mark Walters filed a defamation lawsuit against OpenAI in Gwinnett County Superior Court in Georgia, alleging that ChatGPT produced a response suggesting Mr. Walters was embezzling from the Second Amendment Foundation (SAF), a gun rights advocacy group. In the complaint, Mr. Walters alleged that a ChatGPT user, researching ongoing litigation involving the SAF, submitted a prompt to ChatGPT, and that in response ChatGPT hallucinated defamatory material falsely implicating Mr. Walters in an embezzlement scheme. Specifically, the user asked ChatGPT to provide a summary of the SAF complaint. In response, ChatGPT produced factually inaccurate text describing the SAF complaint as “a legal complaint filed…against Mark Walters, who is accused of defrauding and embezzling funds from the SAF…The complaint alleges that Walters….misappropriated funds for personal expenses and without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures to the SAF’s leadership.” When asked, ChatGPT went on to provide the full text of the entirely fabricated legal complaint.
In fact, Mr. Walters is not a party to the SAF litigation, has never held any position at the SAF, and is not accused of defrauding or embezzling funds from the SAF. The fabricated complaint and the summary of it are examples of “hallucinations,” instances in which a generative AI program invents facts. These hallucinations occur because ChatGPT does not function like a search engine; rather, it uses natural language processing and other techniques to predict the text the user would like to see based on the prompt.
Mr. Walters alleges that by generating the fabricated complaint, OpenAI published libelous matter. In January 2024, Judge Cason denied OpenAI’s motion to dismiss the defamation lawsuit, allowing the case to proceed. In its motion to dismiss, OpenAI argued that it was not liable for defamation because the user who prompted ChatGPT knew the hallucinations were false. OpenAI also asserted that there should be no liability because the ChatGPT terms of use alert users that ChatGPT “is not fully reliable (it ‘hallucinates’ facts and makes reasoning errors)…[and] care should be taken when using language model outputs, particularly in high-stakes contexts.”
Mr. Walters’ lawsuit is likely one of many cases that will test where liability falls when generative AI creates false and potentially damaging information. Many internet-based companies that publish content seek protection from defamation and libel suits under Section 230 of the Communications Decency Act. That 1996 federal law protects internet platforms from liability for content created by their users. It has yet to be determined, however, whether the law extends that protection to generative AI.
Copyright © 2024 by Ballard Spahr LLP.
www.ballardspahr.com
(No claim to original U.S. government material.)
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, including electronic, mechanical, photocopying, recording, or otherwise, without prior written permission of the author and publisher.
This alert is a periodic publication of Ballard Spahr LLP and is intended to notify recipients of new developments in the law. It should not be construed as legal advice or legal opinion on any specific facts or circumstances. The contents are intended for general informational purposes only, and you are urged to consult your own attorney concerning your situation and specific legal questions you have.