Beware Fake Facts: AI Hallucination in Business

In 2019, Roberto Mata, flying on Avianca Airlines, suffered a knee injury after a serving cart struck him. He hired Steven Schwartz, a lawyer with more than three decades of experience at a respected firm. Schwartz filed a court brief filled with seemingly supportive case law, including “Petersen v. Iran Air,” “Varghese v. China Southern Airlines,” and “Martinez v. Delta Air Lines.” The citations perfectly bolstered his argument, until defense attorneys researched the cases and discovered that none of them existed.

Schwartz admitted he had relied on ChatGPT to draft the filing. Judge Kevin Castel pressed him to explain how such an experienced attorney could make such glaring mistakes. Schwartz then reenacted his exchanges with ChatGPT to the court:

Mr. Schwartz
“Is Varghese a real case?”

ChatGPT
“Yes, it is a real case.”

Mr. Schwartz
“Are the other cases you provided fake?”

ChatGPT
“No, the other cases I provided are real and can be found in reputable legal databases.”

The case, now widely known as Mata v. Avianca, has become a cautionary tale of “AI hallucination,” a term describing when a generative AI system presents nonsensical, inaccurate, or fabricated information as fact. Unlike search engines, which retrieve actual records, generative AI predicts words and phrases to mimic human language. These systems excel at producing structured, persuasive writing, but they have no built-in way to verify that the facts they generate, such as legal citations, actually exist.

Organizations cannot afford to ignore this reality. Without clear policies and training, employees may misuse AI tools, exposing the business to reputational damage and even professional malpractice. Vague guidelines do little to deter improper use; employees need explicit direction on where AI is appropriate and how its output must be checked.

Organizations must commit to three principles:

  • Verification is Essential: Employees must confirm AI-generated outputs against reliable, authoritative sources to ensure accuracy before using or presenting the information.

  • AI Limitations Must Be Understood: Employees should learn how AI systems function and remember that AI does not consult real-time databases. It generates responses from patterns in past data.

  • Professional Liability Still Applies: Professionals must understand they remain fully accountable for their work product, regardless of the tools used. Invoking AI as a defense does not excuse false or misleading submissions.

AI can be a powerful tool, but without responsibility, oversight, and verification, it can just as easily become a liability.


Security Team: We monitor threats, strengthen defenses, and deliver policies and training to help keep your business protected. With proactive support, expert guidance, and fast response times, we help prevent breaches before they happen and respond quickly when they do.

Network 1 designs, builds, and supports the IT you need to run your business more securely, productively, and successfully. Whether you want to outsource all of your IT needs to a reliable, responsive, service-oriented company or need to supplement the work of your internal IT staff, we will carefully evaluate where you are now, discuss where you want to go, and implement and support a plan to get you there with as little interruption as possible.
