May 27, 2025

By Daniel H. Ennis and Drew Stevens



Since its 2022 release, ChatGPT has revolutionized industries, but its integration into legal and business operations raises critical concerns. This article explores pressing AI legal issues, such as compliance with evolving laws, data protection, and the consequences of AI errors in real-world applications. Understanding these concerns is essential for businesses to navigate AI adoption successfully.

ChatGPT was first released to the general public in November 2022. Generally accepted as the first mass-market large language model (LLM) artificial intelligence (AI) program, ChatGPT and its competitors have reshaped how many industries integrate, or envision integrating, technology into their operations. Suddenly, outcomes that were once the realm of science fiction appeared achievable; however, the use of AI in practice comes with a myriad of legal issues to address, some more novel than others. This article discusses some of the most pressing AI legal issues.

Hallucinations and Accuracy
One of the primary concerns with AI is its propensity to provide incorrect or untrue responses, often referred to as hallucinations. Studies have put the hallucination rate for some prominent AI programs, depending on task and complexity, in excess of 25%, with some solutions achieving a lower error rate only by refusing to answer certain questions. In the legal field, hallucinations have been a significant problem, and in multiple cases the use of AI (and the resulting hallucinations) in court filings has led to adverse consequences:

  • A lawsuit in a New York court was dismissed as a result of hallucinated citations and the relevant lawyer was sanctioned $5,000 and required to write letters of apology to judges cited in the (hallucinated) cases.
  • Attorneys in a Wyoming court were sanctioned a total of $5,000 and agreed to pay legal fees and expenses of opposing counsel in responding to a brief that contained eight cases entirely created by AI.
  • An attorney in an Indiana court faced aggregate sanctions of $15,000 for providing false case citations in three separate legal briefs.
As a result, some courts have begun to require attorneys to provide affidavits as part of their court filings attesting that their work product was either created entirely without the benefit of AI or that any case citations generated by AI were manually checked by the attorney.

AI hallucinations can also harm non-legal businesses. Air Canada, for example, was required to honor an inaccurate discount promised by an AI-driven chatbot hosted on its website, despite the correct (and less favorable to the customer) discount policy being displayed elsewhere on the site.

Vendor Contract Concerns
Fundamentally, AI is a software product, with all the related contract procurement issues associated with purchasing software. A non-exhaustive list of issues to consider when contracting for any AI product would include:
  • Service level agreements, providing for minimum product standards, uptime requirements, product support, and end-of-life treatment
  • Agreements on data ownership, access, and usage
  • Provisions setting forth the initial term of the contract and the process for renewals and termination, along with the amount and cadence of pricing increases (if any)
  • Clauses addressing the impacts of data breaches and related notification requirements
  • Indemnities by the AI product provider for, among other things, any copyright violations related to training the AI model

Privacy and Confidentiality
Another major headline issue with the introduction of AI is data privacy and confidentiality. Most publicly available AI programs retain all user data, which is then used to further train the AI and may be incorporated into future responses provided to other users. In 2023, for example, Samsung banned its employees from using public AI products after multiple instances in which proprietary Samsung source code was entered into those products for editing purposes and subsequently reproduced for other users. Subsequent developments in “closed” AI models (where user inputs are encrypted and not used to train the AI tool) have reduced this concern, although this approach may not allow the AI model to grow by including those inputs in potential outputs. A closed AI model also often comes with significantly higher costs and maintenance burdens.

Users of AI products also may have independent contractual duties not to use data otherwise in their possession to train AI programs. For example, some companies require their vendors (including law firms) to acknowledge that the company, not the vendor, owns all data associated with or developed by the vendor in connection with matters for that company. Agreements between a company and its customers also may contain confidentiality clauses that prohibit the company from using customer information to train its AI program, given the risk that customer confidential information could be disclosed to a third party as part of the AI model’s output.

The article continues with the Applicable Law subhead on page 26.

About the Author

Dan Ennis is a partner with Parker, Hudson, Rainer & Dobbs. He is a lending attorney with a singular focus on closing lending transactions. He works with banks, finance companies, and other lenders to provide the secured debt necessary to power growing businesses across the U.S. and internationally.

Having a concentration in lending work, Dan helps his clients through transactions with a wide range of size and complexity. His deals include single-lender and syndicated credit facilities, often secured by real and personal property. 

An accomplished civil litigator, Drew Stevens is of counsel with Parker, Hudson, Rainer & Dobbs and has represented clients in complex commercial litigation across multiple industries. Drew has represented franchisors and other employers in enforcing post-termination noncompetes, commercial real estate developers in leasing and construction disputes, and hospitals and health systems in defense of claims under the False Claims Act. Drew also has extensive experience litigating intellectual property disputes, particularly trademark infringement matters, and his practice includes national insurance-coverage disputes in the context of mass torts, in addition to class-action defense.