Addressing Legal Challenges in Utilizing AI
May 27, 2025
By Daniel H. Ennis and Drew Stevens

Since its 2022 release, ChatGPT has revolutionized industries, but its integration into legal and business operations raises critical concerns. This article explores pressing AI legal issues, such as compliance with evolving laws, data protection, and the consequences of AI errors in real-world applications. Understanding these concerns is essential for businesses to navigate AI adoption successfully.
ChatGPT was first released to the general public in November 2022. Generally accepted as the first mass-market large language model artificial intelligence (AI) program, ChatGPT and its competitors have rewritten how many industries integrate, or envision integrating, technology into their operations. Suddenly, outcomes that were once only the realm of science fiction appeared achievable; however, the use of AI in practice comes with a myriad of legal issues to address, some more novel than others. This article discusses some of the most pressing of those issues.
Hallucinations and Accuracy
One of the primary concerns with AI is its propensity to provide incorrect or untrue responses, often referred to as hallucinations. Studies have put the hallucination rate for some prominent AI programs, depending on the task and its complexity, in excess of 25%, with some solutions achieving a lower error rate only by refusing to answer certain questions. In the legal field, hallucinations have been a significant issue, and there have been multiple cases where the use of AI (and the hallucinations that followed) in court filings led to adverse consequences:
- A lawsuit in a New York court was dismissed as a result of hallucinated citations; the responsible lawyer was sanctioned $5,000 and required to write letters of apology to the judges cited in the hallucinated cases.
- Attorneys in a Wyoming court were sanctioned a total of $5,000 and agreed to pay the legal fees and expenses opposing counsel incurred in responding to a brief that contained eight cases entirely fabricated by AI.
- An attorney in an Indiana court faced aggregate sanctions of $15,000 for providing false case citations in three separate legal briefs.
Vendor Contract Concerns
Fundamentally, AI is a software product, carrying all the contract and procurement issues associated with purchasing any software. A non-exhaustive list of issues to consider when contracting for any AI product would include:
- Service level agreements, providing for minimum product standards, uptime requirements, product support, and end-of-life treatment
- Agreements on data ownership, access, and usage
- Provisions setting forth the initial term of the contract and the process for renewals and termination, along with the amount and cadence of pricing increases (if any)
- Clauses addressing the impacts of data breaches and related notification requirements
- Indemnities by the AI product provider for, among other things, any copyright violations related to training the AI model
Data Privacy and Confidentiality
Another major headline issue with the introduction of AI is data privacy and confidentiality. Most publicly available AI programs retain all user data, which is then used to further train the AI and may be incorporated into future responses provided to other users. As an example, Samsung banned its employees from using public AI products in 2023 after multiple instances in which proprietary Samsung source code was entered into those products for editing purposes and subsequently reproduced for other users. Subsequent developments in “closed” AI models (where user inputs are encrypted and not used to train the AI tool) have reduced this concern, although a closed approach may not allow the model to improve by incorporating those inputs into future outputs, and it often comes with significantly higher costs and maintenance burdens.
Users of AI products also may have independent contractual duties not to use data otherwise in their possession to train AI programs. As examples only, some companies require their vendors (including law firms) to acknowledge that the company, and not the vendor, owns all data associated with or developed by that vendor in connection with matters for that company. Agreements between a company and its customers also may contain confidentiality clauses that prohibit the company from using customer information to train its AI program, given the risk that confidential customer information could be disclosed to a third party as part of the AI model’s output.
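Given these obligations, one practical safeguard is to scrub confidential identifiers from any text before it is submitted to a public AI tool. The sketch below, in Python, is purely illustrative and assumes nothing about any particular vendor's API; the redaction patterns are hypothetical placeholders that would need to be tailored to the data a company actually handles.

```python
import re

# Illustrative only: scrub obvious confidential identifiers from text before
# it is submitted to any external AI service. These patterns are hypothetical
# placeholders, not a complete compliance solution.
CONFIDENTIAL_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),           # U.S. Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[REDACTED-EMAIL]"),  # email addresses
    (re.compile(r"\bAcct\s*#?\s*\d{6,}\b", re.IGNORECASE), "[REDACTED-ACCT]"),  # account numbers
]

def scrub(text: str) -> str:
    """Replace each confidential pattern match with a redaction tag."""
    for pattern, replacement in CONFIDENTIAL_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

if __name__ == "__main__":
    prompt = "Summarize the dispute over Acct #12345678; reply to jdoe@example.com."
    print(scrub(prompt))
    # Prints: Summarize the dispute over [REDACTED-ACCT]; reply to [REDACTED-EMAIL].
```

In practice, this kind of filtering supplements, rather than replaces, the contractual protections discussed above.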