In 2026, it is common practice to grab your phone, tablet, or laptop and ask your chosen AI assistant a question. We use AI to get answers to simple questions, such as “What is the capital of Nebraska?” We also increasingly turn to AI for answers to complex and nuanced questions, such as “What is the most likely way that the world will end?” Although experts warn us not to turn to AI for answers to medical and legal questions, people ignore those warnings. As attorney Paul Reed explains on the latest episode of Lawyer Podcast, however, turning to AI for legal advice could be much more detrimental to your case than you realize.
Turning to AI for Legal Advice
Whether it’s Alexa, Google, or ChatGPT, most people have turned to some form of AI assistance for legal advice at some point. After all, why pay an attorney’s hourly fee when you can just ask your phone about the legality of something you are planning to do? Likewise, why spend the time required to meet with an attorney when your computer can tell you whether you have the basis for a lawsuit? Leaving aside the question of how accurate an AI assistant’s answers are, “talking” to an AI assistant about your legal issues could harm you in ways you may not have considered. Specifically, your “conversations” with an AI assistant could be used against you in any pending or subsequent legal matter.
United States v. Heppner (No. 25 Cr. 503, S.D.N.Y. Feb. 17, 2026)
The U.S. District Court for the Southern District of New York recently handed down an opinion that should give everyone who “chats” with AI pause. In that case, the CEO of a large corporation learned that a grand jury was meeting to consider indicting him on a variety of white-collar crimes, including securities fraud, making false statements, and wire fraud. In an attempt to determine how much trouble he was potentially in, the CEO turned to his AI assistant, “Claude.” He apparently engaged in lengthy and detailed conversations with Claude about his allegedly illegal activities and what the potential penalties might be if he were convicted. The CEO was subsequently indicted, and the prosecutor learned about his conversations with Claude and subpoenaed the records of those conversations. The CEO hired a real-life criminal defense attorney, who argued that the government was not entitled to the records because the attorney-client privilege applied and the conversations were attorney work product.
For a communication to be protected by the attorney-client privilege, the communication must:
- Be between an attorney and a client.
- Be kept confidential.
- Be for the purpose of obtaining legal advice.
The CEO was forced to admit that Claude was not, in fact, an attorney. The court went on to point out that the conversations likely failed the other two elements as well. The AI platform’s terms of service specifically state that the company collects and shares user data, meaning the conversations were not confidential. Finally, if the CEO’s attorney had directed him to consult Claude, the conversations might have satisfied the third criterion; because no actual attorney sent him to Claude, however, they failed on that element too.
The Moral of the Story – Stay Away from AI for Legal Matters
Although most of us are not CEOs with millions of dollars on the line, the takeaway from the Heppner case applies to everyone. “Conversations” you have with an AI assistant could come back to haunt you if you end up litigating the issues you discussed. For example, imagine that you were injured in a car accident and, instead of asking an attorney, you ask your AI assistant how someone walks when they have nerve pain in their leg. Your intention may have been to find out whether your symptoms are common, but to a prosecutor or opposing counsel, it could sound as if you were researching how to fake an injury. You may know better than to post on social media about an accident in which you were injured, but it is now clear that the same warning should apply to AI searches and conversations. In short, operate on the assumption that everything you do on your phone, laptop, or tablet may be used against you in court. While it may be easier and quicker to ask ChatGPT for legal advice, the only way to protect yourself as a litigant is to save your questions for your attorney.
If you have questions or concerns about avoiding AI when you have an open personal injury case, contact the experienced personal injury attorneys at Reed & Reed.