Introduction

Recently, I have been noticing it more and more often in my practice as a personal injury lawyer: clients who have "talked" to ChatGPT (or some other tool) before contacting me. Clients who have ChatGPT write their answers to my questions. Clients who submit my advice to a chatbot for review. Or clients who have their compensation calculated by an AI tool.

Let me say up front: I completely understand. You are in a crisis, a deeply uncertain situation, and full of questions. Through your laptop, you have access to a tool that answers, within seconds, all the questions that have been on your mind for so long. Why not take advantage of that? An understandable thought, but with this blog I also want to warn you: artificial intelligence can help you gain some insight into your situation. At the same time, it can also provide false information that can damage your case (or, perhaps more importantly, your expectations of it). We want to avoid that.

OpenAI draws its own line

My warning rests on an important change from OpenAI itself. OpenAI is the organization behind ChatGPT that conducts research in the field of artificial intelligence (AI) and its application. Late last year, OpenAI made an interesting change to its usage policy. ChatGPT may no longer be used for "providing customized advice requiring a license, such as legal or medical advice, without adequate involvement of a licensed professional." In plain terms, this means that ChatGPT may still help you with medical or legal questions. But tailored legal (and medical) advice for your specific situation may only come from a licensed professional, such as a lawyer or doctor.

Why does OpenAI do this? Because it knows things can go wrong. OpenAI does not want to bear responsibility for any harm a user suffers, for example when that user acts on ChatGPT's legal advice and is harmed as a result. By drawing this line sharply in its terms of use, OpenAI is trying to hedge against liability. In doing so, OpenAI seems to implicitly acknowledge that legal advice from AI does not carry the same weight as advice from a professional. That deserves our attention: when even the company behind ChatGPT says you should be careful with this, it sends an important message.

Why ChatGPT doesn't fully understand your case

That signal is not without reason. Let me explain with a real-life example. I get a call from a client who says, "I have already done a quick search through ChatGPT to find out about liability in my case." The client then describes the extensive output on medical malpractice, liability and compensation. I hear things like, "In medical errors, the doctor's liability is never established beyond 80%, so you are always stuck with 20% of your damages. There is case law on that." Such information is generated (perhaps even with references to case law) and it feels true, but it is not always accurate. In personal injury cases, -