Blog

ChatGPT in your personal injury case: convenient, but not without danger

Introduction

Recently, I have been noticing it more and more in my practice as a personal injury lawyer: clients who have ‘talked’ to ChatGPT (or another tool) before contacting me. Clients who have ChatGPT write their answers to my questions. Clients who submit my advice to a chatbot for review. Or clients who have their compensation calculated by an AI tool.

First of all, let me say: I completely understand. You are in a crisis, a deeply uncertain situation, and you are full of questions. From your laptop, you have access to a tool that answers, within seconds, all the questions that have been weighing on you for so long. Why wouldn't you take advantage of that?

An understandable thought, but with this blog I also want to warn you: artificial intelligence can help you gain some insight into your situation. At the same time, it can also give you false information that could damage your case (or, perhaps more importantly, your expectations of it). That is what we want to avoid.

OpenAI draws its own line

My warning is prompted by a significant change made by OpenAI itself. OpenAI is the organization behind ChatGPT that conducts research into artificial intelligence (AI) and its application.

Late last year, OpenAI made an interesting modification to its usage policy: ChatGPT should no longer be used for “providing tailored advice requiring a license, such as legal or medical advice, without adequate involvement of a licensed professional.”

Freely translated, this means that ChatGPT may continue to help you with medical or legal questions. But legal (and medical) advice tailored to your specific situation may only come from someone who is qualified to give it, such as a lawyer or doctor.

Why does OpenAI do this? Because it knows things can go wrong. OpenAI does not want to bear responsibility for harm suffered by a user, for example when he or she acts on ChatGPT's legal advice and suffers damage as a result. By drawing this line sharply in its terms of use, OpenAI is trying to hedge against liability.

With this, OpenAI seems to implicitly acknowledge that legal advice from AI does not carry the same weight as advice from a professional. That deserves our attention: when even the company behind ChatGPT says you should be careful with this, it sends an important message.

Why ChatGPT doesn't fully understand your case

That signal is not without reason. Let me explain with a practical example.

I get a call from a client who says, “I've already done a quick search via ChatGPT on liability in my case.” The client then recounts the extensive output on medical malpractice, liability and compensation. I hear things like: “In medical errors, a doctor's liability is never established for more than 80%, so you are always left bearing 20% of your damages. There is case law on that.” Such output is generated (perhaps even with references to case law) and it feels true, but it is not always accurate.

In personal injury cases, as in all legal cases, “all the circumstances of the case” must be considered. You see this weighing of circumstances in almost every court decision. In other words, everything revolves around context. But that is exactly what AI often lacks.

You can perfectly well ask ChatGPT what the general rules are for compensation in cases of medical malpractice. You just cannot assume that the output applies to your case. After all, the tool does not know: what did the doctor tell you beforehand? What is and is not in your medical record? What does an independent medical specialist think about the causal link between the medical error and your injury? What reputation does the liable party have in the field? How does a particular judge deal with medical errors?

At Beer advocaten, we have specialized lawyers, a medical advisory team, extensive legal databases and a large network. AI tools lack all of that.

What AI can and cannot do

My message is not that you should avoid AI. I use AI tools myself for specific questions, always critically evaluating the output. Such tools are good for basic explanations of legal concepts, such as: “what does causation mean in medical liability law?” They can also be helpful for structuring and checking texts.

On the other hand, AI does not work well for (legal) strategic questions, case law research and, most importantly, getting to the bottom of your specific case. Worse, AI tools can produce “hallucinations” because they are designed to give you the answer you are looking for. For example, they can make up article numbers, cite statutory provisions that do not exist and even fabricate case law outright (complete with ECLI numbers).

In addition, an AI tool's knowledge is not always up to date. Remember that new case law appears daily, and laws regularly change, expire or are replaced. AI tools do not always pick this up immediately. It is therefore essential that the output be checked by a professional.

In conclusion

Does this sound familiar? Have you used ChatGPT and are now in doubt about how to proceed with your case? Or do you want to know how to use AI wisely? Feel free to talk to us about it.

Just let your lawyer know what you have found. Together you can work out what is accurate, what is not, and whether it can be used in your case. Because that is exactly what OpenAI now requires: let a professional provide the context needed to use legal advice responsibly.

If you have questions about this blog, please contact the author, Linde Mayer.