UF Law Professor Jiaying Jiang says legal inaccuracies generated by language models could expose attorneys to liability for the mistakes. “Clients trust their attorneys to act competently and diligently in their best interests. If an attorney relies on a language model without thoroughly reviewing and verifying the generated output, this could lead to malpractice claims.”
When GPT-4, the latest version of OpenAI’s language model systems, was released in mid-March, several aspiring lawyers and law professors used it to take the bar exam. The large language model chatbot passed every subject and outperformed roughly 90% of human test takers.
The news was undoubtedly startling, and it raised numerous questions. For most unassisted humans, preparing for the bar exam requires about 10 hours of studying a day for nearly three months, after completing three years of legal education. Suddenly, an artificial intelligence (AI) tool can pass it with ease.