A Vancouver lawyer, Chong Ke, is facing disciplinary action after using ChatGPT, an artificial intelligence (AI) chatbot, to help prepare legal arguments in a child custody case. The issue? The AI tool fabricated two of the cases Ke presented in court.
Ke, who was representing a father seeking permission to take his children abroad, used ChatGPT to find relevant case law. Opposing counsel, however, could find no record of the cases Ke cited, raising concerns about the accuracy of the AI-generated material.
While Ke said she was unaware that the cases were fabricated and apologized for the error, the incident highlights the potential dangers of relying on AI tools in the legal field without independent verification.
Here are the key takeaways:
- AI tools such as ChatGPT are prone to "hallucinations": they can confidently generate false information, including citations to cases that do not exist. Lawyers should use them with caution and independently verify everything they produce.
- Citing fabricated legal cases is considered a serious offense and can lead to disciplinary action.
- The Law Society of British Columbia is investigating the incident and emphasizes the importance of lawyers adhering to professional standards when using AI tools.
This incident serves as a cautionary tale for legal professionals exploring AI in their work. While AI can offer valuable assistance, it is no substitute for human judgment and thorough research.