Artificial intelligence (AI) is becoming part of just about everyone’s life in one way or another. Attorneys are increasingly using it to do their jobs and represent their clients more effectively.
There are growing concerns, however, about the confidentiality of information used by AI tools in law offices and other legal settings. The New York State Bar Association’s (NYSBA) Task Force on Artificial Intelligence recently published a report that included guidance to attorneys in the state on how to balance client privacy with the many advantages offered by AI.
The task force advised attorneys to determine whether an AI tool will in fact help them better represent a client before using it. They should also disclose any use of AI tools in a case to their clients.
If attorneys are going to use AI, according to the task force, they need to ensure that the appropriate people, including paralegals, know how to use it correctly. It’s also crucial to verify that any information provided by AI tools, such as case law research, is accurate.
The potential threat to confidentiality
The head of the NYSBA says, “AI can enhance the delivery of legal services. It obviously has enormous potential because it can already draft documents, conduct research, predict outcomes, and help with case management. However, we have an obligation as attorneys to be aware of the potential consequences from its misuse that can endanger privacy and attorney-client privilege.”
The head of the task force added that AI “at one moment awes us and the next fills us with anxiety. We are aware of the enormous impact it will have on our profession but are also familiar with the many risks it poses regarding confidentiality.”
Legislation is recommended to help ensure sufficient care with AI
There is agreement that while the NYSBA can make recommendations and update its Rules of Professional Conduct, the state legislature also needs to play a role in passing laws to help ensure the responsible use of AI by attorneys as well as judges and those who work for them.
Like any tool, AI is often only as good as the humans who use it. Ultimately, those humans are the ones who must take responsibility for any errors or problems that result. Whether the issue is a privacy breach or faulty information, an attorney can’t escape liability for harm to a client caused by their use of AI. If you’ve been the victim of such harm, it can be worthwhile to explore your options for seeking justice and compensation.