AI and the limits of professional responsibility

Oct 29, 2025

We invite you to read the letter to the editor from our senior associate in the Corporate and Business Group, Vicente Martínez, on the use of Artificial Intelligence in legal services.

Dear Editor:

Recently, a Big Four firm admitted to using artificial intelligence to prepare a report for the Australian government, a report that turned out to contain significant errors. The episode leaves an open question: are we relying too heavily on technology for tasks where human judgment is still necessary?

The case above contrasts with the advances being made in the use of AI in legal services. The most recent example is OpenAI's Contract Data Agent, an AI agent capable of transforming contracts into structured databases, identifying relevant clauses, dates, and risks. It is a powerful promise: to free lawyers from repetitive work so they can focus on strategic analysis, negotiation, and decision-making. However, that promise also raises dilemmas. What happens when an AI tool makes a mistake in a million-dollar contract? Who is liable for the damage: the lawyer, the company that developed the model, or both? Professional responsibility cannot be delegated to an algorithm.

Artificial intelligence applied to law has a bright future, provided we know how to integrate it wisely. Using it to review, organize, or classify documents can be transformative; delegating unsupervised decision-making to it, on the other hand, can be dangerous. The lesson to be learned from the consulting firm’s case is not to distrust AI, but to learn to coexist with it intelligently and within clear limits.

Letter written by:

Vicente Martínez | Senior Associate Corporate and Business Group | vmartinezw@az.cl

Source: Diario Financiero, October 29.
