Our partner Rodrigo Albagli participated in a new report by El Mercurio Legal magazine, where he addressed the challenges and opportunities posed by the growing use of Artificial Intelligence in law firms.
The applications of Artificial Intelligence (AI) span disciplines from medicine to education and, of course, the legal world. Having become a strategic ally in multiple areas (it automates tasks, analyzes large volumes of data, improves process accuracy, and optimizes time and resources), AI has been adopted by law firms to draft documents, review contracts, and search for case law, to such a degree that many firms conduct periodic training or count these tools among their fixed expenses.
Along with the benefits, challenges arise regarding its correct implementation. For example, the previous paragraph was drafted using a prompt (an instruction, question, or text used to interact with AI systems) and, after minor corrections, it reminds us that artificial intelligence, however inevitable it may seem, does not yet replace human work; it complements it.
Back in 2024, in the May edition of this magazine, several studies referred to the use of these technologies, noting that it was concentrated on routine, standardized matters, freeing lawyers to focus on tasks where they could "add value." Progress has been rapid, however: in just one year, that use has shifted from executive, administrative, and document-management functions to generative ones.
Given this scenario, warnings are being raised about the dangers of its use for both firms and their clients, and the safeguards that would need to be adopted.
For example, last September a U.S. court fined a California lawyer $10,000 (nearly CLP 10 million) for filing an appeal full of false citations generated by ChatGPT, and a few months earlier a federal judge had ordered two law firms to pay $31,100 for filings that relied on research also generated by AI.
Other cases, both in this and other countries, include various monetary penalties and even suspension from practice or a retrial. The days when those responsible simply apologized or were given fines that were more symbolic than punitive are likely coming to an end.
And although a 2024 analysis by Stanford University's Regulation, Evaluation, and Governance Lab (RegLab) and Institute for Human-Centered Artificial Intelligence (HAI) indicated that three out of four lawyers planned to use generative AI even though these systems "hallucinate" in one out of every six queries, experts warn that the biggest problem is not the fabrications themselves but the "blind trust" of the professionals who use them.
Analysis of investigative files and arguments for litigation, among the new features
For his part, the managing partner of Albagli Zaliasnik, Rodrigo Albagli, says that its use "in litigation and in structuring arguments represents a huge advance in terms of efficiency and data analysis," but warns that one of its risks "is excessive dependence on these tools, which can lead to a loss of legal judgment and critical analysis skills."
“The challenge lies in training teams and combining artificial intelligence with human intelligence, which remains irreplaceable in legal reasoning and strategic decision-making. Law, as a social science, is based on human relationships; therefore, completely delegating decisions to AI would pose a serious challenge to its foundations,” he explains.
Source: El Mercurio Legal magazine, December 2025.



