We invite you to read the column by our Civil Litigation and Arbitration Group associate, Alexis Salvo, on the risk that generative AI poses to documentary evidence: the emergence of documents that are “originally false.”
There is a technology that is challenging documentary evidence in our civil proceedings. I am talking about Generative Artificial Intelligence (“GenAI”). This is not a traditional algorithm that merely analyzes or classifies data; we are dealing with a technology capable of creating new, original content that is often indistinguishable from reality.
In the field of evidence, the landscape has changed. It is no longer just a matter of detecting whether a PDF has been altered, whether an image has been edited in Photoshop, or whether a signature has been forged by hand. The real challenge now is the creation of documents from scratch that appear completely legitimate. This is already happening on social media, where it is increasingly difficult to distinguish between what is real and what is fake, and it is beginning to happen in the legal sphere. Imagine a perfect synthetic invoice, an email with confidential information that was never sent, or a photograph of a delivery of goods that never existed. We are faced with information that, ironically, is “originally false.”
If a litigant submits one of these photos or documents in court, they will do so in the form of a private instrument. From a procedural standpoint, Article 346 of the Code of Civil Procedure governs: the instrument is deemed acknowledged if no objection is raised, and paragraph 3 offers two avenues for challenging it: alleging “falsity” or “lack of integrity.”
And this is where the rule gets complicated. Article 346 faces a conceptual tension it was never designed for, because the traditional grounds for challenge are insufficient given the nature of GenAI.
Let’s analyze “lack of integrity.” One reading of the rule presupposes that a genuine original document exists which someone has altered, crossed out, or modified. But a document created by AI is, technically, complete from its origin. The file was not manipulated; it was created from scratch. Can we really accuse a file that has undergone no modification of lacking integrity?
The same is true of “falsity.” One possible reading of the rule ties this ground to authorship (such as a forged signature) or to ideological falsity. Generative AI circumvents this control because it creates a perfect material falsity: it does not merely imitate a scrawl or a signature; it generates a complete reality (a meeting, a scene, a delivery) that visually resembles the truth. Traditional handwriting expertise is left with nothing to examine.
If a judge adheres to one of these interpretations, that synthetic image may not qualify as false or adulterated under current standards. It is original, created from scratch, even though it reflects a lie. Let’s put this into practice: imagine an ordinary debt-collection proceeding. The plaintiff submits, as a private instrument, a hyper-realistic AI-generated photograph that clearly shows the alleged delivery of goods at the defendant’s warehouses.
The defendant knows that this delivery never took place. But the document is flawless. In challenging it, the defendant faces a diabolical proof: being required to prove a negative fact (that the delivery did NOT take place) or to technically demonstrate the falsity of an image that has no visible “defects.” In practice, AI reverses the burden of proof: it forces the victim of the forgery to undertake a costly, expert-driven evidentiary effort to disprove a “reality” that the other party generated in seconds and at no cost.
Finally, how can the opposing party prove that the photo is fake? If they fail to do so, the judge, applying the maxims of experience and logic, could fall into a serious trap: validating a fact that never occurred. The risk lies not in the incompetence of the judge, but in the fact that “sound criticism,” which is grounded in human experience, is disabled when our own sensory experience is deceived by the perfection of the algorithm.
Thus, the challenge facing the law is not only technological, but existential: to prevent our justice system from ultimately validating, with the force of res judicata, a reality that was, from its conception, originally false.
Column written by:
Alexis Salvo | Associate Civil Litigation and Arbitration Group | asalvo@az.cl



