We invite you to read the column written by our senior associate at az Tech, Antonia Nudman, in which she discusses the draft law regulating artificial intelligence systems in Chile.
Amid the anticipation generated by the draft law regulating artificial intelligence systems in Chile, it is worth remembering a basic maxim: any regulation that seeks to provide certainty must clearly define its key concepts and the obligations it imposes.
If it fails to do so, it risks becoming an obstacle that burdens those who seek to innovate responsibly more than the actors it actually intends to regulate.
Today, one of the main points of concern is the definition of the key concepts that determine how the law's requirements apply in practice. A critical example is the current definition of "significant risk" in Article 3 of the bill.
According to the current text, it is understood as “risk resulting from the combination of its severity, intensity, probability of occurrence, and duration of its effects and its ability to affect one or more individuals.”
Although it may appear comprehensive at first glance, this wording is, from a technical and regulatory standpoint, overly broad and subjective. Variables such as "severity" or "duration of effects" lack objective parameters, opening the door to divergent interpretations by authorities and system operators.
Why does this matter? Because the application of measures such as audits, impact assessments, and mitigation plans depends on this concept.
If the threshold for "significant risk" is not defined with clear guidelines, the direct consequence is legal uncertainty for those who develop or implement AI, especially startups and SMEs, which need predictable criteria to plan investments, assess compliance, and decide where and how to innovate.
Chile already has valuable references. The Cybersecurity Framework Law sets objective criteria for impact on critical or vitally important systems, and Law No. 21,719 on Personal Data Protection, which has not yet entered into force, requires impact assessments for high-risk processing, establishing verifiable standards.
Even the European Union's Artificial Intelligence Regulation, which inspired part of this bill, precisely defines what constitutes "high risk" and ties it to specific sectoral contexts.
This is not about preventing subsequent regulations from detailing specific scenarios. Rather, it is about ensuring that the law—from its basic text—establishes clear guiding parameters, articulated with existing regulatory frameworks, that allow all operators to anticipate compliance scenarios and avoid disparate or discretionary decisions.
Comparative experience shows that a clear and consistent definition is not a minor technical detail: it is the basis for balancing innovation, trust, and the effective protection of rights.
Leaving key concepts overly open can create uncertainty for the entire chain of actors—developers, investors, users, and citizens. Chile has the opportunity to promote the adoption of AI-based technologies in a safe and responsible manner, fostering innovation without losing sight of clear and applicable standards.
Having precise definitions aligned with comparative best practices is an essential step in providing certainty and confidence to all sectors involved.
Column written by:
Antonia Nudman | Senior Associate az Tech | anudman@az.cl