Publication 19 Jan 2026 · Brazil

The use of AI in commercial contracts

7 min read


For a while, artificial intelligence ("AI") was treated as little more than a productivity accelerator.

Generative AI helped write emails, review documents, automate repetitive tasks, and reduce response times. That view is now narrow and outdated. AI has advanced rapidly and has become a relevant operational and decision-making component in companies' internal workflows.

In the context of commercial contracts, AI is no longer a mere instrument: it has begun to act as an element of contract regulation, influencing the formation of the contractual relationship and/or governing its execution.

The paradox is that, while AI has become a decision-making tool for companies, many commercial contracts are still negotiated as if AI played a minor role, with no real impact on how responsibilities are allocated in the contract.

The contract is how the parties regulate economic effects, allocate risks, and build predictability. And the technology literature has insisted on one point: the digital age has changed not only the form of the contract, which is now signed electronically, but the very structure of the risks involved.

It is worth noting that the advance of AI in contracts has occurred along two dimensions: first, AI as an object, when goods and services that use AI are contracted; and second, and deeper, when algorithms are no longer mere tools and become part of the very drafting of the contractual document and of the performance of the agreement between the parties.

Focusing on this second dimension, when we speak of the use of AI in the formation and performance of commercial contracts, two main fronts of use can be distinguished. On the one hand, AI serves as a support tool for drafting, reviewing, negotiating, and mapping contractual risks. On the other, it is a component of execution itself, influencing performance and decision-making, from automated recommendations on platforms to predictive analytics that direct routes, inventories, prices, and risks.

This duality is also a useful way of organizing the debate and distinguishing two situations. In the first, AI acts during negotiation, assisting in or conducting functional steps in the formation of the contract or the determination of its object, which can be called algorithmic negotiation. In the second, the algorithm is designed to govern the performance and management of the contractual relationship, automating compliance, monitoring, and performance triggers, which corresponds, in a stricter sense, to what are commonly called smart contracts.
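To make this second situation concrete, here is a minimal sketch, in Python, of how a performance trigger of the kind found in smart contracts might be encoded. The names (Delivery, apply_penalty), the 48-hour SLA, and the 2% penalty are hypothetical and serve only to show how an execution rule can be embedded directly in the relationship.

from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical performance trigger: a late delivery automatically generates a penalty.
# All names, thresholds, and values are assumptions made for this illustration.

@dataclass
class Delivery:
    promised_at: datetime
    delivered_at: datetime

def apply_penalty(delivery: Delivery, sla_hours: int = 48, penalty_pct: float = 0.02) -> float:
    """Return the penalty (as a fraction of the invoice) triggered by a late delivery."""
    delay = delivery.delivered_at - delivery.promised_at
    if delay > timedelta(hours=sla_hours):
        return penalty_pct  # the trigger fires with no human assessment of context
    return 0.0

late = Delivery(promised_at=datetime(2026, 1, 10, 9, 0), delivered_at=datetime(2026, 1, 13, 9, 0))
print(apply_penalty(late))  # 0.02, applied even if the delay had a legitimate cause

The sketch already hints at the legal problem discussed below: the trigger executes faithfully, but it cannot weigh context, good faith, or supervening circumstances.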

In both cases, what is at stake is not just efficiency: it is the way risks are distributed, and the false impression of neutrality that automation can generate.

Generative AI, in particular, combines traits that put pressure on the traditional model of contract law: systems that are more vulnerable and open to external interference, often opaque, endowed with operational autonomy, highly complex and, above all, unpredictable in their results at scale.

There is a technical point that helps to understand why contracts that use AI require different governance:

AI is not just automation

While traditional automation is built from previously defined alternatives (the classic "if A, then B"), AI breaks the predictability link between initial programming and results, because it learns and makes decisions in complex environments, with variables that change over time. The literature comparing traditional automated systems and AI systems is categorical: the former are predictable; the latter, when deployed in real, complex environments, do not allow all results to be anticipated with accuracy.
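A minimal sketch of this contrast, assuming a hypothetical price-approval scenario (all names, thresholds, and data are invented for the example): the rule-based version always gives the same answer to the same input, while the toy "learning" version re-estimates its cutoff from whatever it has observed, so its answer to the same input can change over time.

from statistics import mean

# Classic automation: a fixed "if A, then B" rule, fully predictable.
def rule_based_approval(price: float) -> bool:
    return price <= 100.0

# Toy "learning" rule: the cutoff drifts with the data the system observes.
class LearnedApproval:
    def __init__(self) -> None:
        self.observed_prices: list[float] = []

    def observe(self, price: float) -> None:
        self.observed_prices.append(price)

    def approve(self, price: float) -> bool:
        cutoff = 1.2 * mean(self.observed_prices)  # depends on history, not on a fixed rule
        return price <= cutoff

model = LearnedApproval()
for p in (80.0, 95.0, 110.0):
    model.observe(p)
print(rule_based_approval(110.0), model.approve(110.0))  # False True
for p in (60.0, 55.0, 50.0):
    model.observe(p)
print(rule_based_approval(110.0), model.approve(110.0))  # False False: same input, new answer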


This detail has a direct impact on contract formation. In AI-mediated contracts, the declaration of will of the system's holder is not limited to the moment when a proposal or acceptance is issued. The will is formed in two moments: first, when the holder decides to use the system and defines objectives and parameters; and then, when the system acts in an automated way in constantly changing situations and generates proposals or acceptances that do not perfectly match all of the holder's initial plans.

This leads to the idea of an "electronic declaration of will": a way of attributing legal effects to the conduct of the automated agent and of understanding to what extent that action is attributable to its holder. The point is not to recognize a "will" of the machine's own, but to understand how the law deals with delegated decisions and with the rupture of linearity between human intention and the result of automated processing.
 

This framing helps explain why the limits of contractual automation are not merely technological: they are legal. AI can automate certain objective and repetitive obligations, but it tends to fail precisely where the law requires judgment: interpretation, context, good faith, supervening circumstances, and the balance of the bargain. And that is precisely where generative AI accelerates the automation movement while also amplifying uncertainty and the potential for error at scale.

This leads to a practical conclusion that is extremely useful today: contracts must provide for technical mechanisms of suspension and reversal. If the legal system preserves non-derogable defenses, an automated clause that blocks suspension in essential cases can be considered null and void; therefore, the technology must adapt to the rules of law, not the other way around.

For commercial contracts, this translates into clauses and operational designs that establish the following (a simplified code sketch appears after the list):

Kill switch (automatic suspension mechanism for critical situations)
Human fallback (manual intervention in defined events)
Reverse transaction, when possible
Triggers for review or renegotiation
Incident governance (time, evidence, communication, and remediation plan)
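
As a rough illustration only, not a definitive implementation, the sketch below shows how these controls might be wired around an automated contractual decision; the class and function names are hypothetical, and in practice each control would map to a contractual clause and an operational runbook.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

# Hypothetical wrapper around an automated contractual decision,
# illustrating the controls listed above. All names are assumptions.

@dataclass
class GovernedDecision:
    decide: Callable[[dict], str]        # automated decision logic
    human_review: Callable[[dict], str]  # human fallback for defined events
    kill_switch_on: bool = False         # suspends automation in critical situations
    incident_log: list = field(default_factory=list)

    def run(self, case: dict) -> str:
        if self.kill_switch_on:
            self._log(case, "kill switch active: routed to human")
            return self.human_review(case)
        decision = self.decide(case)
        if case.get("critical"):  # trigger for review or renegotiation
            self._log(case, f"critical case: automated output '{decision}' held for review")
            return self.human_review(case)
        return decision

    def reverse(self, case: dict, reason: str) -> None:
        """Reverse the transaction, when possible, leaving an evidence trail."""
        self._log(case, f"reversed: {reason}")

    def _log(self, case: dict, note: str) -> None:
        # Incident governance: timestamped evidence for communication and remediation.
        self.incident_log.append({"time": datetime.now(timezone.utc).isoformat(), "case": case, "note": note})

# Usage sketch
gd = GovernedDecision(decide=lambda c: "apply penalty", human_review=lambda c: "escalated to counsel")
print(gd.run({"id": 1}))                    # automated path
print(gd.run({"id": 2, "critical": True}))  # human fallback path
print(len(gd.incident_log))                 # 1 incident recorded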

Another essential point, especially in complex business contracts: automation and algorithms do not make the contract "neutral". A recurring misconception, fueled by techno-optimistic discourse, is to imagine that the use of AI reduces asymmetries. In practice, AI can in fact consolidate unilateral models, reduce real negotiation, and convert autonomy into formality, installing a peculiar "technological asymmetry" with direct repercussions on the assessment of contractual fairness and the integrity of private autonomy. Protecting the will and the integrity of private autonomy matters precisely to prevent the exercise of autonomy from being reduced to a simulation.

The popularization of generative AI has brought a specific risk: the feeling that technology can replace professional judgment. AI produces plausible texts and suggests drafts quickly. But it does not assume due diligence, does not answer for strategic decisions, and does not understand, by itself, the commercial and regulatory context. A contract is not just form; it is risk management.

And there is an even subtler risk: AI does not just "create texts"; it replicates patterns. By learning from contractual models and recurring language, it tends to reproduce boilerplate clauses that carry hidden defects: disproportionate liability caps, silent waivers, governance gaps, concepts imported from other legal systems, and internal inconsistencies. It does this in seconds, with a polished formal appearance and elegant language. AI can "make a text that contains strategic imbalances or weaknesses seem appropriate", with sophisticated language and the appearance of a market standard. When the legal department lowers its level of review because it trusts the fluency of the output, AI can amplify risks rather than mitigate them. Many contract problems are less "textual errors" than strategic choices: allocation of responsibility, data governance, auditing, remediation, and economic predictability.

And there is an important final point: this is not about demonizing the use of AI in contracts. The use of AI can be positive when it expands the ability to perform obligations accurately and on time, reduces costs, and improves predictability.

But this only works when the contract preserves human flexibility where the law requires it and where the market itself demands it: to correct injustices, react to critical events, and protect trust.

In short, the commercial contract in the age of AI needs to do three things at once: enable innovation, protect autonomy, and govern opacity.

All this discussion may seem, at first glance, like more legal complexity. In practice, it is the opposite. Well-designed contracts generate fewer incidents, less litigation, and more predictability; they protect strategic data, preserve negotiating room, and reduce the hidden costs of disputes and failures.

The goal is not to eliminate uncertainty. AI will remain probabilistic. The goal is to make its use governable: risks allocated transparently, controls compatible with the technology, and remediation mechanisms.

Ultimately, the question that will differentiate companies in the next wave of transformation will not be whether they use AI; it will be whether their contracts allow AI to be used safely, traceably, and responsibly, and whether corporate legal can turn this discipline into a business advantage.
