When the Arbitrator Is an Algorithm: Legal and Ethical Challenges of Arbitration by Artificial Intelligence in Brazil



Bárbara Janne Fonseca da Silva[1]

Bruna Novaes Beginsky[2]

Gabriela de Oliveira Fernandes[3]

 

The development of Artificial Intelligence (AI) tools has recently sparked significant debates about their use in the legal field. Especially in highly complex disputes – which require a large volume of documentary production – discussions have focused on the feasibility of employing these tools as support instruments for arbitrators.

In this sense, AI would function as an information-filtering mechanism, designed to facilitate the analysis of the large volume of documents produced and, consequently, to increase the efficiency of arbitral proceedings. However, while the debate over using AI as a support tool for arbitrators is still unfolding in Brazil, the U.S. experience has already opened up a new possibility: conducting arbitration exclusively by machines.

The use of electronic means in alternative dispute resolution is not new. Online Dispute Resolution (ODR), for example, dates back to the 1990s, when early efforts sought to simplify the resolution of disputes arising in virtual contexts[4]. Nevertheless, even though they operated in a fully electronic environment, such platforms still involved a high degree of human intervention.

The evolution of AI tools, however, has removed the human arbitrator from the position of impartial decision-maker and placed an algorithm in its stead. The new alternative is tempting. After all, what could be more efficient than an AI capable of resolving a complex dispute in a few days?

At first glance, replacing the human arbitrator with an algorithm seems to meet the demand for speed in dispute resolution. Using construction arbitrations as a reference, the benefits of issuing a quick and, at first sight, qualified decision appear undeniable, especially in the context of disputes that last many years.

It is precisely in the context of a demand for efficiency that platforms like Arbitrus.ai[5] have emerged. Developed to resolve disputes through AI, the platform presents itself as a faster and more economical alternative compared to traditional arbitration or the Judiciary.

With a large document-processing capacity, the platform accepts the submission of pleadings and even the holding of hearings, promising to settle the dispute with a decision within seven days. Moreover, according to its developers, the AI’s decisions would have a high level of quality, since they are based on a “deep knowledge of the law” built from a “database with thousands of cases, statutes and regulations”[6] – the decisions, finally, are submitted to human review before publication.

Although the use of an algorithm appears to be an attractive proposal, its implementation is hardly compatible with the ideals that guide arbitration in the Brazilian context. Indeed, the idea of an AI acting as an arbitrator challenges the very notion of private jurisdiction. The issue, however, is not merely theoretical: it involves restrictions imposed by the Brazilian legal order.

Article 13 of Law No. 9.307/1996 establishes that “any capable person who has the confidence of the parties may be an arbitrator.” The requirement of a “capable person” is not merely formal: it derives from civil capacity, which presupposes legal personality and discernment.

AI, however sophisticated, possesses neither a will of its own nor legal responsibility. Thus, a proceeding conducted entirely by algorithmic systems would not only result in a null “award”[7], but would also fail to meet the minimum elements required for recognition as arbitration in the technical sense.

This is what is observed in automated dispute resolution models used by some digital platforms, which employ the term “arbitration” broadly to designate instant decisions between users and providers. In such cases, these are private conflict-management mechanisms, but not arbitration, precisely because of the absence of a human arbitrator vested with private jurisdiction. In this scenario, any “decisions” thus issued do not fall under the legal regime of arbitration and, if presented as such, could be considered null from the outset.

However, even if a future legislative amendment were contemplated to allow the figure of an “artificial arbitrator,” complex legal and ethical challenges would arise. The first is liability for erroneous decisions. AI has no legal imputability: there is no intent, fault, or duty of care that can be attributed to an algorithm. If an unjust or incorrect decision were made, who would be held responsible? The programmer who wrote the code? The company that marketed the software? Or the arbitral institution that used it?

The Brazilian system lacks adequate mechanisms to frame errors arising from automated decisions. It would be necessary to rethink the foundations of civil liability toward a model of liability for technological risk, which recognizes the systemic nature of the harm caused by algorithmic decisions.

The second challenge is impartiality and algorithmic bias. Article 14 of the Arbitration Act requires that the arbitrator be independent and impartial[8]. In the case of AI, that guarantee depends on the neutrality of the data and the logic of the algorithm, elements that are, as a rule, inaccessible to the parties and susceptible to bias.

These learning systems reproduce patterns from the training data provided, which can generate distortions or unintended discrimination. The opacity of these systems compromises transparency and obscures the grounds that led to the outcome. In an arbitral context, this would directly affect the duty of coherence that legitimizes the judgment.

On the ethical plane, the most sensitive issue arises: the dehumanization of the act of judging. Deciding is not merely applying rules, but weighing values, interpreting contexts, and exercising prudence. Replacing the arbitrator with a machine turns judgment into a statistical operation, depriving justice of its human and relational dimension. As Carmona and Vieira point out, technology can and should be used as support, but not as a substitute for human reason in the decision-making process[9].

Considering this, Article 20 of the LGPD adds an important normative layer: by guaranteeing the data subject the right to review decisions made solely by automated processing, the law signals that the full autonomy of decision-making systems still encounters clear legal limits.

The presidential veto of §3 of Article 20, which would have required that such review be carried out by a natural person, does not authorize the replacement of the adjudicator by machines; rather, it preserves room for future models of hybrid or AI-assisted review. Thus, although the LGPD does not legitimize an “artificial arbitrator,” it does not close the door to automated support mechanisms, provided they are subject to transparency, contestability, and supervision, reaffirming that the final decision remains a human act.

Finally, there are institutional implications. Arbitration is a form of jurisdiction recognized by the State, and its legitimacy depends on public trust. Transferring this function to an autonomous system, without transparency or the possibility of control, would jeopardize the very pact that underpins private justice.

Currently, Bill no. 2338/2023[10], known as the AI Legal Framework, is pending in Congress. The Bill classifies as “high risk” the use of AI systems by judicial authorities in decision-making processes involving the interpretation and application of laws, guaranteeing affected persons the right to an explanation of automated decisions, as well as the right to challenge them and to request human review. Although there remains room for interpretation, it can be argued that the same rules should apply to the private sector of arbitration.

The approval of the Bill, in its current wording, would prevent an arbitrator from fully delegating the conduct or decision of the process to Artificial Intelligence, imposing strict rules and limits on its use.

The future of arbitration tends to adopt a hybrid model, in which AI acts as a document-analysis tool and assists in the decision-making process. Technological innovations do have the capacity to make the process swifter, resolving impasses caused by human limitations, but it is necessary to ensure that these tools are guided by ethical principles and clear normative limits in order to avoid the dehumanization of justice.

 

 

[1] Attorney on the litigation and arbitration teams at Toledo Marchetti Advogados, graduated from the Law School of the University of São Paulo.

[2] Attorney on the arbitration and contract management team at Toledo Marchetti Advogados, graduated from the São Paulo Law School of FGV.

[3] Attorney on the arbitration team at Toledo Marchetti Advogados. Postgraduate student in Contract Law at FGV/SP and in European Union Law at the University of Coimbra.

[4] SOARES, Marcos José Porto. A theory for online dispute resolution (ODR). Journal of Law and New Technologies, São Paulo, no. 8, Jul./Sep. 2020. Available at: https://dspace.almg.gov.br/handle/11037/38405. Accessed on: Nov 2, 2025.

[5] ARBITRUS.AI. Available at: https://www.arbitrus.ai/. Accessed on: Nov 2, 2025.

[6] ARBITRUS.AI. Decisions are based on a thorough understanding of the law coming from our database of thousands of cases, statutes, and regulations. Available at: https://www.arbitrus.ai/. Accessed on: Dec 5, 2025.

[7] Brazilian Arbitration Act (Law No. 9.307/1996), art. 32: “An arbitral award is null if: [...] II – it was issued by someone who could not serve as an arbitrator.”

[8] Law No. 9.307/1996, art. 14, § 1: “Persons appointed to serve as arbitrators have the duty to disclose, before accepting the position, any fact that gives rise to justified doubt as to their impartiality and independence.”

[9] CARMONA, Carlos Alberto; VIEIRA, Vitor Silveira. Artificial intelligence and the arbitral process. In: VAUGHN, Gustavo; DUARTE, Rodrigo; ARRUDA, Raphael; COSTA, Fabio; MORELLO, Ana Vitoria, Coordinators. Law, Legal Market and Society: studies in celebration of the 3rd anniversary of the young lawyers group Leading Young Lawyers. São Paulo: LUALRI Editora, 2020, p. 398.

[10] BRAZIL. Bill No. 2,338, of 2023. Provides for the development, promotion, and ethical and responsible use of artificial intelligence based on the centrality of the human person. Available at: https://www.camara.leg.br/proposicoesWeb/prop_mostrarintegra?codteor=2868197&filename=PL%202338/2023. Accessed on: Nov 2, 2025.
