by Ryan Abbott and Brinson S. Elliott
Guidelines, rules, and standards are emerging for the use of AI in Alternative Dispute Resolution (ADR). Late last year, the United Kingdom issued guidance for courts, cautioning about risks and the responsible use of AI for legal research and analysis,1 and the California2 and Florida3 State Bar Associations released professional responsibility and conduct guidance for practitioners’ use of Generative AI. These guidelines came after judges in over a dozen United States federal courts issued standing orders on the use of AI.4 Some courts now prohibit litigants’ use of AI for filing preparation, while others require attorneys to attest that a human reviewed any Generative AI outputs used for document drafting. Other orders require disclosure and certification of AI-assisted research and verification that citations created using AI (generative or otherwise) are accurate. Some are questioning the breadth of these latter orders, noting they could require practitioners to disclose the use of any AI-assisted search engines and chatbots or “seemingly innocuous programs like Grammarly.”5 This variation is representative of broader debates around the use of emerging technologies in legal and dispute resolution contexts.
Background on AIDR
Despite all the attention paid to artificial intelligence (AI) and approaches to its governance and regulation, it continues to lack a generally accepted definition. We adopt a definition of AI that focuses on its functionality rather than how it was programmed, believing the law should focus on regulating AI behavior: An algorithm or machine capable of completing tasks that would otherwise require cognition.6
From the 1970s until recently, AI models used in ADR were primarily rules-based, requiring programmers to manually code all foreseeable inputs and outputs for any given dispute. These inputs and outputs resembled human if-then logic, linking facts, legal rules, and conclusions. The AI model documented its reasoning in a decision tree, making it explainable and traceable to human actors. One early AIDR system (ADR system utilizing AI) developed by the RAND Corporation required several thousand if-then rules, exemplifying the level of technical skill required to build a system capable of handling even relatively straightforward disputes in narrowly defined areas with known parameters.7
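By way of illustration only (and not drawn from the RAND system itself), a minimal sketch of this rules-based approach might hard-code a few hypothetical if-then rules for a simple security-deposit dispute and record which rules fired, so the reasoning chain remains traceable:

```python
# Minimal sketch of a rules-based AIDR engine (hypothetical rules, illustrative only).
# Each rule is an if-then pair: a condition over the case facts and a conclusion.
# Fired rules are logged so the chain of reasoning stays explainable.

def evaluate(facts: dict) -> dict:
    rules = [
        ("damage beyond normal wear",
         lambda f: f["damage_found"] and not f["normal_wear"],
         "landlord may deduct repair costs"),
        ("deposit returned late",
         lambda f: f["days_to_return"] > 30,
         "tenant entitled to statutory penalty"),
        ("no itemized statement provided",
         lambda f: not f["itemized_statement"],
         "deductions presumed invalid"),
    ]
    fired = [(name, conclusion) for name, cond, conclusion in rules if cond(facts)]
    return {"conclusions": [c for _, c in fired], "reasoning": [n for n, _ in fired]}

case = {"damage_found": True, "normal_wear": False,
        "days_to_return": 45, "itemized_statement": True}
print(evaluate(case))
# {'conclusions': ['landlord may deduct repair costs', 'tenant entitled to statutory penalty'],
#  'reasoning': ['damage beyond normal wear', 'deposit returned late']}
```

Every output can be traced back to a named rule, but the system can do nothing its programmers did not anticipate.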
Modern AI systems, including the foundation models underpinning ChatGPT, Google Bard, and Microsoft CoPilot, have relied on machine learning, which uses statistical methods to make classifications or predictions. The capabilities of such systems have improved dramatically in recent years largely due to the availability of Big Data (voluminous and complex datasets) used to train machine learning-based systems, coupled with advances in software designs and greater availability of computing power.
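A machine-learning system, by contrast, infers statistical patterns from examples rather than following hand-coded logic. The following minimal sketch, using synthetic data and scikit-learn purely for illustration, trains a simple classifier on hypothetical past cases and outputs a probability for a new one:

```python
# Minimal sketch of a machine-learning predictor (synthetic data, illustrative only).
# Instead of hand-coded if-then rules, the model learns statistical patterns from
# past cases and outputs a probability for a new, unseen case.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "historical cases": [claim_amount_in_thousands, days_delayed, prior_disputes]
X = rng.normal(loc=[10, 20, 1], scale=[5, 10, 1], size=(500, 3))
# Synthetic outcomes (1 = claimant prevailed), loosely correlated with the features
y = ((0.05 * X[:, 0] + 0.02 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(size=500)) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

new_case = np.array([[12.0, 35.0, 0.0]])
print(model.predict_proba(new_case)[0, 1])  # estimated probability claimant prevails
```

The trade-off is that the learned weights, unlike explicit if-then rules, do not come with an inherently human-readable justification.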
Beyond the rapid acceleration in the development of emerging technologies, including those with applications in legal and dispute resolution contexts, circumstances such as the Covid-19 pandemic have facilitated the uptake of online dispute resolution (ODR) systems, such as those that leverage AI for document-sharing, video conferencing, and case intake.8
Approaches to Regulating ADR and AIDR
In the absence of agreed-upon and enforceable qualification and licensing requirements, standards for neutral behavior and responsibilities, procedural safeguards of adjudication, and judicial review except in instances of neutral misconduct, some have concluded that ADR is subject to little to no regulation, authority, standards or monitoring, making it an “informal system”9 and a “largely unregulated industry” operating behind closed doors.10 Some commentators also argue that the “breadth, reach and enforcement mechanisms” for existing private and court ADR rules of practice make “an ethics of ADR become highly pluralistic, substantively conflictual and procedurally cumbersome.”11 For these reasons, some question the quality of ADR in the absence of procedural and institutional safeguards and enforcement mechanisms.12
Although ADR is not formally regulated in the same manner or to the same extent as legal practice or traditional litigation, existing laws apply despite not being ADR-specific. Examples include professional standards for licensed attorneys working in ADR and laws protecting the use of information in ADR proceedings. These rules and standards also apply to the development and use of AI systems in ADR, as do some existing and emerging standards and regulations that specifically govern the design, development, and deployment of AIDR systems.
Spectrum of AIDR Systems
How AI impacts ADR processes and the role of the neutral (e.g., third-party negotiator, mediator, or arbitrator) depends, among other things, on the type of technology, the functions and purposes for which it is used, and the opportunities for human oversight and intervention. It is helpful to think of AIDR as existing on a loose spectrum:
Assistive technologies, which can support, inform, or make recommendations to neutrals, occupy one end, and automative technologies, which can partially or fully automate discrete tasks and, in some cases, even replace neutrals, occupy the other.13
Assistive technologies can help reduce the burden of high-volume repetitive tasks, including time-consuming administrative and procedural requirements (e.g., case-intake and document management), and provide neutrals with informational resources that support informed, accurate, and efficient decision-making. Because these technologies neither fundamentally alter ADR processes nor determine case outcomes, their development and use are generally supported in the ADR literature.14
Beyond benefiting neutrals’ workflows, assistive technologies can make ADR processes more accessible to disputants, who sometimes pursue ADR instead of traditional litigation because of its relative efficiency, affordability, and reliability.15 Assistive AIDR is therefore well positioned to meet ADR’s core objectives of providing disputants with a fair, efficient and economical resolution process.16
Automative technologies17 can help facilitate or independently perform legal research, document preparation and analysis, case negotiation, settlement, award and resolution plan drafting, and decision-making functions.18 AI systems are increasingly able to do this with a speed and scale that outpaces human ability, although it is generally difficult to find objective evidence of how accurate the systems are.19 Automative technologies are sometimes used to autonomously resolve minor, relatively straightforward disputes or to provide system outputs to support and inform human decision making.20 These systems can also empower self-represented litigants by providing informational resources (e.g., accurately forecasted case outcomes) that can help them decide whether to pursue ADR at all.21
As an example of an AIDR system, the British Columbia Civil Resolution Tribunal (CRT) is an AI expert system that provides disputants with a negotiation forum and independently performs case intake, management, and communications.22 If parties cannot reach an agreement in the automated environment, a human tribunal member oversees the remainder of the resolution process, placing the CRT somewhere in the middle of the AIDR spectrum. The CRT’s Solution Explorer, which offers free legal information and dispute assessment tools, was used over 30,000 times between April 2022 and March 2023.23 Only 24% of explorations led to a claim, suggesting that the platform may have helped users resolve their disputes at an earlier stage than they otherwise might have. These services can help alleviate concerns that ADR favors more powerful and well-resourced disputants.
AIDR Risks and Challenges
What gives AI systems transformational potential may present a weakness in the ADR context. Machine-learning-based AI systems derive rules from correlative patterns in data and then apply those rules to new data. However, laws and rules do not provide “the kind of structure that can easily help an algorithm learn and identify patterns and rules.”24 Conflicts can involve multiple areas of law (e.g., tort, property, insurance, family) and disputants from different jurisdictions, which can complicate or prevent the “specialization into specific case types” necessary for training and instructing AI.
Further, most existing AI systems cannot independently execute significant tasks without any human oversight.25 The analysis and interpretation necessary to apply rules to new facts often require the ability to navigate subtle contextual differences, such as whether a behavior was ‘reasonable’ or an outcome ‘foreseeable’ in a particular situation. Human neutrals also often rely on experiences, knowledge, and normative judgments to assess disputants’ reliability and deal with social and emotional issues.26 Because complex and disputed fact sets are a feature of many cases, and AI is not currently capable of accurately measuring human credibility,27 it may not be well equipped to automate these interpretive, human aspects of ADR.
Parties in legal and dispute resolution contexts possess rights to a reasoned decision and due process. Some AI systems operate and produce predictions, recommendations, or decisions in a way that is not explainable or understandable to system users. The opacity of these “black box” AI systems makes it difficult to verify whether their outputs are valid and reliable or whether they contain underlying biases or errors. Not being able to access or understand the basis of a decision undermines disputants’ right to a reasoned decision and their ability to challenge or appeal it.
While some support AI automation in limited instances, such as high-volume and low-value disputes, or for low-complexity cases involving developed bodies of law (e.g., traffic violations), others conclude that automative technologies should never replace humans in dispute resolution and legal processes insofar as they lack human reasoning and common sense, and therefore cannot achieve true fairness and justice.28 United States Chief Justice John Roberts recently expressed a similar view29 that, while flawed, human adjudications are presently fairer than machine outputs, concluding that “machines cannot fully replace key actors in court.”
Existing Rules and Standards for AIDR
UNCITRAL has been publishing conventions, model laws, and rules for international commercial trade law since 1966. Though only one set of standards, the UNCITRAL rules are a respected global benchmark used by professional associations, chambers of commerce, and arbitral institutions.
In 2016, UNCITRAL affirmed that all ADR rules and standards, including confidentiality, due process, independence, neutrality, and impartiality, apply equally to ODR and that fairness, transparency, due process, and accountability should underlie all ODR processes. Its Expedited Arbitration Rules likewise confirm that technology users must abide by fair proceedings rules and that neutrals should give disputants “an opportunity to express their views on the use of such technological means and consider the overall circumstances of the case, including whether such technological means are at the disposal of the parties.”30
The Regulatory Landscape for AI and AIDR
The regulatory landscape for AI is dynamic and uneven. Here, we focus primarily on the European Union (EU) since it recently became the first major Western jurisdiction to develop omnibus legislation to regulate AI.
In early December 2023, the European Commission for the Efficiency of Justice (CEPEJ) adopted a set of guidelines for ODR31 that reflect existing standards and practices for ADR, including those articulated by UNCITRAL.32 Among other things, they state that ODR and AIDR systems and their deployers should provide clear and transparent rules and easy, efficient, effective and reliable processes; not infringe on data protection rights; adopt technical measures that comply with the latest standards for safety, fairness, and efficiency; have sufficient knowledge of the technology being used, including its potential risks and negative impacts; and ensure the effective participation of parties, such as by helping them understand all steps in the procedure, the outcome and the effect of the agreement.33
In December, the CEPEJ also adopted an Evaluation Tool34 that assesses compliance with the five principles of the 2018 European Ethical Charter on the use of artificial intelligence in judicial systems and their environment, which includes ODR: respect for fundamental rights in the design and use of AI tools; non-discrimination; data quality and security; transparency, impartiality and fairness; and “under user control.” Technology applications must also not undermine rights granted in all civil, commercial, and administrative hearings: access to a court; the adversarial principle; equality of arms; impartiality and independence of judges; and the right to counsel.
The CEPEJ’s perspective35 on assistive versus automative technologies is consistent with the broader ADR literature, viewing tools that do not “affect the actual administration of justice” as typically low-risk and stating that AI systems used to assist with research, interpreting facts and law, and applying law to concrete sets of facts “should not affect the independence of judges in their decision-making process,” which “should remain a human-driven activity and decision.” To this end, in the 2018 Charter, the CEPEJ referenced Article 22 of Europe’s General Data Protection Regulation (GDPR), which allows persons “to refuse to be the subject of a decision based exclusively on automated processing,” when the automated decision is not required by law, and entitles them to decisions made by human decision-makers.36 Both the EU GDPR and United Kingdom (UK) Data Protection Act (2018) afford data subjects rights to be informed about and object to the use of automated decision systems, and to access meaningful information about how the system works and its potential consequences.
Interestingly, the CEPEJ postponed the release of its 2021 Roadmap to accommodate the introduction of the European Union Artificial Intelligence Act (EU AI Act) that same year. After many months of negotiation, European Parliament and Council negotiators reached a political deal on the AI Act on December 8, 2023.37 They agreed to release the provisional text38 in February 2024,39 and all rules should become fully applicable 24 months after the Act enters into force.40
Consistent with the 2018 Charter, the EU AI Act views the use of AI technologies in the administration of justice as a high-risk application subject to the following mandatory requirements before systems can be released on the market:41
High risk – Conformity assessment to demonstrate compliance with mandatory requirements for trustworthy AI (data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity, and robustness) and quality and risk management systems.
Proposed amendments to EU product liability laws42 accord with the AI Act43 by making providers and manufacturers liable to compensate injured parties when defective AI or AI-enabled hardware or software products cause personal injury, property damage, data loss, or privacy breaches. Critically, defectiveness is defined broadly to include “the effect on the product of any ability to acquire new features or knowledge after it is placed on the market or put into service,” seemingly referencing machine learning systems that can acquire new behaviors from ingesting and learning from new data.
The United Kingdom and the United States are also seeking to promote the responsible development, deployment, and use of AI.
According to the UK Information Commissioner’s Office,44 those who explicitly consent to automated decision system processing have a right to an explanation of the system’s decision, covering its rationale, responsibility, data, fairness, safety and performance, and impact. By reinforcing rights to a reasoned decision and due process, explainability statements can help overcome concerns around black-box AI systems in legal and judicial contexts.
Some U.S. state privacy laws45 afford residents rights to receive meaningful explanations about AI system logic and to opt out of automated decision-making in certain contexts. Federally, the United States has indicated46 it also views the use of AI in judicial and ADR processes as high-risk, requiring stringent protections such as, “(a) the ability to opt out of ADR processes involving automated technologies; (b) access to an explanation of how the system operates and why it arrived at its resolution, so parties can challenge or appeal the decision; and (c) comprehensive privacy-preserving security measures for systems that use, process or extract sensitive data about individuals.”
How AI Rules Will Become ADR Rules
Both AI and ADR are regulated through rules that apply to more general areas, including privacy and advertising practices.47 Rules that apply to ADR, such as conflict disclosures, also apply to AI used in ADR. The emerging body of rules for AI will likewise apply to ADR.
AI is already part of many ADR processes. As AI capabilities improve, AIDR adoption will grow, and traditional ADR systems will face pressure to incorporate AI. The CEPEJ, for instance, is already directing EU member states to identify areas and sectors that could be made more effective and efficient by online ADR and supports their “use of technologies in ADR through the adoption of soft law instruments,” such as guidelines and recommendations.48
In 2020, the European Committee on Legal Affairs suggested that deployers are in control of AI system risks, and thus have liability for AI-generated harms.49 This reasoning may make human neutrals liable for harms caused by AI systems in ways they would not have been had they caused similar harm directly. For example, a neutral may be liable for using an AI system that operates with a systemic racial bias. This enhanced liability may encourage greater attention to AIDR system design, procurement, and deployment.
Human decision-making cannot be interrogated in the same way as an AI system. Though practitioners can be held liable for racially motivated behavior, a human neutral will rarely admit to racial bias. Instead, they are likely to justify an award in a reasoned decision based on permissible criteria. Even where it is possible to detect conscious or unconscious bias in a neutral, such a finding is unlikely to provide adequate justification for challenging a particular award’s validity. And even where very clear patterns emerge, such as a statistically significant number of a neutral’s awards ruling against disputants of a particular race, it will be very difficult to prove causation. Human neutrals are rarely held accountable or disciplined for errors or biases in judgment. AI systems, by contrast, can be evaluated for statistical error or bias, and reprogrammed or decommissioned if revealed to be producing inaccurate or invalid outputs. Emerging technologies can therefore drive unique ADR accountability mechanisms.
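For example, a basic statistical audit of an AIDR system’s outputs, of a kind that cannot be run on a human neutral’s private reasoning, could compare favorable-award rates across demographic groups. The sketch below uses hypothetical records and a simple disparate-impact ratio purely for illustration:

```python
# Minimal sketch of a statistical bias audit on an AIDR system's outputs
# (hypothetical records, illustrative only). It compares the rate at which each
# group receives a favorable award and reports a disparate-impact ratio
# (lowest group rate divided by highest group rate).
from collections import defaultdict

records = [  # (group, favorable_award)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", True), ("B", False), ("B", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
for group, favorable in records:
    counts[group][0] += int(favorable)
    counts[group][1] += 1

rates = {g: fav / total for g, (fav, total) in counts.items()}
ratio = min(rates.values()) / max(rates.values())

print(rates)            # {'A': 0.75, 'B': 0.25}
print(round(ratio, 2))  # 0.33 -- a large gap that would warrant closer review
```

A persistent gap of this kind in a deployed system could prompt retraining, reprogramming, or decommissioning in a way that has no direct analogue for a human neutral.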
If emerging rules hold AI systems to higher standards than human neutrals, such as enhanced transparency and explainability, then these rules may help overcome some of the long-felt needs in ADR governance.
ENDNOTES
Ryan Abbott MD, Esq., FCIArb, is a Mediator and Arbitrator at JAMS, New York. He can be reached by emailing rabbott@jamsadr.com. Brinson S. Elliott is with The Cantellus Group in San Francisco, advising clients on the strategy, oversight, and governance of AI and other frontier technologies. She can be reached at brinson.elliott@cantellusgroup.com.
This article was adapted by the authors from an earlier, longer piece: Ryan Abbott and Brinson S. Elliott, “Putting the Artificial Intelligence in Alternative Dispute Resolution – How AI Rules Will Become ADR Rules,” Amicus Curiae, The University of London School of Advanced Study (2023), https://journals.sas.ac.uk/amicus/article/view/5627.