December 6, 2019


Via Email:

Privacy Regulations Coordinator
California Office of the Attorney General
300 South Spring Street, First Floor
Los Angeles, CA 90013

Re: Request for Clarification of CCPA De-Identification Requirements

This Comment Letter respectfully requests clarification of the requirements under the California Consumer Privacy Act (“CCPA” or “Act”) for CCPA-compliant de-identification so that companies can comply with their obligations under the Act.

  • §999.313(d)(2)(b) – proposed CCPA regulation §999.313(d)(2)(b) provides that in the context of a company’s obligations with respect to Responding to Requests to Know and Requests to Delete, a company may comply by “De-identifying the personal information.”
  • §999.323(e) – proposed CCPA regulation §999.323(e) provides that in the context of General Rules Regarding Verification, “If a business maintains consumer information that is de-identified, a business is not obligated to provide or delete this information in response to a consumer request or to reidentify individual data to verify a consumer request.”

Given the importance of the proper interpretation of “de-identification” under the above-enumerated proposed regulations, clarification of the requirements for de-identification under the Act – including, inter alia, the issues raised in this Comment Letter regarding differences between de-identification under the CCPA and HIPAA – is respectfully requested so that companies can comply with §§999.313(d)(2)(b) and 999.323(e) of the proposed CCPA regulations.

The CCPA is an exemplary model of a forward-thinking data protection law. It enhances privacy for individuals by providing incentives for companies to implement safeguards that proactively protect information in advance of data misuse, leveraging technically enforced, risk-based controls over data when in use, rather than relying solely on (i) encryption of data at rest or in transit (but not when in use, when it is most vulnerable) and (ii) after-the-fact remedies that fail of their essential purpose to make aggrieved parties whole in the event of violations of their privacy. [1] In the CCPA, this incentive comes in the form of an exclusion from the definition of protected Personal Information, under §1798.140(o)(3) (as amended), of information that is de-identified in accordance with the Act’s new heightened requirements for “de-identification” under §1798.140(h). This incentive under the CCPA is analogous to the incentives the EU General Data Protection Regulation (“GDPR” or “Regulation”) provides for “Pseudonymising” data to deliver proactive, risk-based protection – in advance – against misuse of protected Personal Data under the Regulation. [2]

The importance of proactive risk-based technical measures to balance data innovation and protection of individual privacy rights in today’s data-driven world is highlighted by the fact that consent – by itself – is incapable of effectively protecting privacy rights.

“The free and informed consent that today’s privacy regime imagines simply cannot be achieved. Collection and processing practices are too complicated. No company can reasonably tell a consumer what is really happening to his or her data. No consumer can reasonably understand it. And if companies can continue to have their way with user data as long as they tell users first, consumers will continue to accept the unacceptable: If they want to reap the benefits of these products, this is the price they will have to pay…But this is not a price consumers should have to pay. It is time for something new. Legislators must establish expectations of companies that go beyond advising consumers that they will be exploiting their personal information. For some data practices, this might call for wholesale prohibition. For all data practices, a more fundamental change is called for: Companies should be expected and required to act reasonably to prevent harm to their clients. They should exercise a duty of care. The burden no longer should rest with the user to avoid getting stepped on by a giant. Instead, the giants should have to watch where they’re walking.” [3] (emphasis added)

“Maybe informed consent was practical two decades ago, but it is a fantasy today. In a constant stream of online interactions, especially on the small screens that now account for the majority of usage, it is unrealistic to read through privacy policies. And people simply don’t…Moreover, individual choice becomes utterly meaningless as increasingly automated data collection leaves no opportunity for any real notice, much less individual consent. We don’t get asked for consent to the terms of surveillance cameras on the streets or “beacons” in stores that pick up cell phone identifiers, and house guests aren’t generally asked if they agree to homeowners’ smart speakers picking up their speech. At best, a sign may be posted somewhere announcing that these devices are in place. As devices and sensors increasingly are deployed throughout the environments we pass through, some after-the-fact access and control can play a role, but old-fashioned notice and choice become impossible…Ultimately, the familiar approaches ask too much of individual consumers. As the President’s Council of Advisers on Science and Technology Policy found in a 2014 report on big data, “the conceptual problem with notice and choice is that it fundamentally places the burden of privacy protection on the individual,” resulting in an unequal bargain, “a kind of market failure.” [4] (emphasis added)

The fact that consent – by itself – is not up to the task of protecting the privacy rights of individuals highlights the critical importance of technical risk-based safeguards that protect data when in use, such as de-identification under the CCPA and Pseudonymisation under the GDPR. In fact, “the real promise of government intervention may lie in giving firms an incentive to use consumers’ personal data only in reasonable ways.” [5] And if the only privacy-respectful alternative for individuals is to withdraw consent and opt out of having their data processed, this withholds from individuals the potential benefits of data processing and withholds from society as a whole the benefits of representative, non-discriminatory data analysis. [6]

For purposes of this Comment Letter, the heightened requirements for de-identification under the CCPA are referred to as “2020 De-ID Standards” to highlight a comparison between the modern requirements for de-identification under the CCPA and the standards for de-identification under the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”), referred to herein as “1996 De-ID Standards.” The differences between 2020 De-ID Standards and 1996 De-ID Standards are not surprising: nearly a quarter-century separates the enactment of the two statutes, and HIPAA was enacted before widespread data sharing and combining became commonplace, whereas the CCPA was enacted in full awareness of these modern data processing practices.

The 2020 De-ID Standards under CCPA §1798.140(h) require that the following criteria must be met:

  • The information “cannot reasonably identify, relate to, describe, be capable of being associated with, or be linked, directly or indirectly, to a particular consumer;” and
  • The business must have implemented technical safeguards and business processes that prohibit re-identification; and
  • The business must have implemented business processes to prevent inadvertent release even of the de-identified data; and
  • The business must not make any attempt to re-identify the information.

The above makes clear that 2020 De-ID Standards require the existence of technical safeguards that prevent recipients of data from impermissibly re-identifying individuals represented in a data set when the data is used on a widespread or “global basis.” This “global de-identification” standard requires context-aware, risk-based management of re-identification risk. As a result, in the context of 2020 De-ID Standards, the potential re-identification risk that must be defended against “depends on what everyone else knows and can do with the dataset” because “re-identification can be highly accurate in cases where a supposedly de-identified dataset is analyzed using outside sources of information that are not, themselves, de-identified.” [7]
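By way of illustration only – the Act does not prescribe any particular mechanism, and the function, field names and key below are hypothetical – one category of “technical safeguards . . . that prohibit reidentification” under §1798.140(h) is keyed pseudonymization, in which direct identifiers are replaced with values that downstream recipients cannot reverse without a secret key the business retains under its own access controls:

```python
import hashlib
import hmac


def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed pseudonym.

    Recipients of the output cannot recover the identifier without
    the secret key, which the business retains under its own access
    controls. This is one possible safeguard of the kind
    §1798.140(h) contemplates, not a compliance recipe.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()


# Hypothetical example record and key.
key = b"example-secret-held-by-business"
record = {"email": "consumer@example.com", "zip3": "900"}
deidentified = {"email": pseudonymize(record["email"], key),
                "zip3": record["zip3"]}
```

Whether such a measure actually satisfies the 2020 De-ID Standards depends on the surrounding business processes and on the indirect identifiers that remain in the data set, which is precisely the context-aware, risk-based assessment described above.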

The HIPAA Privacy Rule provides a standard for de-identification of protected health information (“PHI”), which generally states that health information is not PHI if it does not identify an individual and there is no reasonable basis to believe that it can be used to identify an individual. The standard provides two methods – safe harbor and expert determination – by which health information can be designated as de-identified for purposes of the standard and thus used and disclosed outside the Privacy Rule’s protections for PHI. Under both methods, de-identified data retains some risk of identification of the individuals (e.g., patients of a healthcare provider) who are the subject of the information.

Neither method requires removal of identifiers of healthcare providers or others who serve the individuals who are the subject of de-identified information. Accordingly, HIPAA de-identified data may be de-identified with respect to patients, but may include names, national provider identifiers or other identifiers of healthcare providers or covered entity workforce members. CCPA does not except personal information about providers or workforce members from its definition of personal information, however. [8] (emphasis added)

As noted above, neither the safe harbor nor the expert determination method requires de-identification of indirect identifiers that may be used to re-identify individuals represented in a data set. In 2012, the U.S. Health & Human Services Office for Civil Rights (OCR) stated in written guidance on HIPAA de-identification standards that “a covered entity’s mere knowledge of [specific studies about methods to re-identify health information or use de-identified health information alone or in combination with other information to identify an individual] does not mean it has actual knowledge that these methods would be used with the data it is disclosing. OCR does not expect a covered entity to presume such capacities of all potential recipients of de-identified data. This would not be consistent with the intent of the Safe Harbor method, which was to provide covered entities with a simple method to determine if the information is adequately de-identified.” [9] Lastly, with respect to the popular safe harbor method of HIPAA de-identification, researchers in 2017 found a risk of unauthorized re-identification as high as 25-28%, versus earlier studies that had reported risk below 0.05%. [10] For the foregoing reasons, it is clear that HIPAA supports compliant data use only within locally controlled enclave or siloed environments – i.e., “local de-identification.”
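The structural gap described above can be sketched schematically. The following is an illustrative sketch only – the field names are hypothetical and the set below is a small subset of the eighteen Safe Harbor identifier categories under 45 C.F.R. §164.514(b)(2) – showing how a Safe Harbor-style removal of enumerated direct identifiers leaves indirect identifiers (such as provider names) untouched:

```python
# Illustrative subset of the Safe Harbor identifier categories;
# the real rule enumerates eighteen categories of identifiers of
# the individual (patient), not of providers or workforce members.
SAFE_HARBOR_FIELDS = {"name", "ssn", "email", "phone", "mrn"}


def safe_harbor_strip(record: dict) -> dict:
    """Drop enumerated direct identifiers of the patient.

    Indirect identifiers (e.g., provider names, rare diagnoses)
    pass through unchanged, which is the gap between 1996 De-ID
    Standards and 2020 De-ID Standards this letter highlights.
    """
    return {k: v for k, v in record.items()
            if k not in SAFE_HARBOR_FIELDS}


record = {"name": "Jane Doe", "mrn": "12345",
          "provider_name": "Dr. Smith",   # retained under Safe Harbor
          "diagnosis": "rare condition"}  # indirect identifier retained
print(safe_harbor_strip(record))
# → {'provider_name': 'Dr. Smith', 'diagnosis': 'rare condition'}
```

The retained fields may still “be capable of being associated with, or be linked, directly or indirectly, to a particular consumer” under CCPA §1798.140(h), which is why data satisfying 1996 De-ID Standards cannot be assumed to satisfy 2020 De-ID Standards.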

The difference between the de-identification standards under the CCPA and HIPAA creates a significant risk for HIPAA covered entities, business associates and data aggregators that believe data sets de-identified under HIPAA 1996 De-ID Standards satisfy the de-identification requirements of CCPA 2020 De-ID Standards, since the resulting data sets would still include personal information about California consumers under the CCPA. If a business subject to the CCPA maintains data de-identified under HIPAA 1996 De-ID Standards that does not meet CCPA 2020 De-ID Standards, the business would need to honor California consumers’ rights under the CCPA and otherwise comply with the Act – including the right to opt out of the “sale” of personal information and the right to request deletion of personal information – and may be required to register as a data broker with the California Attorney General as a condition of licensing or otherwise disclosing the data for cash or other consideration. [11]

For the foregoing reasons, we respectfully request that the California Attorney General clarify the requirements for de-identification under the CCPA, including, inter alia, the issues raised in this Comment Letter regarding differences between de-identification under the CCPA and HIPAA.

Respectfully Submitted,

M. Gary LaFever
CEO & General Counsel



[1] See

[2] The benefits of properly “Pseudonymised” data, as newly defined under Article 4(5) of the GDPR, are highlighted in multiple GDPR Articles, including:

  • Article 6(4) as a safeguard to help ensure the compatibility of new data processing.

  • Article 25(1) as a technical and organizational measure to help enforce data minimization principles and compliance with data protection by design and by default obligations.

  • Articles 32, 33 and 34 as a security measure helping to make data breaches “unlikely to result in a risk to the rights and freedoms of natural persons” thereby reducing liability and notification obligations for data breaches.

  • Article 89(1) as a safeguard in connection with processing for archiving purposes in the public interest; scientific or historical research purposes; or statistical purposes; moreover, the benefits of Pseudonymisation under this Article 89(1) also provide greater flexibility under:

    • Article 5(1)(b) with regard to purpose limitation;

    • Article 5(1)(e) with regard to storage limitation; and

    • Article 9(2)(j) with regard to overcoming the general prohibition on processing Article 9(1) special categories of personal data.

  • In addition, properly Pseudonymised data is recognized in Article 29 Working Party Opinion 06/2014 as playing “a role with regard to the evaluation of the potential impact of the processing on the data subject...tipping the balance in favour of the controller” to help support Legitimate Interest processing as a legal basis under GDPR Article 6(1)(f). Benefits from processing personal data using Legitimate Interest as a legal basis under the GDPR include, without limitation:

    • Under Article 17(1)(c), if a data controller shows they “have overriding legitimate grounds for processing” supported by technical and organizational measures to satisfy the balancing of interest test, they have greater flexibility in complying with Right to be Forgotten requests.

    • Under Article 18(1)(d), a data controller has flexibility in complying with claims to restrict the processing of personal data if they can show they have technical and organizational measures in place so that the rights of the data controller properly override those of the data subject because the rights of the data subjects are protected.

    • Under Article 20(1), data controllers using Legitimate Interest processing are not subject to the right of portability, which applies only to processing based on consent or contract.

    • Under Article 21(1), a data controller using Legitimate Interest processing may be able to show they have adequate technical and organizational measures in place so that the rights of the data controller properly override those of the data subject because the rights of the data subjects are protected; however, data subjects always have the right under Article 21(3) to not receive direct marketing outreach as a result of such processing.

[3] See




[7] See

[8] See

[9] See at 28.

[10] See