
Submitted by Guest on March 29, 2021

By: Nathan Moore, a former legal intern for the NAI and a student at the University of Maine School of Law.

The future of privacy law and regulation in the United States could be shifting from the notice and consent regime to a duties-based framework. There is evidence that the notice and consent framework is not working for consumers, and with ever-growing technological capabilities, this problem will only grow. The basic concept of the duties-based framework is that consumers transfer data to companies, which perform services on that data and must conform to the duties imposed on them by law. Multiple duties-based frameworks exist for data privacy and security, but three recently proposed frameworks are particularly relevant as Congress and the Biden Administration consider a national privacy framework: the Data Care Act of 2018, the Consumer Online Privacy Rights Act, and a Brookings Institution white paper on privacy legislation.

Woody Hartzog and the Data Care Act of 2018 

In his article Privacy’s Constitutional Moment and the Limits of Data Protection, co-authored with Neil Richards, Woody Hartzog analyzes Senator Brian Schatz’s (D-HI) Data Care Act of 2018 and argues that the U.S. should move its privacy paradigm away from the notice and consent framework to one of strict duties. He explains how the notice and consent framework has been failing consumers and argues that a duties-based system can address more issues than the notice and consent regime while still offering a flexible approach to privacy that consumers can trust. He examines the European Union’s General Data Protection Regulation (“GDPR”) and U.S. sectoral laws and concludes that these laws have built-in limitations: they are too focused on the individual and consent, and not focused enough on relationships and power, an inherent vulnerability that should lead to the model’s disuse.

Hartzog uses the duties-based framework presented in the Data Care Act as a springboard for his idea of a successful federal law that would address: “(1) corporate matters; (2) trustworthy relationships; (3) data collection and processing; and (4) personal data’s externalities.”1 Senator Schatz defines the duties of care, loyalty, and confidentiality in his bill, which Hartzog incorporates into his framework. The bill defines the duty of care as requiring providers to reasonably secure individual identifying data and to promptly inform users of data breaches that involve sensitive information.2 To meet the duty of care, an ad-tech company that adopts security standards from the National Institute of Standards and Technology (“NIST”) or the International Organization for Standardization (“ISO”), in addition to following state or federal data breach notification standards, would likely satisfy the requirement.

Hartzog describes the duty of loyalty as the hallmark of fiduciary duties; it requires a strict commitment to “refrain from self-dealing and a firm prioritization of the trustors’ interests over the interests of the entrustee.”3 Similarly, Hartzog refers to the Schatz bill, which defines the duty as “not [using] individual identifying data in ways that harm users.”4 Hartzog explains that the duty of loyalty would require that:

An online service provider may not use individual identifying data, or data derived from individual identifying data, in any way that--(A) will benefit the online service provider to the detriment of an end user; and (B) (i) will result in reasonably foreseeable and material physical or financial harm to an end user; or (ii) would be unexpected and highly offensive to a reasonable end user.5

For an ad-tech company that does not come into direct contact with consumers, this duty is conceptually easy to satisfy, but depending on how its impacts are interpreted, it could pose real challenges for ad-tech businesses. The duty of loyalty does not bar the use of data outright; it requires the smart use of consumer data. Data can be responsibly passed through the stack as long as it does not harm the user, is not used in an offensive way, and is not used to the detriment of the consumer. Notice may need to be altered to account for consumer expectations, but the use of data for digital advertising could likely continue. That is, consumer data could be used as long as it is safeguarded and the consumer is aware of its use. In digital advertising, consumer data is used to the benefit of the consumer because it supports free and low-cost ad-supported content, provided the data is not misused and the advertising practices do not harm the consumer.

Hartzog’s duties framework appears to impose stronger privacy regulations than current laws like the GDPR; however, the inherent ambiguity of duties could be a cause for concern for ad-tech companies. The ad-tech industry could likely survive under a duties-based regime, but the keys to survival would be the interpretation of any legal outcomes and responsible data stewardship. At the same time, new duties-based regulations could impose heavy compliance and legal burdens on the ad-tech industry and could therefore cause many smaller companies to exit the market.

Consumer Online Privacy Rights Act

Senator Maria Cantwell (D-WA) introduced the Consumer Online Privacy Rights Act (“COPRA”) in 2019 to address the privacy issues in American law. COPRA is divided into three parts: (1) privacy rights (§§ 101-110); (2) oversight (§§ 201-202); and (3) enforcement (§§ 301-304). The bill’s treatment of the duties of care and loyalty is ambiguous: it defines a duty of loyalty but no duty of care.

COPRA defines the duty of loyalty in two parts. First, a company should not “engage in a deceptive data practice or a harmful data practice.” Second, it should not “process or transfer covered data in a manner that violates any provision of this Act.”6 While similar to the Hartzog framework, the COPRA definition is slightly narrowed to focus on data practices. The bill narrows the scope by defining a deceptive data practice as “an act or practice involving the processing or transfer of covered data in a manner that constitutes a deceptive act or practice in violation of section 5(a)(1) of the [FTC] Act.”7 Likewise, the scope of harmful data practices is narrowed by defining such practices as those likely to cause “financial, physical, or reputational injury.”8 COPRA further defines harmful data practices as those that cause physical or offensive intrusion upon the solitude of an individual.9 That conception of harmful data practices is one way COPRA attempts to codify the tort of intrusion upon seclusion into privacy law. Finally, COPRA leaves the definition of harmful data practices quite vague by including “other substantial injuries” in the third prong of its definition.10 Because this uncertainty poses a high probability of litigation, it is impractical for businesses and would likely be a barrier to innovation.

Bridging the Gaps: A Path Forward to Federal Privacy Legislation

The Brookings Institution offers a middle ground in its white paper on privacy law and accompanying legislative text (“Brookings Framework”).11 The Brookings Framework seeks to take a more practical approach, balancing business innovation with the duties of loyalty and care.

The Brookings Framework proposes adding the duties of care and loyalty to any privacy law. It defines the duty of loyalty as: 

[R]equire covered entities to implement reasonable policies and practices to protect individual privacy “appropriate to the size and complexity of the covered entity and volume, nature, and intended use of the covered data processed,” limit data processing to “necessary [and] proportionate” purposes, consistent with COPRA … and communicate data practices “in a fair and transparent manner.”12

The Brookings Framework defines the duty of care by modifying COPRA’s § 101(b)(2) definition of “harmful data practices” to include “discrimination in violation of federal anti-discrimination laws or anti-discrimination laws of any State or political subdivision thereof applicable to the covered entity.”13 The Brookings Framework would also prohibit covered entities from processing data in a way that could reasonably be foreseen to cause harm.14

The Brookings Framework includes a duty of loyalty that limits what companies can do with data, restricting activities to those that are “reasonably necessary, proportionate, and limited.”15 In an industry-conscious move, the Brookings Framework proposes that laws include duty of care language allowing companies to collect and use data to improve services, both for products requested and for those that would be reasonably anticipated within the context of the covered entity’s relationship with the individual.16 For the duty of loyalty, it proposes that laws build upon the concept of “reasonably anticipated in the context of the covered entity’s relationship with the individual,” enlarging that conceptualization into a set of baseline duties toward individuals.17 Another element it would add to the duty of loyalty is an “obligation to communicate policies and practices for processing and transferring covered data ‘in a fair and transparent manner.’”18 Finally, adding to the duty of care, the Brookings Framework suggests a harmful data practices section that would encompass civil rights laws and include violations of state and federal anti-discrimination laws in the list of harms.19 In general, the Brookings Framework seeks to add much-needed specificity. For ad-tech companies, this approach would be beneficial compared to the other approaches discussed above, which would result in an open-ended legal framework subject to interpretation by regulators or the courts.

By pursuing the Brookings Framework, the U.S. can effectively balance consumer protections with business needs. 

1. Woodrow Hartzog & Neil Richards, Privacy’s Constitutional Moment and the Limits of Data Protection, 61 B.C. L. Rev. 1687, 40 (2020).
2. Press Release, Office of United States Senator Brian Schatz, Schatz Leads Group of 15 Senators in Introducing New Bill to Help Protect People’s Personal Data Online (Dec. 12, 2018), https://www.schatz.senate.gov/press-releases/schatz-leads-group-of-15-se...
3. Hartzog, supra note 1, at 47.
4. Schatz, supra note 2. 
5. Hartzog, supra note 1, at 49.
6. Consumer Online Privacy Rights Act, S.2968, 116th Cong. § 101(a) (2019). 
7. Id. at § 101(b)(1). 
8. Id. at § 101(b)(2)(A). 
9. Id. at § 101(b)(2)(B).
10. Id. at § 101(b)(2)(C). 
11. Cameron F. Kerry et al., Bridging the Gaps: A Path Forward to Federal Privacy Legislation, Governance Studies at Brookings (June 2020).
12. Id. at 6.
13. Id.
14. Id.
15. Id. at 28.
16. Id. at 29.
17. Id.
18. Id. at 30.
19. Id. at 31.

 

Submitted by David LeDuc on March 29, 2021

At a time when Americans are spending more time than ever on internet-connected devices, policymakers and regulators are assessing “dark patterns”: user-interface (UI) techniques that intentionally mislead or manipulate users by obscuring, subverting, or impairing consumer autonomy, decision-making, or choice. The challenge in assessing a dark pattern is differentiating it from a nudge that seeks to achieve a business purpose consistent with the objectives and preferences of users. The Federal Trade Commission (FTC) recently announced that it will hold a virtual workshop examining the topic, Bringing Dark Patterns to Light, scheduled for April 29, 2021. In advance of the workshop, the NAI recently submitted comments to the FTC. Our comments don’t address the breadth of what might constitute dark patterns, or how to assess the issue across all facets of the online and offline world. Rather, we focus narrowly on the assessment of potential dark patterns relevant to the digital advertising industry and the “notice and choice” framework for digital collection of consumer data. We emphasized five key conclusions and recommendations.

  1. The NAI is assessing existing practices and research pertaining to various UIs. The goal is to promote industry best practices that make disclosures and consumer choices more prominent and easily understandable. The NAI is uniquely positioned to provide industry-wide leadership in this area, with extensive expertise and self-regulatory experience enforcing requirements and promoting best practices.
     
  2. The FTC has substantial authority under Section 5 of the FTC Act to protect consumers against deceptive and unfair practices, including those that might be characterized as dark patterns. This is evidenced by academic research about the application of Section 5, as well as enforcement actions such as the recent FTC case against Age of Learning, Inc.
     
  3. Avoiding dark patterns and seeking to maximize efficiency and clarity can help to alleviate some of the challenges posed by the notice and choice model, but the model has widely recognized inherent limitations for consumer protection. Therefore, policymakers should seek to adopt alternative approaches that lessen the reliance on the model altogether, such as an increased focus on enforcing against unreasonable and harmful data practices. 
     
  4. So-called “light patterns” should be evaluated carefully. These are practices that make proactive decisions on behalf of users with their best interests in mind. Practices that go beyond easing user navigation and decision-making and instead make assumptions about what is in consumers’ best interests run the risk of promoting certain business models over others. For instance, a business practice or UI that assumes consumers do not want data-driven advertising fails to recognize that most consumers want and demand ad-supported digital content and services. 
     
  5. The FTC, state enforcement officials, and other policymakers should:
     
    1. continue to educate and warn businesses about the use of deceptive and unfair UIs; 
    2. educate and inform users about navigating online experiences. Admongo.gov is a good example of how the FTC can be very helpful in educating children about how to make informed decisions online; 
    3. avoid prescriptive requirements that could further undermine an already challenging online advertising framework and the ability of businesses to communicate effectively with their users; 
    4. promote self-regulation to incentivize companies that actively seek to uphold high standards, and serve as a regulatory backstop for these efforts. 

The full NAI comments and analysis can be found here.

 

Submitted by NAI on March 26, 2021

Tailored advertising plays an integral role in driving economic growth and encouraging competition among companies. It affords small businesses and startups the ability to create new content and services. We’re proud to support tailored advertising, the free content it enables, and the small businesses that rely upon it to reach audience segments.

See what we mean:

Submitted by David LeDuc on March 1, 2021

Less than two months into 2021, Virginia is poised to become the third state to enact a comprehensive consumer privacy bill, following California (twice) and Nevada. As of this writing, identical versions of the bill have been passed by the House and Senate, setting the Consumer Data Protection Act (CDPA) (SB 1392, HB 2307) up for a trip to Gov. Northam’s desk. The Virginia trial bar is engaged in a last-ditch effort to have the bill sent back to the legislature to add a private right of action, so stay tuned. If enacted, the CDPA has an effective date of January 2023, a date surely familiar to privacy pros because it’s the same date California’s substantially revised privacy law, the California Privacy Rights Act (CPRA), takes effect.

However, Virginia’s bill is not a California Consumer Privacy Act (CCPA) clone. In fact, as discussed below, it takes more inspiration from the European Union’s General Data Protection Regulation (GDPR) and its cousin legislation, the much-debated but not-yet-passed Washington Privacy Act (WPA), than it does from the CCPA or the CPRA. The CDPA adopts the GDPR definitions of controller, processor, and third party, but it also has a definition of “affiliate” that mirrors the CCPA definition of a business.

There are already several detailed summaries of the CDPA, such as this good piece by BakerHostetler and a blog post by the Future of Privacy Forum. Once the bill is enacted by the Governor, the NAI will provide a more detailed analysis and hold discussions with members about implementation. For now, it’s useful to take stock of some key elements and takeaways: how the CDPA compares to these other models, and what its expected enactment may mean for future bills.

Creating a set of consumer “rights,” or increasing consumers’ control over their data, is table stakes for any new privacy legislation, state or federal, but no two sets are alike. One point of near-unanimous agreement in privacy discussions globally is the need for a clear set of rights that establishes a baseline for transparency and control around consumer data. At the core of the CDPA is a set of rights that looks very familiar, though not identical, to the CCPA’s, including access, correction, deletion, copying and portability, and of course opt-out requirements. In addition to taking a different approach on the opt-out requirement, the CDPA differs in a couple of key ways (discussed below). Another significant difference is that the CDPA’s right to deletion is a bit broader, applying to data “provided by or obtained about” the consumer, rather than the CCPA’s narrower focus on data directly provided by the consumer.

The CDPA reflects the evolution of consumer opt-out rights beyond “sales” of their data, with a specific emphasis on advertising and profiling. It has been widely, though not uniformly, recognized that providing for an opt-out of “sales” is not an effective model for consumer privacy legislation. Hence the evolution we’ve seen in the years since the passage of the CCPA: both the CPRA and the WPA have expanded to focus on targeted advertising and profiling. This is an area where these models all differ quite a bit. The CDPA maintains the ill-conceived over-reliance on data transfers, but it goes further than the CCPA. Mirroring its other cousin, the WPA, the CDPA expands the consumer opt-out to “targeted advertising” and to profiling or automated decision-making. So, while its definition of sale is a bit narrower than the CCPA’s, applying only to exchanges of personal data for “monetary consideration,” it addresses ad tech in substantially more detail. The CDPA provides a meaningful opt-out for consumers without breaking the internet and pretending, like the CCPA does, that transfers of IP addresses or other pseudonymous identifiers for purposes of ad measurement are a meaningful choice and that limiting such transfers is in the best interest of consumers. The CDPA specifically exempts from consumer choice “personal data processed solely for measuring or reporting advertising performance, reach, or frequency,” a benefit not only for digital advertising but ultimately for consumers, who prefer ad-supported content. Meanwhile, the CPRA remains by far the least practical model.

The CDPA provides practical incentives for companies to rely on pseudonymous data with adequate controls, an improvement upon other models. The meaningful distinction for pseudonymous data is another key difference, and a reason for the digital ads industry, and pragmatic privacy proponents, to appreciate the CDPA. While the CDPA was crafted carefully so as not to carve pseudonymous data out of the definition of personal information, which would widely be seen as a step too far, it does exempt pseudonymous data from the rights of access, correction, deletion, and copies/portability, assuming protections are in place around that data. This is another practical element of the CDPA, one that the NAI has promoted consistently to policymakers because it incentivizes businesses to implement meaningful protections for consumer data. Specifically, the CDPA only applies this exception to businesses that are “able to demonstrate that any information necessary to identify the consumer is kept separately and is subject to effective technical and organizational controls that prevent the controller from accessing the information.”

The CDPA reflects ongoing discussions around special treatment of sensitive data, and it is poised to become the first U.S. law requiring opt-in consent for processing sensitive consumer data. The CDPA, like the WPA and the CPRA, defines sensitive data and creates an affirmative opt-in requirement that consent be “freely given, specific, informed, and unambiguous” for the collection or processing of this information. The definition is quite similar, though not identical, to the definitions adopted in the CPRA and the WPA, as well as the GDPR’s special-category data. Specifically, sensitive data is defined as: “(1) personal data revealing racial or ethnic origin, religious beliefs, mental or physical health diagnosis, sexual orientation, or citizenship or immigration status; (2) the processing of genetic or biometric data for the purpose of uniquely identifying a natural person; (3) the personal data collected from a known child; and (4) precise geolocation data.” “Child” means a person younger than 13.

The CDPA will be the first opt-in requirement for this data in the U.S., with the CPRA requiring only an opt-out and the WPA not yet enacted. This is a tricky issue for tailored advertising. On one hand, requiring opt-in for sensitive data, including mental health and sexual orientation, is consistent with long-held NAI standards; on the other, the definition contains a couple of elements that haven’t been deemed sensitive in the advertising context, particularly racial or ethnic data. Multiple other proposed state bills are grappling with how to define and handle sensitive data, and these three bills reflect some emerging consensus around roughly what types of data can be sensitive. The NAI is contemplating more creative approaches to this issue to better balance privacy protection with beneficial and harmless uses of this data, particularly around race and ethnicity.

The CDPA reflects increasing support for risk assessments, which would be required for all targeted advertising. The CDPA would create a new requirement that controllers conduct “data protection assessments” if engaged in a wide range of activities, some of which are not clearly defined, but which explicitly include targeted advertising, personal data sales, processing of sensitive data, and profiling based on a set of circumstances and other factors. This is one of several requirements that is similar to the GDPR and also included in the WPA. While risk assessments are already a prudent practice, what makes the CDPA provision concerning for companies is that the regulator may demand access to the assessments when conducting investigations. This opens the potential for a state regulator to engage in fishing expeditions, an activity that has precedent in state political environments.

The CDPA reflects support for practical enforcement, an approach that creates a big target for privacy advocates. The CDPA would be enforced by the attorney general, includes a 30-day cure period, and provides for civil fines up to $7,500, which would go to a Consumer Privacy Fund to finance enforcement. Despite all the failures of the CCPA, it landed, mainly, in a good place on enforcement: it relies on the attorney general and provides a 30-day cure period. Of course, its limited private right of action and open-ended rulemaking have proven to be as ill-advised as expected. The CDPA takes the positive elements from the CCPA, without any private right of action or a mandate for extensive rulemaking. The CPRA compounded enforcement problems in California and eliminated the cure period. AG enforcement and a cure period have been at the core of WPA drafts, but they’re also actively under debate and will likely be the crucial issues that determine whether that bill crosses the finish line. This remains a topic where industry and the trial bar are diametrically opposed, so it is good that the CDPA took a practical approach.

The CDPA reflects further policymaker support for “nondiscrimination” requirements, and it provides a more practical set of requirements on businesses, one that reflects American free markets and differential pricing. All of the models discussed here contain nondiscrimination requirements tied to the choices consumers exercise about their data. While the CDPA borrows heavily from the language of the CCPA, it makes several key changes that allow companies to offer different prices or service levels to consumers who choose to participate in a “bona fide loyalty, rewards, premium features, discounts, or club card program.” The CDPA also does not contain the CCPA-style requirement that companies perform valuations of consumer data. This approach is more practical from a market perspective than the WPA’s, and it could ultimately save Virginia the legal costs of defending such limits on American businesses.

Assuming the CDPA is enacted as the next patch in the American privacy quilt, it’s likely to be a mixed bag for digital advertising—and for consumer privacy. It’s far from perfect from either perspective, but it raises the bar on privacy and amounts to the most pragmatic and evolved bill we’ve seen among the states over the last several years. Regardless, it will pose compliance challenges based on many of the key elements discussed above, and hopefully it will serve as further incentive for Congress to enact a national framework.