CCPA Lessons Learned in 2020 That Will Help Keep Your Company Out of Court

In many ways, California is leading the country by providing consumers with substantial data privacy rights under the California Consumer Privacy Act (CCPA). But with this good news for consumers comes new business and systems requirements for companies as well as potential legal exposure. Since the law became effective on January 1, 2020, CCPA lawsuits have been rolling in at a brisk pace. We’re aware of at least 40, and there are probably many more that we haven’t heard of.

In this blog, we’ll cover some of the most important CCPA lessons we’ve learned this year based on our experience with clients and recent data privacy lawsuits. We hope this will help you align your systems and business practices with CCPA requirements—and keep your company out of court!

The California Privacy Rights Act (CPRA)—passed on November 3, 2020—amends and extends CCPA and will become effective on January 1, 2023. These changes are not covered in this blog, but we’ll be diving into CPRA extensively in upcoming blogs.

We’ve boiled down our 2020 CCPA experiences into five key lessons learned:

  1. Don’t make promises you can’t keep
  2. Always get informed consent for using and sharing personal information
  3. Integrate InfoSec throughout your products and services
  4. There’s much more to personal information than just a name or SSN
  5. Breach detection and response is its own subdiscipline of data protection

Don’t Make Promises You Can’t Keep

Organizations frequently make promises about their privacy and security practices that they cannot be certain they can keep. When these promises appear on an organization’s websites and in its apps, they often wind up as Exhibit “A” in a data privacy lawsuit.

Kondrat v. Zoom

On their publicly-facing website, Zoom Video Communications, Inc. (yes, the same Zoom you used to talk to your grandmother on Thanksgiving) made several statements that were difficult to support when challenged in the Kondrat v. Zoom[1] case, including:

  • We use “end-to-end encryption for all meetings.”
  • Users are allowed to “meet securely.”
  • “We take security seriously and we are proud to exceed industry standards when it comes to your organization’s communications.”
  • We “designed policies and controls to safeguard the collection, use, and disclosure of your information.”
  • Zoom “places privacy and security as the highest priority in the lifecycle of operations of our communications infrastructure.”

This case revealed that many of these claims were difficult or impossible for Zoom to support:

  • A Zoom representative admitted that Zoom’s definitions of “end-to-end” and “endpoint” were not the same as the definitions commonly used in the technology industry. The representative further said “When we use the phrase ‘End to End’ in our literature, it is in reference to the connection being encrypted from Zoom endpoint to Zoom endpoint.” This differs from the common industry definition of “end-to-end” encryption, under which content is encrypted on the sender’s device and decrypted only on the recipients’ devices, so that no intermediary, including the platform itself, can access the plaintext.
  • Zoom’s China operations had access to North America meetings and artifacts, including live and recorded video and meeting minutes.
  • Recordings of many Zoom meetings were found on unprotected servers on the dark web.
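
To make the distinction concrete, here is a minimal, illustrative sketch of the property true end-to-end encryption provides. The cipher below is a toy (a simple XOR, not production cryptography): the point it illustrates is that the key exists only at the endpoints, so a relaying platform forwards ciphertext it cannot read.

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher for illustration only -- NOT secure."""
    return bytes(b ^ k for b, k in zip(data, key))

# True end-to-end: the key is generated and shared only by the two
# endpoints; the relaying platform never sees it.
endpoint_key = os.urandom(32)

message = b"board meeting notes"
ciphertext = xor_cipher(message, endpoint_key)

# The relay (e.g., a conferencing platform) forwards ciphertext it
# cannot decrypt, because it does not hold endpoint_key.
relayed = ciphertext

# Only the receiving endpoint, which holds the key, recovers the text.
assert xor_cipher(relayed, endpoint_key) == message
```

Transport-only encryption, by contrast, would decrypt the message at the platform, meaning the provider (or anyone who compromises it) can read the content.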

The Kondrat v. Zoom case was one of eight suits we are aware of that were filed against Zoom in the Northern District of California.

Sheth v. Ring

In the Sheth v. Ring case[2] (Ring being the maker of the popular doorbell/camera security product), the plaintiff alleged that Ring’s representations that it “maintain[s] administrative, technical and physical safeguards designed to protect personal information against accidental, unlawful or unauthorized destruction, loss, alteration, access, disclosure or use” were not accurate because Ring “fails to implement entirely common and basic cybersecurity measures or protocols to guard against unauthorized access or intrusion by third parties.”

In this case, the plaintiff did not even allege that Ring’s systems had been breached; they simply claimed that cybersecurity measures were not implemented as advertised.

We are aware of at least five similar lawsuits that have been filed against Ring, and there are likely more.

Takeaways

So what actions should you take to guard against lawsuits like this?

  1. Review your organization’s privacy policy (i.e., your publicly-facing privacy statement). Are there any unsupported (or unsupportable) claims?
  2. Review the statements included in your apps. Are you clear as to how personal information is being used and where it is being sent?
  3. Don’t try to impress the public with jargon (e.g., “we use 2056-bit elliptical key encryption”). It will likely backfire.

Always Get Informed Consent for Using and Sharing Personal Information

Multiple data privacy laws require that companies get informed consent from consumers before using or sharing their personal information.

Under CCPA Regulation §999.305(5): “A business shall not use a consumer’s personal information for purposes materially different than those disclosed in the notice at collection. If the business seeks to… [do so]…, the business shall directly notify the consumer of this new use and obtain explicit consent from the consumer to use it for this new purpose.”

GDPR also offers a useful description of informed consent. Per GDPR Art. 4(11): “Consent of the data subject means any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.”

Kondrat v. Zoom

In the Kondrat v. Zoom case, the plaintiff alleged that “…the iOS version of the Zoom mobile app was sending customer PII to Facebook without customer authorization or customer consent—even if the customer did not have a Facebook account.” They further alleged that “Upon downloading and opening the app, Zoom would connect to Facebook’s Graph API. The Graph API is the main way that app developers get data in or out of Facebook.” Notably, the Graph API was also used by Facebook to share data with Cambridge Analytica.

Burke v. Clearview AI

In the Burke v. Clearview AI[3] case, the plaintiff alleged that “Without notice or consent, Clearview illicitly “scraped” hundreds, if not thousands or more, websites, such as Facebook, Twitter, and Google, for over three billion images of consumers’ faces…. Consumers did not receive notice of this violation of their privacy rights, and they certainly have not consented to it – in writing or otherwise.”

The suit also alleged that Clearview AI used software to identify the people associated with the scraped images and sold the images and related names to law enforcement and foreign governments.

This case is unusual in that it involves scraped facial images, an area that U.S. privacy law does not yet clearly address. It is one of multiple lawsuits that have been filed against Clearview AI, and we expect these cases to set major privacy precedents in the coming year or two.

Sheth v. Ring

In the Sheth v. Ring case, the plaintiff alleged that “…an investigation of the Ring smartphone app found that it was ‘packed with third-party trackers sending out a plethora’ of customers’ personally identifiable information (“PII”)…”. They further alleged that “In other words, because Defendant failed to disclose their gross security inadequacies, and affirmatively shared customers’ information with third parties without their informed consent, they delivered fundamentally less useful and less valuable products and services than those for which consumers like Plaintiff paid.”

Takeaways

Here are key takeaways for ensuring that you get informed consent for using or sharing personal information.

  1. Identify, with precision, what personal information you’re processing, the underlying purpose, and with whom you’re sharing it.
  2. Is that information disclosed in the notice provided at every point of collection?
  3. If you wish to use personal information for a new purpose, do you have a mechanism for obtaining consent?
  4. Be wary of “one check box to rule them all” consent mechanisms for multiple uses of personal information.
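
To illustrate the last point, here is a hypothetical sketch of a per-purpose consent ledger (the function names and record shape are our own, not drawn from any statute). Each use of personal information gets its own explicit opt-in, and an unasked purpose defaults to "no consent" rather than being swept in by a blanket checkbox.

```python
from datetime import datetime, timezone

# Hypothetical consent ledger, keyed by (user_id, purpose).
consents = {}

def record_consent(user_id: str, purpose: str, granted: bool) -> None:
    """Store an explicit, timestamped consent decision per purpose."""
    consents[(user_id, purpose)] = {
        "granted": granted,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def may_process(user_id: str, purpose: str) -> bool:
    """A purpose with no recorded consent defaults to 'no'."""
    entry = consents.get((user_id, purpose))
    return bool(entry and entry["granted"])

record_consent("u1", "order_fulfillment", True)
assert may_process("u1", "order_fulfillment")
assert not may_process("u1", "ad_targeting")  # never asked -> no consent
```

A new use of the same data ("ad_targeting" above) fails the check until the consumer is directly notified and opts in, which mirrors the §999.305 obligation quoted earlier.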

Integrate InfoSec Throughout Your Products and Services

Information security needs to be an integral part of your products and services rather than an afterthought or something tacked on at the end of the development process. To accomplish this, you’ll need to make sure that security considerations are baked into every step of your product or service development life cycle.

Sheth v. Ring

In the Sheth v. Ring case, the plaintiff alleged that “Unlike a wealth of other online service providers, Ring does not require its customers to use two-factor authentication (or “dual factor authentication”) to access their Ring Security Devices and accounts. Further, Ring neither limits the number of unsuccsefful [sic] login attempts into a user’s account nor does it notify its customers of these unsuccessful or suspicious login attempts. Ring also does not provide a way to see how many users are currently logged in to a single account.” They further alleged that “A Ring account is not a normal online account. Rather than a username and password protecting messages or snippets of personal information, such as with say, a video game account, breaking into a Ring account can grant access to exceptionally intimate and private parts of someone’s life and potentially puts their physical security at risk.”
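
The lockout and alerting behavior the complaint says was missing is straightforward to implement. The following is an illustrative sketch of our own (not Ring’s actual system): cap failed attempts, lock the account at the threshold, and notify the customer on each failure.

```python
MAX_ATTEMPTS = 5

failed_attempts: dict = {}  # account -> consecutive failed logins
alerts: list = []           # notifications queued for the customer

def check_login(account: str, password_ok: bool) -> str:
    """Limit failed logins and alert the user on suspicious activity."""
    if failed_attempts.get(account, 0) >= MAX_ATTEMPTS:
        return "locked"  # require reset / out-of-band verification
    if password_ok:
        failed_attempts[account] = 0
        return "ok"
    failed_attempts[account] = failed_attempts.get(account, 0) + 1
    alerts.append(f"failed login on {account}")  # notify the customer
    if failed_attempts[account] >= MAX_ATTEMPTS:
        return "locked"
    return "denied"

for _ in range(5):
    check_login("cam-owner", password_ok=False)
assert check_login("cam-owner", password_ok=True) == "locked"
assert len(alerts) == 5
```

Even this minimal rate-limiting defeats the credential-stuffing attacks that reportedly plagued Ring accounts, because a stolen password list cannot be tried indefinitely.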

Takeaways

  1. Identify the process you have in place for baking security into your Software Development Life Cycle (SDLC). Who owns it?
  2. Does your organization follow principles of privacy by design by ensuring the most private options are enabled by default? Note that while this is not a CCPA requirement, this methodology will help prevent privacy risk later on.
  3. If you are not employing a formal framework for accomplishing this, consider doing so. Two resources we highly recommend are the NIST Privacy Framework and the ISO 27701 Privacy Information Management System (PIMS), both discussed further in the summary below.

There’s Much More to Personal Information Than Just a Name or SSN

In today’s environment, where so much information is machine-readable, we need to think about personal information as being much broader than just name, SSN, address, phone number, etc. Personal information now includes IP addresses, geolocation data, device IDs, biometric data, health data, MAC addresses, advertising identifiers, and any other information that can be used to uniquely identify a person. Companies need to be hyper-vigilant to identify and protect the privacy of all data that can currently be considered personal information.

Kondrat v. Zoom

In the Kondrat v. Zoom case, the plaintiff alleged that “The Zoom app would notify Facebook when the user opened the app, details on the user’s device—such as the model, time zone and city from which they were connecting, which phone carrier they were using—and a unique advertiser identifier created by the user’s device which companies can use to target a user with advertisements.”

The disclosure of the unique advertiser identifier (also known as the “IDFA,” or “Identifier for Advertisers”) is particularly invasive. An IDFA is a unique alphanumeric string assigned to each device, tying it to the individual who uses that device and allowing companies to track and profile that user for targeted advertising.

Burke v. Clearview AI

In the Burke v. Clearview AI case, the plaintiff alleged that “Unlawfully, Defendants stored billions of scraped images of faces in Clearview’s database, used its facial recognition software to generate biometric information (aka a “Faceprint”) to match the face to identifiable information, and then sold access to the database to third-party entities and agencies for commercial gain.”

The complaint continued by saying that “Biometrics are unlike other unique identifiers that are used to access finances or other sensitive information.” 740 ILCS 14/5(c). For example, “social security numbers, when compromised, can be changed.” Id. “Biometrics, however, are unique to the individual; therefore, once compromised, the individual has no recourse … [and] is at heightened risk for identity theft ….” Id. “Recognizing this problem, the Federal Trade Commission urged companies using facial recognition technology to ask for consent before scanning and extracting biometric data from photographs.”

Sheth v. Ring

In the Sheth v. Ring case, the plaintiff made the following allegations:

  • “…four analytics and marketing companies [were] discovered to be receiving information such as the names, private IP addresses, mobile network carriers, persistent identifiers, and sensor data on the devices of paying customers,” further exposing class members’ PII to third parties and increasing the risk of unauthorized access.
  • “…every time a customer opens the Ring app on his or her smartphone, the app sends information to Facebook about that customer, including ‘the time zone, device model, language preferences, screen resolution, and a unique identifier.’”
  • “A business analytics firm, MixPanel, receives even more sensitive PII from the Ring app, including ‘users’ full names, email addresses, device information such as operating system (OS) version and model, whether Bluetooth is enabled, and the number of Ring devices installed.’”

Takeaways

  • Conduct a ground-up review of any information inside your information “ecosystem” that can be considered personal; also consider seemingly innocuous information that, when combined with other information in your environment, can enable inference of personal information (e.g., geolocation data).
  • If you have a data inventory, update it. If not, create one. Include as many people from as many domain areas as possible to ensure a comprehensive understanding of your data.
  • Determine whether, under the GDPR, a data protection impact assessment (DPIA) would be required—this is a red flag indicating that you may want to implement additional controls.
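
A data inventory can start as simply as one structured record per data element. The sketch below is a hypothetical minimal shape (the field names are our own); it also shows the kind of query an inventory enables, such as flagging elements shared with third parties.

```python
from dataclasses import dataclass, field

@dataclass
class InventoryEntry:
    """One row of a personal-information data inventory."""
    data_element: str       # e.g., "IP address", "advertising identifier"
    storage_location: str   # system or datastore holding it
    purpose: str            # why it is collected
    shared_with: list = field(default_factory=list)   # third-party recipients
    access_roles: list = field(default_factory=list)  # who can read it

inventory = [
    InventoryEntry("IP address", "web server logs", "fraud detection",
                   access_roles=["security-team"]),
    InventoryEntry("advertising identifier", "mobile analytics DB",
                   "ad attribution", shared_with=["analytics vendor"]),
]

# Elements flowing to third parties deserve consent and contract review.
flagged = [e.data_element for e in inventory if e.shared_with]
assert flagged == ["advertising identifier"]
```

Keeping the inventory queryable like this makes the later takeaways (deleting unneeded data, trimming excess access) a matter of filtering records rather than guesswork.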

Breach Detection and Response is its Own Subdiscipline of Data Protection

CCPA does not include provisions for breach detection and response because California had already implemented laws covering this area when CCPA was introduced.

California Civ. Code §1798.81.5(b) states that “A business that owns, licenses, or maintains personal information about a California resident shall implement and maintain reasonable security procedures and practices appropriate to the nature of the information, to protect the personal information from unauthorized access, destruction, use, modification, or disclosure.”

Under California Civ. Code §1798.82(a), “A person or business that conducts business in California…shall disclose a breach of the security of the system following discovery or notification of the breach in the security of the data to a resident of California[.]…The disclosure shall be made in the most expedient time possible and without unreasonable delay[.]”

CCPA works side by side with these existing data security and data breach statutes, which are no less important to the protection of personal information than the CCPA provisions.

Companies should have a dedicated breach detection and response team to focus on this important subdiscipline of data protection. This team should implement plans, policies, practices, and controls to ensure that breaches are detected quickly and responded to in an efficient and effective manner to minimize impacts to the company and its customers and partners.

Risk assessments should be performed to determine if security procedures and practices are appropriate to the nature of the information that could potentially be breached. If your company is sued following a data breach, you will likely be asked to produce your risk assessment related to the type of data that was compromised. If you don’t have one, opposing counsel will have a field day!

Be sure to include in your plans a definition of the criteria that will be used to determine when a breach has occurred (e.g., will a ransomware attack be handled as a breach?) and detailed plans for timely notification of customers, partners, and other stakeholders.
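
The criteria above can be captured as explicit, testable rules rather than left to ad-hoc judgment under pressure. Here is an illustrative sketch with hypothetical field names and a simplified decision rule of our own; real criteria must track the statutes that apply to you.

```python
def is_notifiable_breach(incident: dict) -> bool:
    """Simplified, illustrative decision rule: treat an incident as a
    notifiable breach when personal information was actually acquired
    without authorization, unless the data was encrypted and the key
    was not also compromised."""
    if not incident.get("personal_info_involved"):
        return False
    if incident.get("data_encrypted") and not incident.get("key_compromised"):
        return False
    return incident.get("unauthorized_acquisition", False)

# Ransomware that exfiltrated plaintext customer records: a breach.
assert is_notifiable_breach({
    "personal_info_involved": True,
    "data_encrypted": False,
    "unauthorized_acquisition": True,
})
# Malware contained before any data left the environment: not a breach.
assert not is_notifiable_breach({
    "personal_info_involved": True,
    "data_encrypted": False,
    "unauthorized_acquisition": False,
})
```

Writing the rule down forces the ransomware question to be answered in advance: if the attacker exfiltrated data before encrypting it, the "unauthorized acquisition" test is met.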

Kondrat v. Zoom

In the Kondrat v. Zoom case, the plaintiff alleged that “On April 1, 2020, an actor in a popular dark web forum posted a link to a collection of 352 compromised Zoom accounts.…In comments on this post, several actors thanked him for the post, and one revealed intentions to troll the meetings.” They further alleged that “Additionally, security researchers recently uncovered another database on a ‘dark web’ forum containing more than 2,300 compromised  Zoom credentials, including ‘usernames and passwords for Zoom  accounts – including corporate accounts belonging to banks, consultancy  companies, educational facilities, healthcare providers and software  vendors. Some of the accounts included meeting IDs, names and host keys in addition to credentials.’”

Barnes v. Hanna Andersson and Salesforce

In the Barnes v. Hanna Andersson and Salesforce.com[4] case, the plaintiff made the following allegations:

  • “Hanna did not tell customers or the Attorneys General about this theft until over another month later, on January 15, 2020. To this day, Salesforce has not released a vulnerabilities and exposures report, nor has Salesforce made any notifications of the breach.”
  • “The notice sent to Attorneys General states that law enforcement did not inform Hanna about its customers’ credit cards being offered for sale on the “dark web” until December 5, 2019. At that time, Hanna “launched an investigation.”
  • “The date the infecting malware was supposedly removed from Salesforce’s ‘third-party ecommerce platform,’ however, was over three weeks before Hanna claims it found out about the breach.”
  • “Hanna admits it did not detect this breach on its own, nor did Salesforce notify Hanna about it – law enforcement did. How was the malware removed on November 11, 2019, without Defendants noticing it?”

Takeaways

  • Have distinct incident response and breach notification plans. Be sure to include a current list of contact information for all parties that need to be notified. Ensure incident response teams consider privacy requirements when it comes to breaches; don’t rely solely on baseline incident response plans.
  • Review criteria for determining when an incident is considered a breach.
  • Benchmark incident response against peer-accepted guidelines such as the NIST 800-61 Computer Security Incident Handling Guide.
  • Prepare a mock breach notification document based on requirements from the EU GDPR and from U.S. states. The document should contain a notification template that can quickly be updated with information specific to the breach. This will save time in getting out breach notifications.
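
A mock notification can be kept as a fill-in-the-blanks document so that only incident-specific facts need to be added under deadline pressure. A minimal sketch using Python’s standard `string.Template` (the notice wording here is a placeholder; required content varies by jurisdiction):

```python
from string import Template

# Hypothetical skeleton; actual required elements differ by law
# (e.g., Cal. Civ. Code 1798.82 vs. GDPR Arts. 33/34).
NOTICE = Template(
    "Dear $recipient,\n"
    "On $discovery_date we discovered unauthorized access affecting "
    "$data_types. The incident occurred on or about $incident_date. "
    "We are offering $remediation. Contact us at $contact."
)

letter = NOTICE.substitute(
    recipient="Jane Customer",
    discovery_date="2020-12-05",
    incident_date="2020-11-11",
    data_types="names and payment card numbers",
    remediation="12 months of credit monitoring",
    contact="privacy@example.com",
)
assert "payment card numbers" in letter
```

`Template.substitute` raises a `KeyError` if any blank is left unfilled, which is a useful safety net when a notification is being assembled in a hurry.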

Summary and Conclusions

So, what are the key things we’ve learned from all this?

  1. The vast majority of fines or verdicts/settlements stem from “Data Protection 101” failures exploited by garden-variety criminals. They are generally NOT esoteric actions perpetrated by advanced persistent threat (APT) actors or foreign nations.
  2. Noncompliance with CCPA could also lead to noncompliance with other fair practice and consumer protection regulations. Don’t assume a privacy violation is just a privacy violation.
  3. Adopting the NIST Privacy Framework is perhaps the best place to start. It is an excellent way to improve privacy through enterprise risk management. We highly recommend it!
  4. If you are an ISO shop, the ISO 27701 Privacy Information Management System (PIMS) is very good as well.
  5. Security standards, guidelines, and frameworks such as ISO/IEC 27001/2, NIST 800-171, and the CSC Top 20 can address InfoSec-specific requirements such as CCPA §1798.150, GDPR Article 32 and, indirectly, other articles that cite “technical and organizational” requirements.
  6. Business partners (vendors, third parties, licensees, etc.) require vigorous policing. They are often the weakest link in your security defenses. Be sure to include appropriate privacy and security requirements in your contracts with business partners and hold them accountable. Remember that you’re always responsible for data your third parties process for you.
  7. Consider implementing a Privacy Operations Center to increase focus on and accountability for data privacy.
  8. Understanding where personal information is located within your environment and who has access to it will be key to advancing compliance. You should have a robust data inventory that documents where all personal information is stored, how it is used, and who has access to it. If specific personal information is no longer needed, get rid of it. If people have access to personal information that they don’t need, modify their access.

We Can Help

If you have questions about CCPA or would like help implementing changes in your environment to ensure CCPA compliance, Tevora’s team of data privacy and security specialists can help. Just give us a call at (833) 292-1609 or email us at sales@tevora.com. Take a look at our Privacy Tracker Tool that helps you stay up to date with every privacy regulation.

About the Authors

Christina Whiting is a Principal | Privacy, Enterprise Risk & Compliance at Tevora.

Scott M. Giordano, Esq. is V.P. and Sr. Counsel, Privacy and Compliance at Spirion.

[1] Kondrat et al. v. Zoom Video Communications, Inc., No. 5:20-cv-02520 (N.D. Cal. Apr. 13, 2020)

[2] Sheth v. Ring LLC, Case No. 2:20-cv-01538 (C.D. Cal.)

[3] Burke v. Clearview AI, Inc., Case No. 3:20-cv-00370 (S.D. Cal.)

[4] Barnes v. Hanna Andersson LLC and Salesforce.com Inc., Case No. 4:20-cv-00812 (N.D. Cal.)