Illinois: Proposed Privacy Legislation Expands Protected Personal Information, Clarifies Notice Obligations to Include Theft of Encryption Keys, and Adds Notice Requirements to Attorney General

On March 25, the Illinois General Assembly will hold a hearing to consider changes to an existing statute, the Personal Information Protection Act (815 ILCS 530/5).

The proposed legislation expands the scope of information protected in Illinois to expressly include and define medical, health insurance, biometric, consumer marketing, and geolocation information as protected personal information. It also requires that notice of security breaches be provided to the Illinois Attorney General.

The legislation is remarkable in that it requires breach disclosure when defined personal information is acquired by an unauthorized person in its encrypted form, where the encryption keys are also acquired during the breach. This makes sense: where encrypted information is stolen along with the associated encryption keys, the effective result is quite likely unencrypted personal information, warranting notice to Illinois consumers. From a planning perspective, protected personal information should be stored separate and apart from the encryption keys, to reduce the likelihood of disclosure to unauthorized third parties. Where there is doubt as to whether both protected personal information and encryption keys have been wrongfully acquired, the proposed language suggests that disclosure to the Attorney General is required where a single breach affects more than 100 Illinois residents.
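The planning point above — keeping protected data and its encryption keys in separate stores — can be sketched as follows. This is a minimal, hypothetical illustration: the two dictionaries stand in for physically separate systems (an application database and a dedicated key-management service), and the XOR "cipher" is a toy stand-in for real encryption such as AES-GCM.

```python
import secrets

# Hypothetical stores. In practice these would live on separate
# systems so that a breach of one does not expose both.
ciphertext_store = {}  # e.g., application database
key_store = {}         # e.g., separate key-management service / HSM

def xor(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration only; use a vetted library
    # (e.g., AES-GCM) in a real system.
    return bytes(d ^ k for d, k in zip(data, key))

def store_record(record_id: str, plaintext: bytes) -> None:
    key = secrets.token_bytes(len(plaintext))
    ciphertext_store[record_id] = xor(plaintext, key)
    key_store[record_id] = key  # held apart from the ciphertext

def load_record(record_id: str) -> bytes:
    return xor(ciphertext_store[record_id], key_store[record_id])

store_record("ssn-001", b"123-45-6789")
assert load_record("ssn-001") == b"123-45-6789"
# A breach of ciphertext_store alone yields only random-looking bytes;
# under the proposed statute, notice obligations attach when the keys
# are acquired too.
```

The design choice mirrors the statute's logic: if an attacker must compromise two independent systems to recover plaintext, the likelihood of a notice-triggering event drops considerably.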

Bringing Illinois in line with other jurisdictions, the statute would impose an obligation to post privacy policies conspicuously on commercial internet sites or sites that collect personal information, with specific instructions as to font size and spacing. Notably, an online service that receives notice of noncompliance has 30 days to cure in order to avoid violating the proposed statute.

For a copy of the proposed changes to Illinois’ Personal Information Protection Act, please contact our Privacy & Data Security Group.

NIST Opens Comment Period: The Security of Automated Access Management

For those involved in open and automated access technology, NIST's Interagency Report 7966, Security of Interactive and Automated Access Management Using Secure Shell (SSH), should be of interest.  The full report is here.  This is the second public comment period for the draft report, and the comment deadline is April 3, 2015.

Although the stated purpose of the report is to “assist organizations in understanding the basics of Secure Shell (SSH) and SSH access management”, the framework is rife with lessons and best practices for information security and privacy measures in any organization that engineers and operates networks.

There are at least four major noteworthy components in the report:

Section 4.6: Pivoting. “Malware can be engineered to use SSH keys to spread when automated access is allowed.” Aside from the cautionary tale that a single intrusion event can quickly lead to network-wide infiltration, an equally important takeaway is that organizations need to know the location of their SSH keys at all times so that the keys can be monitored for unauthorized access or duplication.
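Knowing where keys live is the precondition for monitoring them. A first step might be a periodic inventory scan, as in this minimal sketch; the search roots and filename patterns below are illustrative assumptions, not an exhaustive list of where SSH private keys can reside.

```python
from pathlib import Path

# Common SSH private-key filenames plus a generic PEM pattern.
# Illustrative only -- keys can be stored under arbitrary names.
KEY_PATTERNS = ("id_rsa", "id_dsa", "id_ecdsa", "id_ed25519", "*.pem")

def inventory_keys(root: Path) -> list[Path]:
    """Recursively collect candidate SSH key files under `root`."""
    found = []
    for pattern in KEY_PATTERNS:
        for path in root.rglob(pattern):
            if path.is_file():
                found.append(path)
    return sorted(set(found))  # deduplicate overlapping pattern hits
```

Feeding the resulting list into a file-integrity or access-monitoring tool then gives the continuous visibility the report calls for.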

Section 4.7: Lack of Knowledge and Human Errors.  The report cites the growing role of human error in the security of SSH-based systems, attributing it in part to the “complexity of SSH management and the lack of knowledge many administrators have regarding secure SSH configuration and management.”  The human side of a security setup (which can involve thousands of hosts) makes it more likely that an unauthorized-key vulnerability will be exploited, with any resulting clean-up being very time consuming.

Section 6.2: Cryptographic key management and protection. “Key management and protection is another important component of solution design, including key generation, use, storage, recovery, and destruction.”  Organizations should take steps to ensure that access to keys is always properly restricted and monitored, and that retrieval can take place quickly if the need arises.
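One concrete, automatable piece of "properly restricted" access is file permissions: SSH itself refuses to use private keys that are readable by group or other, and the same rule can be applied in a periodic audit. A minimal sketch (POSIX permission semantics assumed):

```python
import stat
from pathlib import Path

def key_permissions_ok(path: Path) -> bool:
    """Return True if the key file is readable/writable only by its
    owner (no group or other permission bits set), mirroring the
    check OpenSSH applies to private keys."""
    mode = path.stat().st_mode
    return not (mode & (stat.S_IRWXG | stat.S_IRWXO))
```

Running such a check across the key inventory, and alerting on failures, turns the report's guidance into a routine compliance task rather than an incident-time scramble.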

Section 6.5: Preparing devices for retirement or disposal. “Devices and media that hold private keys should be sanitized or destroyed, unless the keys have been retired/rotated.” Keys held on mobile devices should be tracked and removed when no longer needed.  When devices are retired, organizations should ensure that data sanitization and/or purging takes place.  A detailed guide to media sanitization is here.
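For a single key file, a best-effort software sanitization looks like the sketch below. Note the loud caveat: journaling filesystems, snapshots, and SSD wear leveling can all leave residual copies, so for real retirement or disposal, organizations should follow a dedicated sanitization standard such as NIST SP 800-88 rather than rely on an overwrite like this.

```python
import os
from pathlib import Path

def overwrite_and_delete(path: Path, passes: int = 3) -> None:
    """Best-effort sanitization of one file: overwrite its contents
    with random bytes, force them to disk, then remove the file.
    Illustrative only -- insufficient for SSDs and journaling or
    snapshotting filesystems."""
    size = path.stat().st_size
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push each pass to the device
    path.unlink()
```

The same tracking inventory discussed under Section 4.6 tells you which devices hold keys and therefore which ones need this treatment before disposal.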

Interested parties should take the opportunity to submit comments toward the finalization of these future industry standards.

Image courtesy of Flickr by Mike

To Post on Facebook, or Not to Post

We’ve all seen it make the rounds on our Facebook newsfeeds: the post that declares something along the lines of “my rights are attached to all my personal data: drawings, paintings, photos, video, texts, etc.”  Its reappearance around the end of 2014 was likely due to a notice Facebook sent regarding changes to its policies, which took effect on January 1, 2015.

In the United States, this message does not have the power to unilaterally waive the privacy terms to which each user agrees upon opening a Facebook account.  For example, the new terms state that subject to a user’s privacy and application settings, “[f]or content . . . like photos and videos (IP content), . . . you grant us a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook.”  The only way to terminate Facebook’s license is to delete your IP content or delete your account, but if you have shared that content with other users who have not deleted it, Facebook retains its license.

The European Union, however, has taken serious issue with this policy. EU data protection authorities say that this part (along with other parts) of Facebook’s policy violates their privacy laws.  On February 3, 2015, a task force led by Belgium, the Netherlands, and Germany was formed to investigate the concerns with Facebook’s privacy policy.  On February 23, 2015, a draft report commissioned by the Belgian Data Protection Authority outlined the following issues with Facebook’s policy:

  1. Consent to many of Facebook’s processing activities is likely not valid “[g]iven the limited information Facebook provides and the absence of meaningful choice;”
  2. The current “opt-out” default setting for advertising, as well as Facebook’s practice of combining and sharing data about its users, “do[] not meet the requirements for legally valid consent,” and opt-outs for location-data collection “are simply not provided;”
  3. Facebook’s new Statement of Rights and Responsibilities “contains a number of provisions which do not comply with the Unfair Contract Terms Directive” of European consumer protection law;
  4. The use of user-generated content for commercial purposes (the subject of the “my rights are attached to my personal data” post mentioned above) is not transparent and is not subject to “adequate control mechanisms;”
  5. The collection of location data parameters should be “turned off by default,” and users should be allowed “to determine when and how location data can be used by Facebook and to what purpose;”
  6. Facebook’s monitoring of its users while they are on and off the site is not in compliance with the e-Privacy Directive requiring “free and informed prior consent before storing or accessing information on an individual’s device;” and
  7. The terms “do not properly acknowledge” the fact that users cannot prevent Facebook from using their information gained from outside their network (i.e., if you have shared that content with other users that have not deleted it, Facebook may still use it).

Perhaps the need to make these changes to comply with European Union law will trickle into Facebook’s privacy policies for the U.S., but it is always wise to be wary of what you post and to periodically review social media privacy policies.

Security-Threatening Dating Apps and Their Effect on Employers

A study released this month reviewed 41 of the most popular dating apps for cyber security vulnerabilities. According to the study, 60 percent of the apps are “vulnerable to potential cyberattacks that could put personal user information and organizational data at risk.” The study showed that hackers could gain access to users’ locations, photos, contacts, microphones, billing information, and even the ability to change one’s dating profile. Even more concerning, the study revealed that 50 percent of companies have employees who use dating apps on their work devices, putting potentially confidential company information at risk.

Companies and online daters should be aware of the security risks these apps may pose. Companies may want to consider policies prohibiting or limiting the use of dating and other potentially risky apps on work devices to prevent exposure to confidential company information. Online daters should remember to keep their profiles vague, review app permissions regularly, and delete their profiles once they have found that special someone.  Those who do not use dating apps should consider similar self-protective privacy measures when using any app.  At a minimum, companies and their employees should have a set policy and procedure in place to counter the risks associated with these personal apps to prevent the potential breach or loss of both personal and company information.

Obama Administration Modifies Data Collection Rules in Response to Snowden Breach

The American people should be able to sleep a little easier tonight after the Obama administration set new limits on how the National Security Agency and other parts of the intelligence community collect personal data.  While the new policy does not put an end to the bulk collection program revealed by former National Security Agency contractor Edward Snowden, it does limit the situations in which, and the time period for which, intelligence agencies may collect bulk data.

The new policy, announced on February 3, 2015 by the Office of the Director of National Intelligence, identifies six situations in which intelligence agencies can collect bulk data: (1) to counter foreign spying, (2) to thwart terrorism, (3) to prevent nuclear proliferation, (4) to safeguard cyberspace, (5) to detect threats to U.S. and allied armed forces, and (6) to combat transnational criminal threats.  Additionally, the new policy requires intelligence agencies to delete data on foreigners after five years if it is not relevant to any ongoing investigation.  Similarly, data on American citizens must be deleted if it lacks foreign intelligence value.

The new data collection rules evolved in part from one of the largest security breaches in recent history, when Edward Snowden revealed that the U.S. government was conducting mass surveillance on Americans and foreigners.  In June 2013, Snowden, a former system administrator for the Central Intelligence Agency and a counterintelligence trainer at the Defense Intelligence Agency (DIA), disclosed to several media outlets thousands of classified documents that he acquired while working as an NSA contractor for Dell and Booz Allen Hamilton inside the NSA center in Hawaii.  Snowden’s leaked documents revealed numerous global surveillance programs, U.S. military capabilities, operations, tactics, techniques and procedures.

For the American people, who are no strangers to personal data breaches after millions of Americans were affected by the Home Depot, Target and other data breaches in 2014, the new policy will end the bulk collection of communications and communications metadata about people who have no connection to terrorism or other crimes.

The new policy also addresses the need for new training, oversight, and compliance requirements for handling personal data, including mandatory training programs to ensure that intelligence officers know and understand their responsibility to protect the personal information of all people.

Privacy advocates have characterized the changes as modest, but a step in the right direction.

Image courtesy of Flickr by Ethan Bloch

Gordon & Rees Wishes Everyone a Happy Privacy Day!

On Jan. 22, Gordon & Rees presented its inaugural Legal Education Conference, a day of informative programs covering 10 legal areas of key importance to businesses. The Privacy and Data Security Group presented on the topic “Trends in Data Breach, Emerging Regulations, Enforcement and Lawsuits” at the Convene Conference Center in New York City.

The program panelists included Gordon & Rees attorneys Andrew Castricone, Craig Mariam, Linda Mullany, Peiyi Chen, and Hazel Mae Pangan, who discussed the triggering events and identification of a data breach incident, responsive and investigative measures, notification requirements to government agencies and consumers, and customer/client complaints and lawsuits. In addition to retail and institutional breaches, the panelists reviewed HIPAA/HITECH Privacy and Security Rules, as well as the HIPAA Breach Notification Rule, including its similarities and differences to other data security rules, and the Enforcement Rule under HIPAA. More than 200 guests, including clients, attorneys, business owners, consultants and industry experts were among those in attendance.

For your reference, we’ve provided the Cyber/Data Breach Reference Guide: Best Practices, State Surveys, HIPAA Enforcement. This helpful guide includes a 50-state survey of current data breach statutes as well as an additional 50-state survey of current data destruction statutes.

We thank all those who attended and helped make the symposium a great success.

FTC Charges Data Broker with Theft of Consumers’ Information and Money from Accounts

According to a recent Federal Trade Commission complaint, a data broker sold sensitive personal information of hundreds of thousands of consumers – including Social Security and bank account numbers – to scammers who allegedly debited millions from their accounts.  The complaint alleges that data broker LeapLab bought payday loan applications of financially strapped consumers, and then sold that information to marketers whom it knew had no legitimate need for it. At least one of those marketers, Ideal Financial Solutions – a defendant in another FTC case – allegedly used the information to withdraw millions of dollars from consumers’ accounts without their authorization.

According to the FTC’s website and the complaint, these defendants would collect hundreds of thousands of payday loan applications from payday loan websites.  These website applications, including those bought and sold by LeapLab, contained consumers’ sensitive financial information, names, addresses, phone numbers, Social Security numbers and bank account numbers including routing numbers.

The FTC’s complaint alleges that certain non-lender third parties included marketers that made unsolicited sales offers to consumers via email, text message, or telephone calls.  According to the FTC’s complaint, the defendants had reason to believe these marketers had “no legitimate need” for the sensitive information they were selling. The defendants in the case are alleged to have violated the FTC Act’s prohibition on unfair practices.

The FTC notes that it files a complaint when it has “reason to believe” that the law has been or is being violated and it appears to the FTC that a proceeding is in the public interest.  We will monitor this case and provide further updates of interest.

Image courtesy of Flickr by John Taylor.

FTC Approves Final Order Requiring Snapchat to Implement a Stronger Privacy Policy

The Federal Trade Commission (FTC) recently approved a final order settling charges against Snapchat, Inc. (Snapchat), the developer of a mobile application that allows users to exchange impermanent photographs, referred to by Snapchat as “snaps” (the “FTC order”).

When Snapchat was launched in May 2012, users were sending approximately twenty-five snap images per second.  By November 2013, that figure surged to nearly four hundred million snaps per day, and continues to grow.  Many attribute Snapchat’s immense popularity to the intuitive user interface, the scarcity effect tied to the vanishing snaps, and Snapchat’s promise that images and video sent through the application would be irretrievably destroyed and not digitally archived after viewing.

In May 2013, the Electronic Privacy Information Center (EPIC) filed a complaint with the FTC alleging that Snapchat deceptively misled consumers into believing that snaps would be destroyed within seconds of viewing when, in fact, they are stored on users’ phones in a relatively accessible form and can be easily captured by way of “screen-shotting” the image.  EPIC further claimed that Snapchat failed to establish and enforce security measures to protect user data.

The FTC order settles EPIC’s allegations and forbids Snapchat from misrepresenting (1) the extent to which a snap is deleted after being viewed; (2) the extent to which Snapchat is capable of detecting or notifying senders when a recipient has saved a snap; and (3) the steps taken by Snapchat to protect against misuse of user information.

The final order also directs Snapchat to implement a privacy program that will be monitored for the next twenty years.  Additionally, Snapchat agreed to revise its privacy policy to address privacy risks and to protect the confidentiality of information about its users, including names, addresses, online contact information, telephone numbers, IP addresses, geo-location, usernames, and passwords.  The revised Snapchat privacy policy now provides that Snapchat “can’t guarantee that messages will be deleted within a specific timeframe” and that, even after a snap is deleted from Snapchat’s server, it “may remain in backup for a limited period of time.”  Snapchat also now warns that “there may be ways to access messages while still in temporary storage on recipients’ devices or, forensically, even after they are deleted.”

The final order furthers the FTC’s recent efforts to ensure that companies in the post-smartphone era describe mobile applications truthfully and uphold privacy promises to end users.  The approval of the final order could well inspire other applications that promise anonymity and privacy — such as Slingshot (Facebook’s answer to Snapchat), and Whisper and Secret (applications that allow users to make anonymous confessions) — to reassess the way in which their current privacy policies are drafted and enforced.

Image courtesy of Wikimedia Commons 

‘Twas the Season for Data Breaches

With the recent hacks into Sony’s system and the emails sent to Home Depot’s customers regarding the breach of its system, data breach is no longer some fantastical notion that only plays out in a 1980s sci-fi movie. It is a real threat to businesses and their employees and customers, and that threat rises during the holiday season, when the average consumer spends approximately $800 on gifts for family, friends, and co-workers.

Venture back with me to December 2013, when Target Corporation announced that it was hacked, which resulted in 110 million of its customers having their credit- and debit-card information stolen. When I came across a recent ruling in that case, my reaction was: “Oh, yes. I vaguely remember that happening,” and I might have even been a customer who received an email from Target explaining the breach. My point is that, as consumers, the shock has worn off, and we are not surprised to hear about such breaches. But businesses cannot be so cavalier—the courts require vigilance in the protection of data.

As we have reported on our blog, multiple lawsuits arose shortly after Target’s announcement, resulting in the consolidation of all federal cases into In re: Target Corp. Customer Data Security Breach Litig., which involved claims brought by financial institutions on one hand, and by consumers on the other.  Just last month, the District of Minnesota ruled largely in favor of the financial institutions on Target’s motion to dismiss, holding that they had adequately alleged that Target breached its duty to maintain adequate security systems.

Just in time for the holiday season, the now famous Sony breach (which, in part, resulted in the cancellation of most theater showings of the movie, “The Interview”) has triggered at least five class-action complaints filed in California federal court against Sony Pictures Entertainment, Inc.  The hacking incident allegedly exposed volumes of confidential emails, social security numbers, and salary and medical information of Sony’s former and current employees.  The gist of the complaints is that Sony, despite being aware that hackers were able to breach their system, “failed to develop, maintain, and implement internet security measures on its corporate network,” and this led to the catastrophic data breach that one complaint calls an “epic nightmare.”  Just last week at the Consumer Electronics Show, Sony’s CEO, Kazuo Hirai, described the hack, noting that Sony and its current and former employees “were the victim[s] of one of the most vicious and malicious cyber attacks in recent history.”

The class action filed in Los Angeles Superior Court also blames Sony for its decision regarding “The Interview,” since the film allegedly sparked the ire of hackers who were not pleased with the subject matter (a planned talk show assassination of North Korea’s leader, who was heavily parodied).  In addition to its limited theatrical release, it was recently reported that the film has earned over $30 million in online and on-demand sales.

It is too early to predict the outcome of these actions, but it is likely that the federal complaints regarding Sony will ultimately be consolidated.  As with most data breach cases, we anticipate heavily briefed motions to dismiss on standing and other grounds.  We will, of course, track these cases and provide updated reports as developments unfold.

State Law Claims Viable For Violations of HIPAA

In a recent opinion, the Connecticut Supreme Court determined that state law claims based on violations of the Health Insurance Portability and Accountability Act (HIPAA) were viable.

The plaintiff in Byrne v. Avery Ctr. for Obstetrics & Gynecology, P.C., 314 Conn. 433 (Conn. 2014) was involved in a paternity suit and requested that the defendant, her medical provider, not produce any records to her former lover.  However, the defendant was served with a subpoena from the ex-lover, and produced the documents to the court without plaintiff’s knowledge.  See id. at 437.  The plaintiff sued the medical provider after she began experiencing harassment from her ex, who was able to review the medical records.  See id.  In the four-count complaint, the plaintiff alleged breach of contract, negligence, negligent misrepresentation, and negligent infliction of emotional distress.  See id. at 438-439.  In particular, she alleged that the defendant violated HIPAA by producing medical records without authorization.

The court determined that “the regulatory history of the HIPAA demonstrates that neither HIPAA nor its implementing regulations were intended to preempt tort actions under state law arising out of the unauthorized release of a plaintiff’s medical records.  As the plaintiff aptly notes, one commenter during the rulemaking process had raised the issue of whether a private right of action is a greater penalty, since the proposed federal rule has no comparable remedy.”  Id. at 453.  Accordingly, the court found that HIPAA did not preempt state law claims for alleged breaches of confidentiality.  See id. at 459.  However, the court declined to decide, as a matter of law, whether the defendant was negligent in producing the medical documents, and remanded to the trial court for further proceedings.  We will continue to provide updates on this case.