Yes there is a Data Privacy Day, and it will be here soon

The Target security breach and the Snowden revelations about NSA surveillance have raised awareness of data privacy to new levels, making Data Privacy Day more relevant than ever in 2014. And yes, Data Privacy Day is a real thing, observed on January 28.

The closely entwined topics of security and privacy are currently at the forefront of public consciousness because of two events in 2013: the Snowden revelations and the Target data breach. Further events, like the security holes in supposedly private messaging service Snapchat, are making Data Privacy Day more relevant than ever in 2014. And yes, Data Privacy Day is a real thing, observed on January 28.

Data Protection Day

The United States and Canada started observing Data Privacy Day in January 2008 as an extension of the Data Protection Day celebration in Europe. That event commemorates the signing, in 1981, of Convention 108, the first legally binding international treaty dealing with privacy and data protection (its full title is the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data).

ESET experts will be participating in Data Privacy Day events on January 28, 2014. In America, Data Privacy Day happenings are led by our good friends at the National Cyber Security Alliance, a nonprofit, public-private partnership dedicated to cybersecurity education and awareness. You may know them as the folks behind StaySafeOnline and National Cyber Security Awareness Month, which is October (you may recall ESET experts participated in numerous NCSAM events last October, including Twitter chats).

We will be featuring privacy-related content on We Live Security this month, and throughout the rest of the year. While data privacy is obviously a hot topic right now, it has been near the top of the public agenda for a long time. When the Wall Street Journal and NBC conducted a telephone poll of more than 2,000 adults at the end of 1999 and asked them what they feared most in the coming century, "loss of personal privacy" topped the list, cited as the number one concern by 29 percent of respondents, well ahead of overpopulation, acts of terrorism, and racism.

Since then we have had new privacy laws and rules involving financial services, medical services, and more. We have also had a number of important privacy incidents and legal settlements, one of which I describe below, the twelfth anniversary of which occurs this month. I hope that this article, largely unaltered since I first wrote it in 2003, serves as a valuable reminder to companies like Snapchat that luring customers with pledges of privacy is not without risk.

A Prozac Moment in Privacy and Marketing

In September of 2000 someone broke into the website of the Western Union money transfer service and compromised about 20,000 credit cards, including one of mine. This event was described by the press as a “security incident.” Two years later, when the press reported that eight million credit cards had been compromised at another company, it was called a “privacy incident.” This subtle shift in language underlined a major shift in consumer perception. When incidents occur that result in the exposure of personally identifiable information–known in privacy circles as PII–the media will pounce, the public will take notice, and any individual who feels they suffered as a result of the exposure will find the lawyers lining up to take their case.

This shift in perception is important for many reasons. A security breach sounds like a technical failure, but a privacy breach sounds more like a moral failure, a breach of trust. This is a pity because many of these incidents could be avoided if companies simply paid closer attention to time-honored business practices like disciplined software development methods and quality assurance controls. These can go a long way toward ensuring the protection of customer PII. Indeed, the first really big privacy incident of this century, the so-called "Eli Lilly Prozac Email Incident," was a case of software development and quality assurance gone wrong. (I know this because I assisted the Federal Trade Commission with its investigation of, and ensuing settlement with, Eli Lilly; however, nothing in this article is privileged information: it is all there in the public record.)

Some readers may be familiar with this particular incident, but a surprising number of people are not, including some for whom protecting customer PII is a full-time job. For example, in 2003 I helped present a series of privacy seminars for another large pharmaceutical company. To our surprise, less than a third of those attending were aware of the facts of this very relevant case, so they obviously bear repeating. Here they are, in the words of the FTC (the term "respondent" refers to Eli Lilly, and "Medi-messenger" is an email reminder service that the company promoted on its website):

“On June 27, 2001, at respondent’s direction, an Eli Lilly employee sent an email message to Medi-messenger subscribers announcing the termination of the Medi-messenger service. To do this, the employee created a new computer program to access subscribers’ email addresses and send them the email. The June 27th email disclosed the email addresses of all 669 Medi-messenger subscribers to each individual subscriber by including all of the recipients’ email addresses within the “To:” line of the message. By including the email addresses of all Medi-messenger subscribers within the June 27th email message, respondent unintentionally disclosed personal information provided to it by consumers in connection with their use of the Web site.”
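The technical failure the FTC describes is easy to reproduce and easy to avoid: a single message with every subscriber listed in the To: header, rather than one message per recipient (or a Bcc: field). A minimal Python sketch of the safe pattern, using hypothetical addresses and an illustrative sender domain:

```python
from email.message import EmailMessage

# Hypothetical subscriber list (illustrative only)
subscribers = ["alice@example.com", "bob@example.com", "carol@example.com"]

def build_messages(recipients, body):
    """Build one message per recipient so no address leaks to the others.

    The failure mode in the Lilly incident was the opposite: one message
    with all 669 subscriber addresses packed into a single To: header.
    """
    messages = []
    for addr in recipients:
        msg = EmailMessage()
        msg["From"] = "reminders@example.com"
        msg["To"] = addr  # only this recipient's own address appears
        msg["Subject"] = "Service announcement"
        msg.set_content(body)
        messages.append(msg)
    return messages

msgs = build_messages(subscribers, "The service is being discontinued.")

# Each message exposes exactly one address, its own recipient's:
assert all(m["To"] in subscribers and "," not in m["To"] for m in msgs)
```

The same check (does any outbound message's To: header contain more than one address?) is exactly the kind of pretest the FTC faulted Lilly for skipping.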

You might wonder how this could happen. Surely a company like Eli Lilly has a comprehensive set of information security policies, proper software development procedures, and a software quality assurance program. In fact, Eli Lilly had all of these. For example, there was a policy that said no code was to be put into production without adequate testing and supervisor approval. But here’s the rub, something you will see in a lot of other companies: those rules were applied mainly to the IT department, the folks who grew out of the mainframe, in-house data processing departments of yore. Those rules had not been applied consistently to the Internet team, the fast-moving, fleet-footed, code-for-the-moment folks who brought you the corporate website. And who manages email? In many companies, it’s those same Internet folks, who may not be accustomed to, or feel bound by, standard IT safeguards and protocols. Here is more of what the FTC said in the settlement with Lilly that was announced on January 18, 2002:

“The June 27th disclosure of personal information resulted from respondent’s failure to maintain or implement internal measures appropriate under the circumstances to protect sensitive consumer information. For example, respondent failed to provide appropriate training for its employees regarding consumer privacy and information security; failed to provide appropriate oversight and assistance for the employee who sent out the email, who had no prior experience in creating, testing, or implementing the computer program used; and failed to implement appropriate checks and controls on the process, such as reviewing the computer program with experienced personnel and pretesting the program internally before sending out the email. Respondent’s failure to implement appropriate measures also violated certain of its own written policies.”

There are numerous lessons to be learned here, especially if your company wants to avoid hefty fines and twenty years of government oversight (which were the consequences for Eli Lilly). First of all, companies need to make sure that there are strict rules for software development, and that everyone doing software development is playing by them (simply having rules was not a defense as far as the FTC was concerned, because they were not applied to the folks who were developing on the cutting edge–in 2014, think mobile app developers).

The second lesson is that all employees need to be made aware of the company's privacy policies (assuming you have these properly documented). Today's smart companies are making sure that every employee who deals with customer PII, even the folks in IT, whom you might not think of as "customer" people, is aware of just what a big deal it is to breach the privacy promises that the company has made to its customers. Any transgressions that come to the attention of management should be addressed (this may not mean firing people–but if you don't enforce a policy it is legally useless in your defense).

The third lesson, from this and other recent incidents, is that bad news can have a cumulative effect. For example, less than six months after Eli Lilly reached a settlement with the FTC over the privacy problem, the company was accused of another Prozac-related privacy violation. This second case involved samples of Prozac which were mailed to people in Florida, through a marketing deal involving–allegedly–the recipient's physician, the recipient's pharmacist, and Eli Lilly sales reps. Reporters writing about this incident took the opportunity to remind people of the company's troubles with the FTC over the earlier Prozac-related privacy incident. And you can bet that the states weighed in with their own fines levied against Lilly.

The fourth lesson is that marketing is probably the “hot spot” for privacy of customer information. The potential for marketing via the Internet is so enormous, and the perceived cost-of-entry so low, it is understandably difficult for marketing folks to resist the urge to rush out and put up a website or send out a zillion emails. But if something goes wrong, your company could be paying hundreds of thousands of dollars to fix it, money that would have been far better spent doing it right the first time.

I will close by quoting J. Howard Beales, III, Director of the FTC’s Bureau of Consumer Protection:

“Even the unintentional release of sensitive medical information is a serious breach of consumers’ trust. Companies that obtain sensitive information in exchange for a promise to keep it confidential must take appropriate steps to ensure the security of that information.”
