Facebook’s 2012 experiment in manipulating the emotions of users, documented in research published last month, may do more than upset people and raise serious privacy issues; it may also cost the social network a lot of money in 2014. That’s because Facebook has been in trouble over matters of data privacy before, leaving it exposed to potentially hefty fines. In November 2011, Facebook agreed to a proposed settlement with the Federal Trade Commission (FTC), something we blogged about numerous times. As we noted back then, the settlement:
“bars Facebook from making any further deceptive privacy claims, requires that the company get consumers’ approval before it changes the way it shares their data, and requires that it obtain periodic assessments of its privacy practices by independent, third-party auditors for the next 20 years.”
Facebook agreed to this in order to settle FTC charges that it deceived consumers “by telling them they could keep their information on Facebook private, and then repeatedly allowing it to be shared and made public.” Included in that settlement, finalized by the FTC in August of 2012, is this language:
“When the Commission issues a consent order on a final basis, it carries the force of law with respect to future actions. Each violation of such an order may result in a civil penalty of up to $16,000.”
This is the same type of provision that resulted in a $22.5 million fine for Google in 2012, when the FTC charged that Google, operating under a similar FTC settlement, misrepresented to users of Apple’s Safari browser “that it would not place tracking ‘cookies’ or serve targeted ads to those users.” This was deemed to violate the earlier settlement, which was meant to resolve FTC charges that Google used deceptive tactics and violated its own privacy promises when it launched its social network, Google Buzz. That Consent Order, issued in October 2011, “barred Google from – among other things – misrepresenting the extent to which consumers can exercise control over the collection of their information.”
Bearing all of the above in mind — the numerous privacy-related FTC settlements and FTC Consent Orders, the potential for fines, and the timeline — it might seem quite staggering that someone at Facebook still thought it was okay to conduct an experiment on users, without telling them or getting their specific, individual permission, during the week of January 11–18, 2012. However, it has been my experience that companies that grow as large as Facebook had by the end of 2011 often fail to keep all departments and personnel on the same page, particularly when it comes to matters of privacy.
A classic indicator of this type of corporate dysfunction is “post facto” remediation; in this particular case, the change that Facebook made to its data use policy in May 2012, four months after the research, adding “research” as a previously unmentioned use of your Facebook data.
As the journalist Kashmir Hill, a keen “watcher” of Facebook, pointed out in Forbes, that provision was not in place when Facebook intentionally manipulated the News Feed for hundreds of thousands of users in order to gauge their emotional response:
“In January 2012, the policy did not say anything about users potentially being guinea pigs made to have a crappy day for science, nor that “research” is something that might happen on the platform.”
To drive the point home, Hill’s article provides this handy link to a PDF of the pre-research version of the Facebook Data Use Policy. Hill also presents the arguments offered by Facebook defenders, namely that “they did this to improve the service” and “every website is doing A/B testing all the time.” However, both arguments seem to miss the point about this research experiment, as Hill herself opines, “the Facebook study with its intention to manipulate the Facebook environment for unknowing users to see whether it made them feel elated or depressed seems different to me than the normal ‘will this make someone more likely to buy this thing’ kind of testing.”
I concur. When you go shopping, at a website or a brick-and-mortar store, you know that what you see is what the retailer wants you to see. You are prepared for the fact that what you see will be influenced by what the retailer knows about you. That is entirely different from (a) Facebook choosing the term “News Feed” to describe a very selective, algorithm-driven view of your social network’s activity, and (b) manipulating that algorithm to manipulate people’s emotions, without their knowledge or permission.
Now that EPIC (the Electronic Privacy Information Center) has filed a formal complaint with the FTC, alleging that Facebook engaged in deceptive trade practices and violated the 2012 Consent Order it entered into with the FTC, there is potential for Facebook to be hit with FTC fines. I think it is quite possible that the commissioners will agree with the arguments made in the complaint (PDF). Bear in mind that the FTC takes EPIC seriously (it was an EPIC complaint that brought about the FTC action against Google mentioned earlier).
And that belated addition of “research” to the Facebook data use policy? It may come back to bite Facebook, given point two of the complaint: “At the time of the experiment, Facebook did not state in the Data Use Policy that user data would be used for research purposes.” It is going to be hard for Facebook to argue that the earlier version of the policy covered research when it later added research as a specific use.
So, how would those fines, if levied, be calculated? That appears to be up to the discretion of the commissioners. However, I expect that someone within Facebook is currently doing some back-of-the-envelope math on the company’s potential exposure.
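A rough sketch of that arithmetic, under one hypothetical assumption: if the FTC were to treat each affected user as a separate violation (the published study reported 689,003 participants), the $16,000-per-violation ceiling would imply a staggering worst case. Whether the commissioners would actually count violations that way is an open question.

```python
# Hypothetical worst-case exposure calculation.
# Assumptions (not established FTC practice): each affected user
# counts as one violation, and the maximum penalty is applied to each.
users_in_study = 689_003            # participants reported in the PNAS paper
max_penalty_per_violation = 16_000  # dollars, per the FTC consent order

worst_case_fine = users_in_study * max_penalty_per_violation
print(f"Worst-case exposure: ${worst_case_fine:,}")  # roughly $11 billion
```

Even if the FTC counted violations far more conservatively, say per day of the week-long experiment rather than per user, the order’s language gives the commissioners considerable leverage.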
America’s FTC is not the only body investigating the Facebook research project. Data protection authorities in the UK and Europe are said to be looking into it. This is not surprising, since (a) some 80% of Facebook users are outside America, and (b) many Europeans have strong opinions about privacy rights and ethical research. A useful document to read in this regard is the Framework for Research Ethics (FRE) published by the Economic and Social Research Council (ESRC). The second of its six “Principles, procedures and minimum requirements” concerns informed consent: participants must normally be fully informed about the purpose, methods, and intended uses of the research, and what their participation entails.
The framework acknowledges that there may be isolated exceptions to informed consent, but it makes clear that these are very narrow, and I don’t think they would apply to Facebook’s research. What do you think about Facebook’s experiment? Does it bother you? Leave a comment and let us know.
Author Stephen Cobb, ESET