So, an interesting controversy over Facebook experimentation this week…
Recent reports have covered a study carried out by Facebook in which the newsfeed content of users’ pages was manipulated to see whether the emotional content of the posts shown had an emotional effect on the user. The study, conducted in partnership with researchers from the University of California and Cornell University, found that the emotional tone of what users see changes the emotional tone of what they write in their own updates. The authors suggest this is evidence for ‘emotional contagion’ even through cyber-space. More interesting than the finding itself is the ethical debate this study has raised.
Did the Facebook study obtain informed consent, a requirement for all research studies carried out with human participants? In my mind it didn’t, and this raises important further questions about whether we want companies to be required to obtain informed consent in circumstances like this, and if not in this circumstance, then when? Where do we draw the line on what it is OK for large companies to do, not just in terms of collecting data, but in terms of manipulating our experiences with the specific aim of influencing our mood, or some other personal state?
The British Psychological Society’s Code of Ethics clearly defines obtaining informed consent as ensuring that:
“Clients, particularly children and vulnerable adults, are given ample opportunity to understand the nature, purpose, and anticipated consequences of any professional services or research participation, so that they may give informed consent to the extent that their capabilities allow.”
It also guides researchers:
“Unless informed consent has been obtained, restrict research based upon observations of public behaviour to those situations in which persons being studied would reasonably expect to be observed by strangers, with reference to local cultural values and to the privacy of persons who, even while in a public space, may believe they are unobserved.”
For people whose privacy settings are set to ‘friends only’ on Facebook, I’m not sure that we can say that Facebook is a public space. Even if we can, the research Facebook did was not purely observational because it involved manipulating the newsfeed with the stated aim of manipulating emotion.
The BPS also says researchers should:
“Obtain supplemental informed consent as circumstances indicate, when professional services or research occur over an extended period of time, or when there is significant change in the nature or focus of such activities.”
So one ‘yes’ doesn’t mean you can carry on researching on someone forever without them consenting separately to separate studies. But Facebook seem to be saying that ticking one ‘catch-all’ terms and conditions box should allow any research to be carried out.
Maybe the potential consequences of this experiment are not so bad… Tweaking people’s mood for one week: so what? But if we say that this is OK, then where do we draw the line? Any situation where we give a blanket yes or no to consent is potentially dangerous. For this same reason, whenever the capacity to consent to something is assessed in a legal or health context, it is assessed for one thing at a time, not as some kind of blanket statement about an individual’s overall capacity. For example, in cases of children under 16 making decisions about where they should live, or of individuals whose mental state might compromise their ability to consent to medical treatment, capacity to make these decisions is assessed at one point in time for one decision, and this is continually reviewed. We should be thinking about informed consent in the same way. Just because a person consents to one type of research doesn’t mean they will consent to every type of research. If someone asked you to sign a contract saying you would be happy for any type of research, psychological, medical or otherwise, to be carried out on you at any point in the future, without being given any more information, would you sign it?
Has Facebook broken the law by doing this experiment? This thorough blog explains why not, and is also an interesting defence of the ethics of the experiment. As it details, although psychological studies which use human participants need to follow strict ethical guidelines, companies are not bound by the same rules. In fact, the Cornell University researchers did apparently apply to an Institutional Review Board (IRB), which said they didn’t need permission because the data had already been collected independently (again potentially worrying). This very good blog suggests one reason for the lack of IRB concern was those vague T&C clauses. Again, though, it’s not so much the legality as the ethics that is the real issue here for me.
If this intervention is alright, then what would become ‘not alright’? Would it be OK for Facebook to hide posts from specific friends with the aim of seeing whether you contacted them less? Would it be alright for them to show you pictures of someone you were ‘in a relationship’ with only when they were with other attractive people, in order to try to make you jealous? Would it be OK for Facebook to experiment with filtering happy posts out of your feed for a month? A year? What about filtering your feed to show you only baby pictures? (I think mine may already do that.)
No easy answers here, but it is well worth thinking about and debating, so I am glad that this is causing a fuss.