In January 2012 the social media giant Facebook undertook what can only be called a research experiment on roughly 700,000 of its users. The company strategically skewed what these users saw when they logged into their personal profiles: half were shown content weighted towards happier, more positive language, while the other half were met with words known to carry sadder connotations than average. Was it really ethical to manipulate this large group of unsuspecting users in this way? Facebook have argued that they were merely testing their product rather than undertaking research, but the project has provoked significant outrage.
One of the biggest problems with social media is the perceived loss of privacy that comes with discussing your private life online. It seems bizarre that, with this concern so regularly debated, Facebook should choose to conduct such an experiment on unwitting users. Could it be that Facebook genuinely believed what they were doing was morally acceptable?
In the past few weeks an American professor has argued that the experiment violated a law in the state of Maryland requiring human subjects to give informed consent before participating in a study. Facebook have retorted that what they did was not research at all. There seems to be some regret at a poor business decision, but no admission of ethical culpability.
Objectively, this sort of action must be considered experimental: manipulating an individual's emotions should be treated on a par with physical research. More concerning still is the fact that children and adolescents would have been within the sample, unaware that what they were seeing was being controlled, in a Big Brother-esque manner, by adults with no right to guide them. Facebook are in a position of great power, with a global audience of over 500 million people, and it is of the utmost importance that they act responsibly when dealing with their customers.
What has to be realised is that Facebook is a company that wants to grow constantly and improve its product. Facebook is so ingrained in our daily lives that when it changes things, people will inevitably be affected. Should we be looking for stronger global regulation to protect us from this kind of manipulation? And how would we enforce it? There is no straight answer to either question, but the balance needs to be right. People deserve to have their privacy respected by Facebook, while the company itself has the right to expand. The question is: how can we do both?