Whose world are you watching?

How secret algorithms control the news we see

From the archive | With Facebook facing ever-growing scrutiny, Eurozine revisits Mark Frary’s report for ‘Index on Censorship’ on how tech companies decide which news items you see online.

If you are on Twitter and follow a large number of people, watching your feed is like looking at The Matrix in encoded form – the news and opinions flow quickly over your screen rather like the strange green characters that drift down screens in the movie.

Yet if your usual view of the world is through Facebook, you will see a different version of reality.

Most of what you see on Twitter is curated by you, while what you see on Facebook is curated for you. Twitter gives you the raw output of those you follow, while Facebook’s algorithms (complex, artificially intelligent filters) decide what to show you, taking into account how often you ‘like’, comment on or share similar material, and which people and organisations you interact with regularly.
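To make the contrast concrete, here is a minimal sketch in Python, with entirely made-up authors, topics and weights, of the two approaches: a chronological feed simply sorts posts by time, while an engagement-ranked feed scores each post by how often you interact with its author and how much you have ‘liked’ similar material. Neither function is Twitter’s or Facebook’s actual code.

```python
# Toy illustration only: a chronological feed versus an engagement-ranked feed.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    timestamp: float  # seconds since some arbitrary start

def chronological_feed(posts):
    """Twitter-style (circa 2015): newest first, no filtering."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked_feed(posts, interactions_with, liked_topics):
    """Facebook-style in spirit: boost authors you interact with and topics you 'like'."""
    def score(p):
        return interactions_with.get(p.author, 0) + 2.0 * liked_topics.get(p.topic, 0)
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("local_news", "ferguson", 100.0),
    Post("friend_a", "ice_bucket_challenge", 90.0),
    Post("friend_b", "ice_bucket_challenge", 80.0),
]
print([p.topic for p in chronological_feed(posts)])
# For a user who mostly 'likes' light-hearted posts, Ferguson sinks to the bottom.
print([p.topic for p in engagement_ranked_feed(
    posts,
    interactions_with={"friend_a": 5, "friend_b": 3},
    liked_topics={"ice_bucket_challenge": 4})])
```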

The divergence between these two views of the world was starkly illustrated by the two networks’ coverage of the riots that followed the 2014 shooting of African-American teenager Michael Brown by Darren Wilson, a white police officer in Ferguson, Missouri.

Zeynep Tufekci, an assistant professor at the University of North Carolina, has been quoted saying that while her Twitter feed was rapidly taken over by live coverage of what was happening in Ferguson, her Facebook News Feed remained obsessed with the ice bucket challenge, a worldwide charity stunt that was going viral at the same time.

Ferguson did eventually start to show up on Facebook but only much later.

Writing for anarcho-collective writing group The Message a week after the shooting, Tufekci asked: ‘What if Ferguson had started to bubble, but there was no Twitter to catch on nationally? Would it ever make it through the algorithmic filtering on Facebook? Maybe, but with no transparency to the decisions, I cannot be sure.’

Other social media users noted the same ‘censorship’ and there were some attempts to quantify this, but true comparisons are made harder because Facebook, unlike Twitter, does not give unfettered access to its raw, unfiltered feed.

Algorithmic filtering is not confined to Facebook. Dr Robert Epstein of the American Institute for Behavioral Research and Technology (AIBRT) is a senior research psychologist who has studied what he calls the search engine manipulation effect. He said: ‘It’s likely that all important decisions that people make these days are influenced by Google to some extent. In a nutshell, people are likely to believe whatever Google – using its secret algorithm – chooses to rank higher in its search results.’

Google’s algorithm has been designed and tweaked to surface the best-quality content from the billions of pages that make up the web. When it was first designed, the PageRank algorithm used links from other sites, weighted by the importance of the pages doing the linking, as its guide to what was important and relevant, but the algorithm now draws on a huge number of different factors to rank pages.
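The original PageRank idea is public (it was described by Google’s founders in 1998) and can be sketched on a toy link graph: a page’s score depends not just on how many pages link to it but on how important those linking pages themselves are. The graph, damping factor and iteration count below are illustrative only; Google’s production system, as noted above, now combines many more signals.

```python
# A minimal PageRank sketch on an invented four-page web.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                      # dangling page: share its rank with everyone
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

toy_web = {"bbc": ["blog"], "blog": ["bbc"], "spam": ["bbc"], "forum": ["bbc", "blog"]}
print(pagerank(toy_web))  # 'bbc' ends up ranked highest: it is linked to by the most pages
```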

The company began by personalising results for people who were signed into their Google account, but this has since been extended to all users. So if you regularly search for topics and click on links on, for instance, the BBC news website, you are more likely to be shown search results from the BBC.

But Epstein says that Google’s algorithm has a darker side. His research, conducted with AIBRT’s Ronald Robertson, contends that the way results are presented in a Google search is not merely cosmetic, but can influence events on the world stage.

His latest research seeks to explain the ability of search rankings to affect opinions, and he believes we are beginning to accept what Google presents as fact.

‘[Google’s] power comes from the training we all get every day while conducting routine searches – searches in which we’re seeking to answer simple factual questions, such as “What is the capital of Uganda?”’ he told Index. ‘This type of search virtually always gives us the correct answer in the top search position. As a result, in routine searches we learn, over and over again, that what is higher is truer.’

This could, in turn, have an effect on who we vote for. He said: ‘We are likely to believe, for example, that if high-ranked search items make one candidate look better than another, he or she must be the better candidate.’

Epstein and Robertson tested their theory on a total of 4,556 undecided voters representing the diverse demographic characteristics of the voting populations of the US and India. The experiment involved changing the order of the results presented by Google to see whether the ranking influenced voting intentions.

The results were shocking; the study showed that biased search rankings could shift the voting preferences of undecided voters by 20% or more, that the shift can be much higher in some demographic groups, and that search-ranking bias can be masked so that people show no awareness of the manipulation.

The AIBRT research has been challenged by Google. Amit Singhal, a senior vice president at Google who heads up the search engine’s ranking team, stated in an article in Politico magazine that ‘there is absolutely no truth to Epstein’s hypothesis that Google could work secretly to influence election outcomes. Google has never ever re-ranked search results on any topic (including elections) to manipulate user sentiment’.

But look at this rebuttal more closely. Singhal specifically says ‘re-ranked’ rather than ‘ranked’. What he means by this is that the algorithm decides on the ranking of search results and that no one goes in and manipulates them afterwards. Google’s stated mission to ‘organize the world’s information and make it universally accessible and useful’ should perhaps have a caveat – ‘as long as our algorithm decides you should see it’.

The AIBRT findings are echoed by the results of a study at Stanford University on homophily, the long-observed tendency of people to socialise with similar people and take on shared views. In a paper titled ‘Biased Assimilation, Homophily, and the Dynamics of Polarization’, Pranav Dandekar and colleagues looked at whether this could explain the observed increase in the polarisation of views in US society.

Their research showed that a process called biased assimilation – where people tend to believe evidence that supports their existing views more than they do evidence that opposes it – also plays a part.

The Stanford research went on to analyse common algorithms used on popular websites to generate personalised results based on user likes and dislikes, to see whether they had a polarising influence on opinions; that is, whether you would be more likely to purchase or believe a product or idea if an algorithm recommended it.

The results were clear.

‘The system that recommends the most relevant item to a user turns out to be always polarising,’ argued Dandekar. This corroborates Epstein’s work: the more we see something recommended, the more likely we are to assimilate that view as being the truth.
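A toy simulation can illustrate the combined effect the Stanford team describe. The sketch below is loosely inspired by the biased-assimilation idea rather than a reproduction of the paper’s model: a user’s opinion is a number between 0 and 1, evidence for each side is weighted by how far the user already leans that way, and a ‘most relevant item’ recommender keeps serving whichever side sits closer to the user’s current opinion. All parameters are invented for illustration.

```python
# Toy polarisation sketch: biased assimilation plus 'most relevant item' recommendation.
def biased_update(x, pro, con, b=2.0, self_weight=1.0):
    """One biased-assimilation step: evidence for position 1 is weighted by x**b,
    evidence for position 0 by (1 - x)**b, so existing leanings are reinforced."""
    num = self_weight * x + (x ** b) * pro
    den = self_weight + (x ** b) * pro + ((1 - x) ** b) * con
    return num / den

def most_relevant_recommender(x):
    """Serve one unit of whichever kind of evidence is closer to the user's current opinion."""
    return (1.0, 0.0) if x > 0.5 else (0.0, 1.0)

for start in (0.55, 0.45):          # two users with only a slight initial lean
    x = start
    for _ in range(30):
        pro, con = most_relevant_recommender(x)
        x = biased_update(x, pro, con)
    print(f"start={start:.2f} -> after 30 rounds x={x:.2f}")
# The two users drift to opposite extremes even though they started close together,
# which is the polarising dynamic Dandekar describes.
```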

While these results concern Google, those interested in the freedom to form their own opinions should perhaps be more concerned about Facebook’s algorithm.

Facebook is now a community of 1.5 billion people, political parties, businesses and organisations. Between them, they post more than a billion pieces of content – what they had for breakfast, their views on pop stars, quizzes on their favourite soap opera character, as well as more serious things such as news articles and personal news. Facebook itself says that at any given time there are 1,500 pieces of content from friends and others you follow that you could be reading.

Clearly, not everyone has the time to absorb all of that, so Facebook uses an algorithm known as EdgeRank to filter the content and make your feed more manageable. The way EdgeRank works was the focus of a fascinating piece of research by Wired magazine’s Mat Honan in 2014. He liked everything he was presented with on the network for 48 hours and his news feed soon became a torrent of the extreme and extremely weird.

His article on the project concluded:

‘We set up our political and social filter bubbles and they reinforce themselves – the things we read and watch have become hyper-niche and cater to our specific interests. We go down rabbit holes of special interests until we’re lost in the queen’s garden, cursing everyone above ground.’
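The EdgeRank mechanism Honan was probing has been widely described, based on Facebook’s own public presentations around 2010, as a sum, for each story, of affinity (how close you are to the person acting), the weight of the type of action, and a time decay. The sketch below illustrates that formula with invented weights; Facebook has since replaced EdgeRank with a far more complex machine-learning system.

```python
# A rough sketch of the EdgeRank idea: score = sum over edges of affinity x weight x decay.
ACTION_WEIGHTS = {"comment": 4.0, "share": 3.0, "like": 1.0, "click": 0.5}  # assumed values

def edge_score(affinity, action, age_hours, half_life_hours=24.0):
    decay = 0.5 ** (age_hours / half_life_hours)   # older interactions count for less
    return affinity * ACTION_WEIGHTS[action] * decay

def story_score(edges):
    """edges: list of (affinity, action, age_hours) tuples for one story."""
    return sum(edge_score(a, act, age) for a, act, age in edges)

# A post that close friends have commented on and shared outranks
# a fresher post carrying one stray like from an acquaintance.
print(story_score([(0.9, "comment", 2), (0.8, "share", 5)]))
print(story_score([(0.1, "like", 1)]))
```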

Honan’s experience is not unique. Indeed, the far-right hate group Britain First has mastered the art of exploiting this engagement-driven filtering, using seemingly innocent posts on Facebook – asking users to express empathy for veteran soldiers, for example – to boost its reach and influence.

Britain First’s messages would not be so worrying were it not for the fact that younger people increasingly find their political news on Facebook. Research in 2014 by the Pew Research Center revealed that 61% of millennials, who reached adulthood after 2000, had heard political news or information via Facebook in the past week, compared with just 37% via local television. By contrast, baby boomers were the mirror image: 39% got their political news via Facebook and 60% from local television.

You might argue that television and newspapers are also subject to censorship through their editorial decisions. But then, at least, there is a person at the heart of the decision-making process. What is at issue is the transparency. We can imagine how commissioning editors think, but the algorithms behind Facebook and Google are opaque.

Emily Bell, a journalism professor at New York’s Columbia University, said in her 2014 Reuters Memorial Lecture: ‘Social media platforms have been insistent that they are not interested in the difficult and expensive business of employing actual journalists or making “editorial decisions”. Their culture is as alien to reporting and editing as ours is to designing social software. Of course, every algorithm contains editorial decisions, every piece of software design carries social implications.’

She continued: ‘If there is a free press, journalists are no longer in charge of it. Engineers who rarely think about journalism or cultural impact or democratic responsibility are making decisions every day that shape how news is created and disseminated.

‘In creating these amazingly easy-to-use tools, in encouraging the world to publish, the platform technologies now have a social purpose and responsibility far beyond their original intent. Legacy media did not understand what it was losing, Silicon Valley did not understand what it was creating.’

George Brock, professor of journalism at City University in London, told Index: ‘Facebook is not, and knows quite well it is not, a neutral machine passing on news. Its algorithm chooses what people see, it has “community standards” that material must meet and it has to operate within the laws of many countries.’

He added: ‘Facebook is a private-sector company which has grown and made billions by very successfully keeping more people on its site for longer and longer. I can imagine that any suggestion that there are responsibilities which distract from that mission must seem like a nuisance.’

Twitter, whose feed is often referred to as a firehose because it supposedly offers an unfiltered view of what those you follow are tweeting about, may not be immune to this.

Activist Paul Dietrich claimed in October 2015 that tweets relating to the US military’s programme of unmanned drone strikes had been censored from the feed. Twitter denies that it censors tweets, except in very limited circumstances.

Media anthropologist Kerric Harvey at George Washington University said the biggest problem with algorithms taking over the gatekeeper role is that they do not care about the content, only about creating connections among the reader’s contacts. Speaking to Index, she said: ‘What we’re looking at here is a big difference between two conflicting agendas, which conflate into the same single function of curated news selection.’

‘The result? Filter bubbles are as much a function of the inferential profile of a person’s contacts as they are a direct result of an algorithm replacing a traditional editor. In terms of actually making news selections, it’s about as efficient as trying to fly a jumbo jet by committee. And, to an informed democracy, almost as dangerous.’

Psychologist Dr Robert Epstein concludes: ‘As long as the filtering is secret, beyond our control, and financially motivated, we should be extremely concerned. Filtering on any massive online platform such as Facebook is rapidly becoming the most powerful form of mind control that has ever existed.’

Published 15 December 2015
Original in English
First published by Index on Censorship (Winter 2015)

Contributed by Index on Censorship © Mark Frary / Index on Censorship / Eurozine
