Is Personalization More Dangerous Than Fake News?
Analysts infamously misjudged the outcome of the 2016 US presidential election. With the entire world watching, many are still shell-shocked, trying to make sense of exactly how the so-called “experts” got it so wrong.
The Echo Chamber
Increasing distrust of mainstream media is partly responsible for audiences seeking opinions and information on social media instead. Unfortunately, this channel is vulnerable in a number of ways, as this election cycle revealed.
The social media sphere is dominated by click-bait posts and algorithms that heavily promote sensational stories that often report (or misreport) emotional content as fact. This has created a hyper-segmented ecosystem wherein each individual inhabits her own information bubble, fed and sustained by algorithmic content curated to her beliefs and preferences. That has a number of implications. For one, it means that preexisting biases are reconfirmed, world-views are reinforced, and our tolerance for opposition diminishes. And as we have seen, it also makes us more susceptible to misinformation.
Social Media Pulled into Politics
Mark Zuckerberg stands accused of inadvertently influencing the 2016 presidential election through the circulation of fake news, which made its way into Facebook’s trending section and thereby swayed the opinions of countless voters. This has prompted intense introspection and a reassessment of personalization algorithms across the industry.
The fake news purposefully preyed on the unsuspecting. Scammers crafted stories designed to push emotional buttons and make money: the enraged and fearful among us generated comments, clicks, and shares, and that heavy traffic was quickly converted into easy cash.
When the theory first circulated that fake news had influenced the election, Zuckerberg immediately dismissed it as a “crazy idea.” However, BuzzFeed quickly published an inconvenient truth: false election news stories garnered more engagement on Facebook than the top election stories from 19 major news outlets combined.
In one particularly notable instance, a group of Macedonians created over 100 pro-Trump websites to spread false news with expressly financial motives. The business model disregarded facts and exploited fear and political bias to generate a substantial sum of money.
It seems that Zuckerberg and company have been a little naïve about the responsibilities that come with publishing news stories to over 1.8 billion people.
“There’s so much active misinformation, and it’s packaged very well, and it looks the same when you see it on a Facebook page, or you turn on your television” - President Obama
A week later, Zuckerberg changed his stance and announced that Facebook was already working on ways to penalize and suppress misinformation in news feeds. But the problem is more complicated than many people think. Restricting the sharing of opinions that do not come from reputable news outlets is fraught as well, and demands an assessment of what constitutes factual or reliable news in the first place.
Many people reading this use social media as their primary source of news, with breaking-story notifications rolling in on their smartphones throughout the day. Advances in machine learning and artificial intelligence are making it ever easier to spoon-feed the right message to receptive audiences, based on stored personal data about their likes, dislikes, and political views.
Algorithms that deliver personalized content based on our personal information are dangerous to the health of a democracy, arguably more so than fake news itself. Personalized content filters out ideas that challenge our current worldview, prioritizing our comfort over discourse.
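To see how little machinery this takes, the feedback loop can be illustrated with a toy scoring function (a deliberately naive sketch with hypothetical names and weights, not any platform’s actual algorithm):

```python
# A naive sketch of engagement-driven personalization. Stories carry
# viewpoint tags, and the feed ranks each story by how closely it matches
# the user's recorded preferences -- so dissenting views sink to the bottom.

def rank_feed(stories, user_preferences):
    """Sort stories by affinity to the user's stored preferences."""
    def affinity(story):
        return sum(user_preferences.get(tag, 0.0) for tag in story["tags"])
    return sorted(stories, key=affinity, reverse=True)

stories = [
    {"title": "Candidate A praised", "tags": ["pro_a"]},
    {"title": "Candidate A criticized", "tags": ["anti_a"]},
    {"title": "Neutral policy analysis", "tags": ["policy"]},
]

# A user whose clicks have taught the system they prefer pro-A content.
prefs = {"pro_a": 1.0, "anti_a": -1.0, "policy": 0.1}

feed = rank_feed(stories, prefs)
# The dissenting story lands last, and the bubble reinforces itself:
# each click on the top results further skews the stored preferences.
```

Nothing here is malicious; the narrowing is an emergent property of optimizing for what the user already likes.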
True, fake news is a big problem worthy of consideration; but so is the echo chamber, all the more so given its predominance in the marketing industry. In our effort to give people what they want, we should beware of insulating them, lest the result be an erosion of the discourse democracy depends on.