The Algorithm That Shapes What You Believe
There is an editor who has never attended a journalism school, has never sourced a quote, and has never lost sleep over a story. It processes billions of decisions per second, and it decides — more than any editor-in-chief alive — what the world believes. It is a recommendation algorithm. Its office is a data center in a nondescript industrial park. Its conflict-of-interest disclosures are: none.
The mechanics are deceptively simple. A platform observes your behavior — the posts you linger on, the videos you rewatch at 2 a.m., the outrage that makes your thumb stop scrolling — and constructs a probability model of your future attention. It then surfaces content that maximizes “engagement,” a metric that, as study after study has shown, correlates more strongly with emotional arousal than with factual accuracy.
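The logic is easier to see in miniature. Below is a deliberately simplified sketch in Python, with invented signals and weights rather than any platform’s actual code, of what ranking by predicted engagement amounts to: behavioral predictions go in, one score comes out, and nothing in the objective asks whether the content is true.

```python
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    # Hypothetical behavioral predictions a platform's models might produce for you.
    predicted_dwell_seconds: float   # how long the model expects you to linger
    predicted_rewatch_prob: float    # chance you watch it again
    predicted_reaction_prob: float   # chance you react, comment, or share


def engagement_score(post: Post) -> float:
    """Toy engagement objective: a weighted sum of predicted behaviors.

    Note what is absent: there is no term for accuracy, context, or harm,
    so anything that raises arousal raises the score.
    """
    return (
        0.4 * post.predicted_dwell_seconds / 60.0
        + 0.3 * post.predicted_rewatch_prob
        + 0.3 * post.predicted_reaction_prob
    )


def build_feed(candidates: list[Post], k: int = 10) -> list[Post]:
    """Surface the k posts the model expects you to engage with most."""
    return sorted(candidates, key=engagement_score, reverse=True)[:k]


if __name__ == "__main__":
    feed = build_feed([
        Post("nuanced-explainer", predicted_dwell_seconds=45,
             predicted_rewatch_prob=0.05, predicted_reaction_prob=0.10),
        Post("outrage-clip", predicted_dwell_seconds=30,
             predicted_rewatch_prob=0.40, predicted_reaction_prob=0.55),
    ])
    print([p.post_id for p in feed])  # the outrage clip ranks first
```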
The Engagement Trap
Internal Facebook research, disclosed in 2021 when documents were leaked to the Wall Street Journal, found that the company's own algorithms were amplifying divisive content because divisive content generated more reactions. Engineers proposed solutions. Executives declined to implement them. The business model, after all, is built on time-on-platform. Outrage keeps users on-platform.
This is not a bug. It is the logical output of an optimization function pointed at the wrong target. When you optimize for engagement without guardrails, you inadvertently optimize for the most primitive levers of human psychology: fear, anger, and tribal identification.
What emerges — demonstrably, measurably, across every major platform that has been studied — is a media environment calibrated for conflict. Nuance does not surface. Complexity does not trend. The algorithm is not biased left or right; it is biased toward whatever makes you feel something fast.
The Velocity of Lies
Research published in Science in 2018 analyzed 126,000 contested news stories shared on Twitter over a decade. False stories spread six times faster than true ones. They reached more people. They penetrated deeper into social networks. The reason? Novelty. False information, by definition, tends to be more surprising than true information — and surprise is a reliable engagement trigger.
The algorithm didn’t create human susceptibility to novelty. But it weaponized it at scale. A lie that once might have spread through a neighborhood now spreads through a nation within hours.
Dissecting the Pipeline
Modern recommendation systems are not monolithic. They are multi-stage pipelines: a candidate-retrieval layer that narrows a universe of possible content, a ranking model that orders those candidates by predicted engagement, and a post-ranking filter that applies policy constraints. The filter is where human judgment — or the absence of it — is most consequential.
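For readers who want the shape of that pipeline in code, here is a schematic sketch, illustrative rather than drawn from any platform’s implementation, of the three stages and of where the policy filter sits:

```python
from typing import Callable

Item = dict  # a content item, with whatever fields a platform tracks


def retrieve_candidates(corpus: list[Item], user_id: str, limit: int = 500) -> list[Item]:
    """Stage 1: narrow a universe of possible content to a manageable candidate set.
    Real systems use approximate nearest-neighbor search over user and item
    embeddings; this placeholder simply takes a slice."""
    return corpus[:limit]


def rank(candidates: list[Item], predict_engagement: Callable[[Item], float]) -> list[Item]:
    """Stage 2: order candidates by a model's predicted engagement, highest first."""
    return sorted(candidates, key=predict_engagement, reverse=True)


def policy_filter(ranked: list[Item], rules: list[Callable[[Item], bool]]) -> list[Item]:
    """Stage 3: the post-ranking filter. Each rule is a human-written constraint;
    with an empty rule list, the feed ships exactly as the model ranked it."""
    return [item for item in ranked if all(rule(item) for rule in rules)]


def recommend(corpus: list[Item], user_id: str,
              predict_engagement: Callable[[Item], float],
              rules: list[Callable[[Item], bool]], k: int = 10) -> list[Item]:
    candidates = retrieve_candidates(corpus, user_id)
    return policy_filter(rank(candidates, predict_engagement), rules)[:k]
```

The point is structural: the first two stages are automated and scale with compute; the third scales only with the rules and reviewers a company is willing to pay for.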
At most large platforms, the filter is under-resourced and politically fraught. Content moderation is expensive, culturally complex, and generates consistent backlash from whichever political faction feels disfavored. The result is a system whose guardrails are perpetually outpaced by the volume and creativity of bad actors.
Who Holds the Code Accountable?
The uncomfortable truth is that no journalistic institution has the access needed to audit these systems at scale. Researchers depend on data the platforms themselves choose to share, and can stop sharing. Regulatory frameworks have not kept pace. The EU’s Digital Services Act is the most ambitious attempt to date; its enforcement remains nascent.
What we are left with is an information environment shaped by systems whose inner workings are proprietary, whose incentives are misaligned with public interest, and whose reach exceeds that of any news organization in history.
The algorithm is not going away. The question is whether we develop the institutional mechanisms to hold it accountable before it finishes reshaping us.
Marcus Veil is Datum’s senior technology correspondent. He has reported on platform accountability for fifteen years.