Centralized social media, as Jack Dorsey wrote, was a grand experiment in collective global human consciousness. It was a modern-day Tower of Babel, the Human Instrumentality Project from Neon Genesis Evangelion. Yes, it was a way to make some people rich, but it was also an experiment in uniting the human race. Perhaps if we could all just get in one room and talk to each other, if we could just get rid of our echo chambers and our filter bubbles, we would eventually reach agreement, and the old world of war and hate and misunderstanding would melt into memory.
That experiment failed. Humanity does not want to be a global hive mind. We are not rational Bayesian updaters who will eventually reach agreement; when we receive the same information, it tends to polarize us rather than unite us. Getting screamed at and insulted by people who disagree with you doesn’t take you out of your filter bubble; it makes you retreat further inside your bubble and reject the ideas of whoever is screaming at you. No one ever changed their mind from being dunked on; instead they all just doubled down and dunked harder. The hatred and toxicity of Twitter at times felt like the dying screams of human individuality, being crushed to death by the hive mind’s constant demands for us to agree with more people than we ever evolved to agree with.
Recommendations based on user preferences often reflect the biases of the world: in this case, the diversity problems that have long been apparent in media and modeling. Those biases have in turn shaped the world of online influencers, so that many of the most popular images are, by default, of people with lighter skin. An algorithm that learns only from your behavior inside such a filter bubble, where you mostly engage with whatever you are already shown, might conclude that you dislike people with darker skin. And it gets worse: recommendation algorithms are also known to have an anchoring effect, in which their output reinforces users’ unconscious biases and can even change their preferences over time.
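To make that feedback loop concrete, here is a minimal toy simulation of the anchoring effect described above. It is a sketch under invented, simplified assumptions, not a description of any real platform’s recommender: the starting skew, the click model, and the drift rate are all hypothetical numbers chosen purely for illustration.

```python
import random

# Toy simulation of a recommendation feedback loop ("anchoring effect").
# A recommender that learns only from clicks inside the feed it chose to
# show can amplify an initial skew, and repeated exposure can pull the
# user's own preference toward that skew. All values are hypothetical.

random.seed(42)

user_pref_A = 0.5          # user genuinely likes groups A and B equally at first
est_pref_A = 0.6           # recommender starts with a slight inherited skew toward A
clicks_A, clicks_B = 1, 1  # pseudo-counts so the estimate is never 0/0

for step in range(2000):
    # 1. Build the feed: show group A in proportion to the current estimate.
    shown_A = random.random() < est_pref_A

    # 2. The user clicks with probability given by their current preference
    #    for whichever group was shown.
    p_click = user_pref_A if shown_A else (1 - user_pref_A)
    clicked = random.random() < p_click

    # 3. The recommender updates only from what it showed and what was
    #    clicked; it never observes reactions to items it filtered out.
    if clicked:
        if shown_A:
            clicks_A += 1
        else:
            clicks_B += 1
        est_pref_A = clicks_A / (clicks_A + clicks_B)

    # 4. Anchoring: repeated exposure nudges the user's own preference a
    #    small step toward the composition of the feed.
    user_pref_A += 0.0005 * ((1.0 if shown_A else 0.0) - user_pref_A)

print(f"recommender's estimate of preference for A: {est_pref_A:.2f}")
print(f"user's preference for A after exposure:     {user_pref_A:.2f}")
```

Even though the simulated user starts out indifferent, in this toy setup both the recommender’s estimate and the user’s own preference tend to drift toward the group the feed already over-represents, which is the anchoring dynamic in miniature.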