Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend. ChatGPT’s melancholy role is that it is destined to imitate and can never have an authentic human experience, no matter how devalued and inconsequential the human experience may in time become.
That experiment failed. Humanity does not want to be a global hive mind. We are not rational Bayesian updaters who will eventually reach agreement; when we receive the same information, it tends to polarize us rather than unite us. Getting screamed at and insulted by people who disagree with you doesn’t take you out of your filter bubble — it makes you retreat back inside your bubble and reject the ideas of whoever is screaming at you. No one ever changed their mind from being dunked on; instead they all just doubled down and dunked harder. The hatred and toxicity of Twitter at times felt like the dying screams of human individuality, being crushed to death by the hive mind’s constant demands for us to agree with more people than we ever evolved to agree with.
The story of the laptop, what was on it, and how the story was dealt with (and blocked) has been around for almost two years now, but it is still worth exploring in a TOK context, with connections to several themes (Knower, Technology, Politics). How do our prior beliefs affect how we interpret new information? How do we decide whether a claim is credible? What responsibility do social media companies have to decide what is true? What are the consequences of so few companies having so much power over the spread of information?
I like this topic because it pushes my students to confront their own discomfort with the potential weaponization of the concept of fake news, but in a direction that suits their politics. The Twitter video of Sam Harris at the bottom, ironically, communicates what many people actually believe.
Here are a few articles that explain the controversy:
This was from a recent conversation with Sam Harris, for whom I normally have great respect. His defense of wide-ranging conspiracies to generate politically desirable outcomes is interesting, and it is a good example of consequentialist ethics.
Experts warn this is blurring the line between activism and vigilantism.
This new form of online activism is making some people do things they wouldn’t normally do, she adds, and many of those involved may not realize in the moment of their anger that this behavior is not only unethical but illegal.
“What is the difference between public shaming and vigilantism?” she asks. “And what’s the difference between ‘good’ vigilantism and ‘bad’ vigilantism?”
“Our brains are not built for the truth,” David Linden, a professor of neuroscience at the Johns Hopkins University School of Medicine, told me earlier this year. “Our brains weren’t even built to read. Our brains weren’t. Evolution is a very slow process. It takes many, many, many, many, many generations. And the change in technology and particularly in information is so rapid that there’s no way for evolution to keep up.”…
We choose whom to believe and whom to trust, often before we realize we are doing it. It is no wonder our disinformation battles can feel so personal, especially within families.
Recommendations based on user preferences often reflect the biases of the world—in this case, the diversity problems that have long been apparent in media and modeling. Those biases have in turn shaped the world of online influencers, so that many of the most popular images are, by default, of people with lighter skin. An algorithm that interprets your behavior inside such a filter bubble might assume that you dislike people with darker skin. And it gets worse: recommendation algorithms are also known to have an anchoring effect, in which their output reinforces users’ unconscious biases and can even change their preferences over time.
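The feedback loop described above can be made concrete with a toy simulation. This is not any real platform's algorithm; the function name, parameters (`true_pref`, `lr`), and the two-category setup are all invented for illustration. The sketch shows how a recommender that only learns from engagement with what it chose to show can get anchored by early noise and lock a user into the "wrong" bubble:

```python
import random

def simulate_feedback_loop(true_pref=0.6, steps=200, lr=0.2, seed=0):
    """Toy two-category recommender. `estimate` is the system's guess
    at how much the user prefers category A. Each round it shows only
    the category its current estimate favors, then updates the estimate
    from engagement with that one category. Because it never gathers
    evidence about the unshown side, an early unlucky streak can anchor
    the estimate on the wrong side and keep it there."""
    rng = random.Random(seed)
    estimate = 0.5  # the recommender starts neutral
    history = []
    for _ in range(steps):
        show_a = estimate >= 0.5
        if show_a:
            engaged = rng.random() < true_pref        # user genuinely prefers A
            target = 1.0 if engaged else 0.0          # evidence about A only
        else:
            engaged = rng.random() < (1 - true_pref)  # engagement with B is less likely
            target = 0.0 if engaged else 1.0          # evidence about B only
        estimate += lr * (target - estimate)          # exponential moving average
        history.append(estimate)
    return history
```

Running this across many seeds shows that even a user who mildly prefers category A (`true_pref=0.6`) sometimes ends up permanently shown category B: once the estimate drifts below 0.5, the system stops collecting evidence about A at all, which is the self-reinforcing dynamic the article describes.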
The moral distance a society creates from the killing done in its name will increase the killing done in its name. We allow technology to increase moral distance; thus, technology increases the killing. More civilians than combatants die in modern warfare, so technology increases worldwide civilian murder at the hands of armies large and small.
Since its inception, the perennial thorn in Facebook’s side has been content moderation: that is, deciding what you and I are allowed to post on the site and what we’re not. Missteps by Facebook in this area have fueled everything from a genocide in Myanmar to viral disinformation surrounding politics and the coronavirus. However, just this past year, conceding its failings, Facebook shifted its approach, creating an independent body of twenty jurors that will make the final call on many of its thorniest decisions. This body has been called Facebook’s Supreme Court.
So today, in collaboration with the New Yorker magazine and the New Yorker Radio Hour, we explore how this body came to be, what power it really has and how the consequences of its decisions will be nothing short of life or death.
This article does a fascinating job of evaluating what the author calls “common knowledge,” similar to the TOK concept of shared knowledge, as a way to discuss the general idea of the role of communities in forming beliefs and how modern technologies change the nature of common knowledge.
It’s only with the growth of communities of people interacting that most people gain such courage in their convictions to defy that which authoritative sources (media, political, corporate) deem to be acceptable narratives and acceptable norms. These communities generate more than validation of one’s preexisting beliefs. They generate the common knowledge that I know that many others feel the same as I do, others to whom I am joined in a community.
Science is built on a bedrock of trust, which typically involves sharing enough details about how research is carried out to enable others to replicate it, verifying results for themselves. This is how science self-corrects and weeds out results that don’t stand up. Replication also allows others to build on those results, helping to advance the field. Science that can’t be replicated falls by the wayside.
At least, that’s the idea. In practice, few studies are fully replicated because most researchers are more interested in producing new results than reproducing old ones. But in fields like biology and physics—and computer science overall—researchers are typically expected to provide the information needed to rerun experiments, even if those reruns are rare.