This raises a tough question. The ability to reason is meant to be humanity’s supreme attribute, the characteristic that most sets us apart from other animals. Why, then, has evolution endowed us with a tool so faulty that, if you bought it from a shop, you’d send it back? The French evolutionary psychologists Hugo Mercier and Dan Sperber have offered an intriguing answer to this question. If our reasoning capacity is so bad at helping us as individuals figure out the truth, they say, that’s because truth-seeking isn’t its function. Instead, human reason evolved because it helps us to argue more effectively.
Good Q&A that breaks down conspiratorial thinking. At the bottom is a link for the really well done “Conspiracy Theory Handbook.”
Conspiratorial videos and websites about COVID-19 are going viral. Here’s how one of the authors of “The Conspiracy Theory Handbook” says you can fight back. One big takeaway: Focus your efforts on people who can hear evidence and think rationally.
How do we prevent the spread of conspiracy theories?
By trying to inoculate the public against them. Telling the public ahead of time: Look, there are people who believe these conspiracy theories. They invent this stuff. When they invent it, they exhibit these characteristics of misguided cognition. You can go through the traits we mention in our handbook, like incoherence, immunity to evidence, overriding suspicion, and connecting random dots into a pattern. The best thing to do is tell the public how they can spot conspiracy theories and how they can protect themselves.
Interesting cartoon that explains the dangers of fake news and how to combat it in your own mind. Unfortunately, I am skeptical about the value of laying out such processes to deal with this problem. How can you stop someone from being “fooled” into believing something they already believe, something that confirms and conforms to their deeper worldview? The deeper issue is motivated reasoning rather than an ignorance of how to handle new information. All that being said, this is a fun cartoon; there is more than just the one panel featured below, so click on the image for the full cartoon.
When you encounter a potential risk, your brain does a quick search for past experiences with it. If it can easily pull up multiple alarming memories, then your brain concludes the danger is high. But it often fails to assess whether those memories are truly representative.
A classic example is airplane crashes.
If two happen in quick succession, flying suddenly feels scarier — even if your conscious mind knows that those crashes are a statistical aberration with little bearing on the safety of your next flight.
For people living through a ruinous financial crisis or devastating climate change — or even through rapid social change that has no material effect on their lives — it can be hard to make sense of a cascade of events that seem to have no plainly evident causal chain, or even identifiable human authors. How do you account for a world we’re meant to master but whose workings are so complex they seem essentially opaque?
“Human cognition is inseparable from the unconscious emotional responses that go with it.”
In theory, resolving factual disputes should be relatively easy: Just present the evidence of a strong expert consensus. This approach succeeds most of the time when the issue is, say, the atomic weight of hydrogen.
But things don’t work that way when the scientific consensus presents a picture that threatens someone’s ideological worldview. In practice, it turns out that one’s political, religious, or ethnic identity quite effectively predicts one’s willingness to accept expertise on any given politicized issue.
How do we make better use of this piecemeal information? Computers are great at spotting patterns—but that’s just correlation. In the last few years, computer scientists have invented a handful of algorithms that can identify causal relations within single data sets. But focusing on single data sets is like looking through keyholes. What’s needed is a way to take in the whole view.
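The correlation-versus-causation gap the paragraph above points to can be made concrete with a small sketch. This is a hypothetical toy example of my own, not one of the causal-discovery algorithms the article alludes to: a hidden confounder Z drives both X and Y, so X and Y correlate strongly even though neither causes the other, and holding Z roughly fixed makes the correlation vanish.

```python
# Toy demonstration (hypothetical, stdlib only): a confounder Z causes
# both X and Y, producing a spurious X-Y correlation that disappears
# once we condition on Z.
import random


def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


random.seed(0)
z = [random.gauss(0, 1) for _ in range(10_000)]   # hidden confounder
x = [zi + random.gauss(0, 0.5) for zi in z]       # Z -> X
y = [zi + random.gauss(0, 0.5) for zi in z]       # Z -> Y (X never touches Y)

# Pattern-spotting alone: X and Y look strongly related.
r_marginal = pearson(x, y)

# "Condition" on Z by keeping only samples where Z is nearly constant;
# within that slice, the X-Y relationship evaporates.
subset = [(xi, yi) for xi, yi, zi in zip(x, y, z) if abs(zi) < 0.05]
r_conditional = pearson([s[0] for s in subset], [s[1] for s in subset])

print(f"marginal r = {r_marginal:.2f}, conditional r = {r_conditional:.2f}")
```

The point is the one the article makes: an algorithm that only measures association will confidently report that X predicts Y; telling whether X *causes* Y requires reasoning about (or intervening on) the wider system that generated the data.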
May need some context on the fact that Data is an android without emotions to help explain the clip, but it mostly speaks for itself. Don’t think I’d use it with students, but it explains some important concepts succinctly.
Developing deepfake detection technology is important, but it’s only part of the solution. It is the human factor — weaknesses in our psychology, not the technical sophistication of the fakes — that makes deepfakes so effective. New research hints at how foundational the problem is.
The biggest threat of deepfakes isn’t the deepfakes themselves
“Deepfakes do pose a risk to politics in terms of fake media appearing to be real, but right now the more tangible threat is how the idea of deepfakes can be invoked to make the real appear fake,” says Henry Ajder, one of the authors of the report. “The hype and rather sensational coverage speculating on deepfakes’ political impact has overshadowed the real cases where deepfakes have had an impact.”