When you encounter a potential risk, your brain does a quick search for past experiences with it. If it can easily pull up multiple alarming memories, then your brain concludes the danger is high. But it often fails to assess whether those memories are truly representative.
A classic example is airplane crashes.
If two happen in quick succession, flying suddenly feels scarier — even if your conscious mind knows that those crashes are a statistical aberration with little bearing on the safety of your next flight.
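The "statistical aberration" point can be made concrete with a little Poisson arithmetic. The crash rate below is an assumed round number for illustration only, not real aviation data:

```python
# Illustrative arithmetic (assumed rate, not real aviation statistics):
# if fatal airliner crashes worldwide averaged ~10 per year, how likely
# is it that a given 30-day window contains two or more, purely by chance?
import math

rate_per_year = 10               # assumed average rate (illustration only)
lam = rate_per_year * 30 / 365   # expected crashes in one 30-day window

# Poisson probability of seeing >= 2 events in a single window:
# P(>=2) = 1 - P(0) - P(1) = 1 - e^(-lam) * (1 + lam)
p_two_or_more = 1 - math.exp(-lam) * (1 + lam)
print(f"P(>=2 crashes in one 30-day window) = {p_two_or_more:.3f}")
```

Even at that modest rate, roughly one month in five would contain a two-crash cluster by chance alone, which is why back-to-back crashes say little about the safety of the next flight.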
For people living through a ruinous financial crisis or devastating climate change, or even through rapid social change that has no material effect on their lives, it can be hard to make sense of a cascade of events with no plainly evident causal chain, or even identifiable human authors. How do you account for a world we're meant to master but whose workings are so complex as to seem essentially opaque?
“Human cognition is inseparable from the unconscious emotional responses that go with it.”
In theory, resolving factual disputes should be relatively easy: Just present the evidence of a strong expert consensus. This approach succeeds most of the time when the issue is, say, the atomic weight of hydrogen.
But things don’t work that way when the scientific consensus presents a picture that threatens someone’s ideological worldview. In practice, it turns out that one’s political, religious, or ethnic identity quite effectively predicts one’s willingness to accept expertise on any given politicized issue.
How do we make better use of this piecemeal information? Computers are great at spotting patterns—but that’s just correlation. In the last few years, computer scientists have invented a handful of algorithms that can identify causal relations within single data sets. But focusing on single data sets is like looking through keyholes. What’s needed is a way to take in the whole view.
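The core trick behind those causal-discovery algorithms is conditional independence testing: two variables that are correlated marginally, but become independent once a third variable is controlled for, point to mediation rather than a direct link. A minimal sketch, using simulated data and a simple partial-correlation test (the variable names and thresholds are illustrative assumptions, not any specific published algorithm):

```python
# Sketch of the conditional-independence idea behind constraint-based
# causal discovery (PC-algorithm style). Data and names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Simulate a causal chain X -> Y -> Z.
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
z = 0.8 * y + rng.normal(size=n)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def partial_corr(a, b, c):
    # Correlation of a and b after regressing out c from each.
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return corr(ra, rb)

# X and Z are strongly correlated on their own...
print(f"corr(X, Z)      = {corr(x, z):.3f}")
# ...but nearly independent once Y is controlled for: the signature
# that Y mediates the X -> Z relationship rather than X causing Z.
print(f"pcorr(X, Z | Y) = {partial_corr(x, z, y):.3f}")
```

Real algorithms run many such tests over all variable pairs to prune and orient a causal graph; the "keyhole" limitation in the quote is that each run still sees only one data set's variables.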
The clip may need some context (Data is an android without emotions), though it largely speaks for itself. I don't think I'd use it with students, but it explains some important concepts succinctly.
Deepfake detection technology is important, but it's only part of the solution. It is the human factor, weaknesses in our psychology, not technical sophistication, that makes deepfakes so effective. New research hints at how foundational the problem is.
The biggest threat of deepfakes isn’t the deepfakes themselves
“Deepfakes do pose a risk to politics in terms of fake media appearing to be real, but right now the more tangible threat is how the idea of deepfakes can be invoked to make the real appear fake,” says Henry Ajder, one of the authors of the report. “The hype and rather sensational coverage speculating on deepfakes’ political impact has overshadowed the real cases where deepfakes have had an impact.”
Science communication has lost its sense of empathy and misunderstands how fear can alter a person’s belief system.
When you feel so fundamentally disenfranchised, it's comforting to concoct a fictional universe that systematically denies you the right cards. It gives you something to fight against and a sense of self-determination.
It provides an “us and them” narrative that allows you to conceive of yourself as a little David raging against a rather haughty, intellectual establishment Goliath.