Attached are some passages from Cribsheet by Emily Oster, an economist who wrote a data-driven guide to parenting. I put together interesting passages from the introduction and from one of the chapters that do a nice job contextualizing data-driven decision making, what makes a good study, the limits of those studies, and the ultimate uncertainty of all knowledge produced from data.
The book makes meaningful connections to constructing knowledge and data collection in the human sciences (particularly economics) and the natural sciences, as well as to cognitive biases. It also deals well with the problem of distinguishing correlation from causation.
It's a great book on parenting generally, not just for its TOK connections.
How can you identify a good study? This is a hard question. Some things you can see directly. Certain approaches are better than others: randomized trials, for example, are usually more compelling than other designs. Large studies tend, on average, to be better. More studies confirming the same thing tend to increase confidence, although not always; sometimes they all share the same biases in their results.
Passages from Cribsheet by Emily Oster
The reason is simple: most of us, even those of us who are scientists ourselves, lack the relevant scientific expertise needed to adequately evaluate that research on our own. In our own fields, we are aware of the full suite of data, of how those puzzle pieces fit together, and of what the frontiers of our knowledge are…
There’s an old saying that I’ve grown quite fond of recently: you can’t reason someone out of a position they didn’t reason themselves into. When most of us “research” an issue, what we are actually doing is:
formulating an initial opinion the first time we hear about something,
evaluating everything we encounter after that through that lens of our gut instinct,
finding reasons to think positively about the portions of the narrative that support or justify our initial opinion,
and finding reasons to discount or otherwise dismiss the portions that detract from it.
An NYTimes Op-Doc exploring the psychological basis of our need for certainty and its pitfalls, narrated by psychologist Arie Kruglanski, who coined the term “cognitive closure.”
From 2016, but it still offers meaningful insight into our current moment in politics.
“People who are anxious because of the uncertainty that surrounds them are going to be attracted to messages that offer them certainty. The need for closure is the need for certainty. To have clear-cut knowledge. You feel that you need to stop processing too much information, stop listening to a variety of information and zero in on what, to you, appears to be the truth. The need for closure is absolutely essential but it can also be extremely dangerous.”
So, it could be that the effect is all in your head. It could be that the effect is real, whether it’s placebo pain relief or measurable weight loss. But either way, if your experience flies in the face of research results, you’re probably going to go with your experience. And Hitchcock says that could be a completely rational decision. If the cost of continuing (say, paying for a supplement) is small compared to the risk of discontinuing (and potentially giving up the perceived benefit), it makes sense to keep on keeping on.
Here are some other articles related to the natural sciences and diet:
“Human cognition is inseparable from the unconscious emotional responses that go with it.”
In theory, resolving factual disputes should be relatively easy: Just present the evidence of a strong expert consensus. This approach succeeds most of the time when the issue is, say, the atomic weight of hydrogen.
But things don’t work that way when the scientific consensus presents a picture that threatens someone’s ideological worldview. In practice, it turns out that one’s political, religious, or ethnic identity quite effectively predicts one’s willingness to accept expertise on any given politicized issue.
Deepfake detection technology is important, but it's only part of the solution. It is the human factor (weaknesses in our psychology), not the technical sophistication of the fakes, that makes deepfakes so effective. New research hints at how foundational the problem is.
The biggest threat of deepfakes isn’t the deepfakes themselves
“Deepfakes do pose a risk to politics in terms of fake media appearing to be real, but right now the more tangible threat is how the idea of deepfakes can be invoked to make the real appear fake,” says Henry Ajder, one of the authors of the report. “The hype and rather sensational coverage speculating on deepfakes’ political impact has overshadowed the real cases where deepfakes have had an impact.”
“Americans born in the United States are more murderous than undocumented immigrants. Fighting words, I know. But why? After all, that’s just what the numbers say.
“Still, be honest: you wouldn’t linger over a story with that headline. It’s “dog bites man.” It’s the norm. And norms aren’t news. Instead, you’ll see two dozen reporters flock to a single burning trash can during an Inauguration protest. The aberrant occurrence is the story you’ll read and the picture you’ll see. It’s news because it’s new.
Below is a link to the first in a series of New York Times videos examining the subject. It is related to the idea of intuition and how we acquire and process knowledge and information.
“While scientists have no clear understanding of the mechanisms that prevent the fact-resistant humans from absorbing data, they theorize that the strain may have developed the ability to intercept and discard information en route from the auditory nerve to the brain. “The normal functions of human consciousness have been completely nullified,” Logsdon said.”
“The Internet might very well have been designed for confirmation bias. If you have a theory, you’ll find some site purporting it to be true. (I’m constantly amazed at how many people post Natural News stories on my feed, as if anything on the site is valid.) Levitin notes that MartinLutherKing.org is run by a white supremacist group. Even experts get fooled: Reporter Jonathan Capehart published a Washington Post article “based on a tweet by a nonexistent congressman in a nonexistent district.””