Philosopher Karl Popper famously asked how to tell science and pseudoscience apart. His answer—falsifiability—hasn’t aged well, but the effort lives on.
Jettisoning falsifiability won’t solve our initial problem, however: demarcation is simply inevitable. Scientists have finite time and therefore must select which topics are worth working on and which are not: this implies some kind of demarcation. Indeed, there seems to be a broad consensus about which doctrines count as fringe, although debate remains about gray areas.
Attached are some passages from the book Cribsheet by Emily Oster, an economist who wrote a data-driven guide to parenting. I put together some interesting passages from the introduction and from one of the chapters that nicely contextualize data-driven decision-making, what makes a good study, the limits of those studies, and the ultimate uncertainty of all knowledge produced from data.
Meaningful connections to constructing knowledge and data collection in the human sciences (particularly economics), the natural sciences, and cognitive biases. Also deals well with the problem of distinguishing correlation from causation.
Generally great book for parenting, not just for its TOK connections.
How can you identify a good study? This is a hard question. Some things you can see directly. Certain approaches are better than others – randomized trials, for example, are usually more compelling than other designs. Large studies tend, on average, to be better. More studies confirming the same thing tend to increase confidence, although not always – sometimes they all have the same biases in their results.
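That last caveat – that agreement across studies means little if they share a bias – can be sketched with a toy simulation. Everything here (the true effect, the bias size, the sample sizes) is an illustrative assumption, not taken from the book:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 1.0
SHARED_BIAS = 0.5  # hypothetical confounder common to every study

def run_study(n, bias=0.0):
    """Simulate one study: the average of n noisy observations of the effect."""
    return statistics.mean(random.gauss(TRUE_EFFECT + bias, 1.0) for _ in range(n))

# Fifty independent, unbiased studies: the pooled estimate homes in on the truth.
unbiased = [run_study(200) for _ in range(50)]

# Fifty studies that all share the same bias: piling up more of them just
# makes us more confident in the wrong answer.
biased = [run_study(200, bias=SHARED_BIAS) for _ in range(50)]

print(round(statistics.mean(unbiased), 2))  # close to 1.0
print(round(statistics.mean(biased), 2))    # close to 1.5, not 1.0
```

The point of the sketch is that replication averages away independent noise but not a systematic error that every study inherits – which is why study design matters and not just study count.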
Passages from Cribsheet by Emily Oster
For a century, critics of all political stripes have challenged the role of science in society. Repairing distrust today requires confronting those arguments head on.
Arguments over science underlie some of our most divisive and consequential policy debates. From climate change to fracking, abortion to genetically modified foods—and much else besides—contemporary political battles generate disputes over the legitimacy of scientific theories, methodologies, institutions, concepts, and even facts. In this context, scholars, citizens, and policymakers must think carefully about science and its cultural and political ramifications. The prevailing views on these matters will significantly determine our future—and perhaps even our survival as a species. And to understand why science is so widely distrusted in the United States, it is essential to understand how that attitude has arisen.
There are a bunch of great articles from the Boston Review about science topics.
What Makes Science Trustworthy
The “scientific method” of high school textbooks does not exist. But there are scientific methods, and they play an essential role in making scientific knowledge reliable.
The more certain someone is about covid-19, the less you should trust them
Acknowledging uncertainty a little more might improve not only the atmosphere of the debate and the science, but also public trust. If we publicly bet the reputational ranch on one answer, how open-minded can we be when the evidence changes?
Meaningful discussions around the production and utility of scientific knowledge, interdisciplinary knowledge, and the limitations of expertise.
It is, moreover, true that scientific consensus is often fleeting and regularly overturned, and that, in any case, consensus is neither unanimity nor a marker of infallibility. But the problem that we raise would remain a problem even if scientists were unanimous and infallible in their respective fields, and omniscient about particular circumstances of time and place…
When the phenomena of multiple scientific fields interact, such as when the health costs of a virus must be traded off against the economic and other costs of a lockdown, policymakers can turn to experts about isolated phenomena. They can ask epidemiologists to weigh in on epidemiology, infectious disease specialists on infectious disease, and economists on economics. But there are no experts on how these phenomena interact, or on how to weigh one against another.
How real are the equations with which we represent nature?
Physicists’ theories work. They predict the arc of planets and the flutter of electrons, and they have spawned smartphones, H-bombs and—well, what more do we need? But scientists, and especially physicists, aren’t just seeking practical advances. They’re after Truth. They want to believe that their theories are correct—exclusively correct—representations of nature. Physicists share this craving with religious folk, who need to believe that their path to salvation is the One True Path.
But can you call a theory true if no one understands it?
The reason is simple: most of us, even those of us who are scientists ourselves, lack the relevant scientific expertise needed to adequately evaluate that research on our own. In our own fields, we are aware of the full suite of data, of how those puzzle pieces fit together, and of where the frontiers of our knowledge are…
There’s an old saying that I’ve grown quite fond of recently: you can’t reason someone out of a position they didn’t reason themselves into. When most of us “research” an issue, what we are actually doing is:
formulating an initial opinion the first time we hear about something,
evaluating everything we encounter after that through that lens of our gut instinct,
finding reasons to think positively about the portions of the narrative that support or justify our initial opinion,
and finding reasons to discount or otherwise dismiss the portions that detract from it.
Very well drawn and well explained. Each step listed here raises interesting questions, discussions, and limitations, but it is nonetheless a good visual introduction.
Related, the problem with peer review.
The problem with peer review is the peers. Who are “the peers” of four M.D.’s writing up an observational study? Four more M.D.’s who know just as little about the topic. Who are “the peers” of a sociologist who likes to bullshit about evolutionary psychology but who doesn’t know much about the statistics of sex ratios?