The Quest to Tell Science from Pseudoscience

Philosopher Karl Popper famously asked how to tell the two apart. His answer—falsifiability—hasn’t aged well, but the effort lives on.

Jettisoning falsifiability won’t solve our initial problem, however: demarcation is simply inevitable. Scientists have finite time and therefore must select which topics are worth working on and which are not: this implies some kind of demarcation. Indeed, there seems to be a broad consensus about which doctrines count as fringe, although debate remains about gray areas.

https://bostonreview.net/science-nature-philosophy-religion/michael-d-gordin-quest-tell-science-pseudoscience

Passages from “Cribsheet” by Emily Oster on the use and limitations of data and studies on parenting

Attached are some passages from the book Cribsheet by Emily Oster, an economist who wrote a data-driven guide to parenting. I put together some interesting passages from the introduction and from one of the chapters that does a nice job of contextualizing data-driven decision-making, what makes a good study, the limits of those studies, and the ultimate uncertainty of all the knowledge produced using data.

There are meaningful connections to constructing knowledge and data collection in the human sciences (particularly economics), the natural sciences, and cognitive biases. It also deals well with the problem of sorting out correlation from causation.

Generally great book for parenting, not just for its TOK connections.

How can you identify a good study? This is a hard question. Some things you can see directly. Certain approaches are better than others – randomized trials, for example, are usually more compelling than other designs. Large studies tend, on average, to be better. More studies confirming the same thing tend to increase confidence, although not always – sometimes they all have the same biases in their results.
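Oster's last caveat can be made concrete with a small simulation (not from the book; a minimal sketch that assumes every study shares one systematic bias). Larger samples shrink random error, but averaging many studies never removes an error they all share:

```python
import random

random.seed(0)

TRUE_EFFECT = 1.0
SHARED_BIAS = 0.5  # hypothetical systematic error common to every study's design


def run_study(n):
    """Simulate one study: the mean of n noisy, biased observations."""
    samples = [TRUE_EFFECT + SHARED_BIAS + random.gauss(0, 2) for _ in range(n)]
    return sum(samples) / n


def spread(xs):
    """Standard deviation of a list of study results."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5


# Bigger samples -> less random scatter between studies...
small = [run_study(20) for _ in range(1000)]
large = [run_study(2000) for _ in range(1000)]
print(spread(small), ">", spread(large))

# ...but pooling 100 studies still lands near TRUE_EFFECT + SHARED_BIAS,
# not near TRUE_EFFECT: the shared bias does not average out.
meta = sum(run_study(500) for _ in range(100)) / 100
print(meta)
```

The point of the toy model: replication multiplies independent random errors away, but it cannot correct a flaw that every study's design has in common.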

Passages from Cribsheet by Emily Oster


How Americans Came to Distrust Science

For a century, critics of all political stripes have challenged the role of science in society. Repairing distrust today requires confronting those arguments head on.

Arguments over science underlie some of our most divisive and consequential policy debates. From climate change to fracking, abortion to genetically modified foods—and much else besides—contemporary political battles generate disputes over the legitimacy of scientific theories, methodologies, institutions, concepts, and even facts. In this context, scholars, citizens, and policymakers must think carefully about science and its cultural and political ramifications. The prevailing views on these matters will significantly determine our future—and perhaps even our survival as a species. And to understand why science is so widely distrusted in the United States, it is essential to understand how that attitude has arisen.

http://bostonreview.net/science-nature/andrew-jewett-how-americans-came-distrust-science

There are a bunch of great articles from the Boston Review about science topics.

What Makes Science Trustworthy

The “scientific method” of high school textbooks does not exist. But there are scientific methods, and they play an essential role in making scientific knowledge reliable.

http://bostonreview.net/science-nature-philosophy-religion/philip-kitcher-what-makes-science-trustworthy

Other articles

http://bostonreview.net/tags/science-and-technology

There Are No Experts On That for Which We Really Need Experts

Meaningful discussions around the concept of the production and utility of scientific knowledge, interdisciplinary knowledge, and the limitations of expertise.

It is, moreover, true that scientific consensus is often fleeting and regularly overturned, and that, in any case, consensus is neither unanimity nor a marker of infallibility. But the problem that we raise would remain a problem even if scientists were unanimous and infallible in their respective fields, and omniscient about particular circumstances of time and place…

When the phenomena of multiple scientific fields interact, such as when it is necessary to trade off the health costs of a virus against the economic and other costs of a lockdown, policymakers can turn to experts about isolated phenomena. But there are no experts about the interaction of different kinds of phenomena or about the proper weighting of some against others. Policymakers can ask epidemiologists to weigh in on epidemiology, infectious disease specialists to weigh in on infectious disease, and economists to weigh in on economics. But there are no experts about how these subjects interact or how to balance them.


https://www.aier.org/article/there-are-no-experts-on-that-for-which-we-really-need-experts/

Is the Schrödinger Equation True? Just because a mathematical formula works does not mean it reflects reality

How real are the equations with which we represent nature?


Physicists’ theories work. They predict the arc of planets and the flutter of electrons, and they have spawned smartphones, H-bombs and—well, what more do we need? But scientists, and especially physicists, aren’t just seeking practical advances. They’re after Truth. They want to believe that their theories are correct—exclusively correct—representations of nature. Physicists share this craving with religious folk, who need to believe that their path to salvation is the One True Path.

But can you call a theory true if no one understands it?

https://www.scientificamerican.com/article/is-the-schroedinger-equation-true1/
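For reference (my addition, not from the article): the equation the title asks about is the time-dependent Schrödinger equation, which says that the Hamiltonian operator governs how a quantum state evolves in time:

```latex
i\hbar \,\frac{\partial}{\partial t}\,\Psi(\mathbf{r},t) = \hat{H}\,\Psi(\mathbf{r},t)
```

The article's question is whether a formula like this, however predictively successful, should be read as a literal description of reality.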

AI is wrestling with a replication crisis

Science is built on a bedrock of trust, which typically involves sharing enough details about how research is carried out to enable others to replicate it, verifying results for themselves. This is how science self-corrects and weeds out results that don’t stand up. Replication also allows others to build on those results, helping to advance the field. Science that can’t be replicated falls by the wayside.

At least, that’s the idea. In practice, few studies are fully replicated because most researchers are more interested in producing new results than reproducing old ones. But in fields like biology and physics—and computer science overall—researchers are typically expected to provide the information needed to rerun experiments, even if those reruns are rare.
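One concrete piece of "information needed to rerun experiments" in computational work is control over randomness. A minimal, hypothetical sketch (the function name, seed value, and toy experiment are illustrative, not from the article):

```python
import random


def run_experiment(seed, trials=10_000):
    """A toy 'experiment' whose result depends only on its inputs and seed."""
    rng = random.Random(seed)  # local RNG: no hidden global state
    hits = sum(rng.random() < 0.5 for _ in range(trials))
    return hits / trials


# Reporting the seed (along with code and data) lets anyone rerun the
# experiment and check that they get exactly the same number.
first = run_experiment(seed=42)
rerun = run_experiment(seed=42)
assert first == rerun  # exact replication of the computational result
```

In machine learning the full recipe is longer (library versions, hardware, training data, hyperparameters), but the principle is the same: if the details aren't shared, the result can't be independently verified.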


Masks Work. Really. We’ll Show You How.

The public health debate on masks is settled, said Joseph G. Allen, director of the Healthy Buildings program at Harvard. When you wear a mask, “You protect yourself, you protect others, you prevent yourself from touching your face,” he said. And you signal that wearing a mask is the right thing to do.

With coronavirus cases still rising, wearing a mask is more important than ever. In this animation, you will see just how effective a swath of fabric can be at fighting the pandemic.

You Must Not ‘Do Your Own Research’ When It Comes To Science

The reason is simple: most of us, even those of us who are scientists ourselves, lack the relevant scientific expertise needed to adequately evaluate that research on our own. In our own fields, we are aware of the full suite of data, of how those puzzle pieces fit together, and what the frontiers of our knowledge are…

There’s an old saying that I’ve grown quite fond of recently: you can’t reason someone out of a position they didn’t reason themselves into. When most of us “research” an issue, what we are actually doing is:

  • formulating an initial opinion the first time we hear about something,
  • evaluating everything we encounter after that through that lens of our gut instinct,
  • finding reasons to think positively about the portions of the narrative that support or justify our initial opinion,
  • and finding reasons to discount or otherwise dismiss the portions that detract from it.

https://www.forbes.com/sites/startswithabang/2020/07/30/you-must-not-do-your-own-research-when-it-comes-to-science/#62cce43b535e

What is the process for “creating science”? Explained in images

Very well drawn and well explained. Each step listed here raises interesting questions and discussions as well as limitations but nonetheless is a good visual introduction.

https://www.flickr.com/photos/188445124@N06/with/49892267702/

Related, the problem with peer review.

The problem with peer review is the peers. Who are “the peers” of four M.D.’s writing up an observational study? Four more M.D.’s who know just as little about the topic. Who are “the peers” of a sociologist who likes to bullshit about evolutionary psychology but who doesn’t know much about the statistics of sex ratios?

Website Link