Data Analytics and Sports: Game 6 of the 2020 World Series

The conflict over the proper role of data analytics has been an ongoing story in modern sports for years now, and the controversy only intensified with the way the final game of the 2020 World Series played out. Plenty of angry sports talking heads sounded off about how much they hated the Rays’ decision to pull their starting pitcher, and about the role analytics now play in sports more broadly. There is an interesting contrast here between what the “data says” and what your gut says.
 
This first link summarizes the situation, and the videos at the top of the linked page feature a few of those talk show hosts sounding off.
 
World Series 2020: Why the Tampa Bay Rays took Blake Snell out while he was mowing down the Los Angeles Dodgers
 
 
 
 
“Championships are not won by guidebooks”
 
Host Colin Cowherd makes some interesting points here about what he thinks are the appropriate times to approach the sport analytically.
 

 
 
Kevin Cash’s decision to pull Blake Snell, explained: How analytics overruled World Series context clues and cost the Rays
 
The long view of analytics points to reasons why pulling Snell was the right move, but it also ignores the individual context clues that Snell’s Game 6 domination provided.
 
 

Here are a few more resources on the topic. Depending on your students and their interests, some of these may be more useful than others.

Sports closer to art than science

CHANGING THE GAME: The Rise of Sports Analytics

Are super-nerds really ruining US sports? (The Guardian)

 

Making people aware of their implicit biases doesn’t usually change minds. But here’s what does work

Click here for other articles tagged “Implicit Bias”

Do the diversity or implicit bias training programs used by companies and institutions like Starbucks and the Oakland Police Department help reduce bias?

I’m at the moment very skeptical about most of what’s offered under the label of implicit bias training, because the methods being used have not been tested scientifically to indicate that they are effective. And they’re using it without trying to assess whether the training they do is achieving the desired results.

I see most implicit bias training as window dressing that looks good both internally to an organization and externally, as if you’re concerned and trying to do something. But it can be deployed without actually achieving anything, which makes it in fact counterproductive. After 10 years of doing this stuff and nobody reporting data, I think the logical conclusion is that if it was working, we would have heard about it.

https://www.pbs.org/newshour/nation/making-people-aware-of-their-implicit-biases-doesnt-usually-change-minds-but-heres-what-does-work

Plane travel only feels like it’s dangerous


Psychologists have long acknowledged that humans tend to “fear the rare”. Some 160,000 Americans die of heart disease each year, but few are paralyzed by the fear of clogged arteries. Instead, statistically unlikely events like earthquakes have an unreasonable grip on our imagination. While plane crash coverage may not give you a new phobia, experts regularly postulate that extensive news coverage of plane crashes may play with our perception of risk.

https://www.popsci.com/plane-risk-safest-travel/

Here’s another great chart


Coronavirus ‘Hits All the Hot Buttons’ for How We Misjudge Risk

When you encounter a potential risk, your brain does a quick search for past experiences with it. If it can easily pull up multiple alarming memories, then your brain concludes the danger is high. But it often fails to assess whether those memories are truly representative.

A classic example is airplane crashes.

If two happen in quick succession, flying suddenly feels scarier — even if your conscious mind knows that those crashes are a statistical aberration with little bearing on the safety of your next flight.

Art and reality: How accurately does “Euphoria” portray real teens’ lives? Does it matter?

The central point here is that the show Euphoria inaccurately portrays teenagers’ lives, which raises the question: Is there a responsibility that comes with creating artwork? Must it be accurate? Who decides?

The claim that the show is inaccurate is backed up with statistics, which raises the question: How can math/statistics help us acquire knowledge (or understand reality)?

People’s perceptions of teens’ behaviors seem to be generally inaccurate, well beyond this show. If presented with this article and the appropriate statistics, would people change their minds or perceptions of these issues? I’m not sure they would, which leads us to the questions: What is the role of intuition in acquiring knowledge? Can mathematical knowledge overcome intuitive beliefs?

This reminded me of an earlier article from the New York Times:

“The Kids Are More Than All Right”

https://well.blogs.nytimes.com/2012/02/02/the-kids-are-more-than-all-right/

Does mentoring “at risk” youth do more harm than good? The fascinating “Cambridge-Somerville Youth Study”

The “Cambridge-Somerville Youth Study” is a fascinating example of constructing knowledge in the human sciences and, more importantly, of using scientific methods to reach conclusions that seem completely counterintuitive: that mentorship programs can do more harm than not intervening at all in the lives of children considered at risk. The Freakonomics episode linked below goes into great detail about this.

Freakonomics Podcast: When Helping Hurts

Jump ahead to the 6 minute mark to hear about the “Cambridge-Somerville Youth Study”

http://freakonomics.com/podcast/when-helping-hurts/

Charities aren’t doing enough to determine if they’re really making a difference

First do no harm. It’s a basic tenet of medicine. When intervening in people’s lives – even with good intentions – we need to check whether we are doing them any damage. But sadly, this key principle from the medical profession has not been taken to heart by charities.

https://theconversation.com/charities-arent-doing-enough-to-determine-if-theyre-really-making-a-difference-95110

Similar to an older post “How do we measure the effectiveness of charitable giving?”

https://toktopics.com/2015/02/22/how-can-we-measure-the-effectiveness-in-charitable-giving/

 

The “Smirk seen ’round the world”

Interesting situation from a TOK perspective. Below is a collection of articles about the topic. They raise a lot of interesting questions about how we acquire knowledge and the relationships among the various ways of knowing. It also lends itself to asking about the primacy of some WOKs over others.

 

Download Lesson plan on “the smirk”

Download smirk articles handout

TOK Day 31 (daily student worksheet)

What’s also interesting is how impactful the image was. It seemed to be a perfect representation of how many people view the current moment in the United States; it fit neatly into prior assumptions about the world and spoke to a deeper truth. Interpreting and explaining this image, and fitting it into preexisting mental schemas, seemed pretty easy.

Once more and more videos started to emerge and the greater context became known, there were some interesting developments. Some people …

Continue reading “The “Smirk seen ’round the world””

“The Rationalist Delusion”: Limitations of reason in searching for truth

Knowledge Questions: What is the relationship between reason and intuition? Do we use reason or intuition more when determining truth?

The following are passages from Jonathan Haidt’s book, The Righteous Mind: Why Good People Are Divided by Politics and Religion

Anyone who values truth should stop worshipping reason. We all need to take a cold hard look at the evidence and see reasoning for what it is. The French cognitive scientists Hugo Mercier and Dan Sperber recently reviewed the vast research literature on motivated reasoning (in social psychology) and on the biases and errors of reasoning (in cognitive psychology). They concluded that most of the bizarre and depressing research findings make perfect sense once you see reasoning as having evolved not to help us find truth but to help us engage in arguments, persuasion, and manipulation in the context of discussions with other people…

In the same way, each individual reasoner is really good at one thing: finding evidence to support the position he or she already holds, usually for intuitive reasons. We should not expect individuals to produce good, open-minded, truth-seeking reasoning, particularly when self-interest or reputational concerns are in play.

This link is to the larger passage from the book.

https://theindependentwhig.com/haidt-passages/haidt/the-rationalist-delusion/