Netflix Documentary: The Social Dilemma…and related articles

Here is a collection of articles and posts around the theme of Knowledge and Technology. Netflix recently released a documentary called “The Social Dilemma” (trailer linked below). It touches on some commonly discussed themes around the dangers of communications technology and social media.

What’s interesting is that even though people broadly agree the outcomes are problematic, they disagree about the root causes.

This is just a great line from a New York Times article:

The trouble with the internet, Mr. Williams says, is that it rewards extremes. Say you’re driving down the road and see a car crash. Of course you look. Everyone looks. The internet interprets behavior like this to mean everyone is asking for car crashes, so it tries to supply them. 

from: ‘The Internet Is Broken’: @ev Is Trying to Salvage It
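Purely to make Williams’ car-crash analogy concrete, here is a minimal, hypothetical sketch of a feed that ranks only by predicted engagement; it is not any real platform’s code, and the posts and click probabilities are invented.

```python
# A toy feed ranker, invented for illustration; the items and scores are made up.
posts = [
    {"title": "Local library extends hours", "predicted_clicks": 0.02},
    {"title": "Nuanced policy explainer", "predicted_clicks": 0.03},
    {"title": "Outrage bait headline", "predicted_clicks": 0.31},
    {"title": "Graphic car-crash footage", "predicted_clicks": 0.45},
]

def rank_feed(items):
    """Order the feed by predicted engagement alone -- the 'everyone looks' signal."""
    return sorted(items, key=lambda p: p["predicted_clicks"], reverse=True)

for post in rank_feed(posts):
    print(f"{post['predicted_clicks']:.2f}  {post['title']}")
# The most extreme items rise to the top: looking gets read as asking for more.
```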

 

From “The Social Dilemma Fails to Tackle the Real Issues in Tech,” which takes a critical view of the argument put forward in the film:

Focusing instead on how existing inequalities intersect with technology would have opened up space for a different and more productive conversation. These inequalities actually influence the design choices that the film so heavily focuses on—more specifically, who gets to make these choices.

https://slate.com/technology/2020/09/social-dilemma-netflix-technology.html

From “The Risk Makers: Viral hate, election interference, and hacked accounts: inside the tech industry’s decades-long failure to reckon with risk”

The internet’s “condition of harm” and its direct relation to risk is structural. The tech industry — from venture capitalists to engineers to creative visionaries — is known for its strike-it-rich Wild West individualistic ethos, swaggering risk-taking, and persistent homogeneity. Some of this may be a direct result of the industry’s whiteness and maleness. For more than two decades, studies have found that a specific subset of men, in the U.S. mostly white, with higher status and a strong belief in individual efficacy, are prone to accept new technologies with greater alacrity while minimizing their potential threats — a phenomenon researchers have called the “white-male effect,” a form of cognition that protects status. In the words of one study, the findings expose “a host of new practical and moral challenges for reconciling the rational regulation of risk with democratic decision making.”

https://onezero.medium.com/the-risk-makers-720093d41f01

 

Facebook is out of control. If it were a country it would be North Korea

This is a company that facilitated an attack on a US election by a foreign power, that live-streamed a massacre then broadcast it to millions around the world, and helped incite a genocide.

I’ll say that again. It helped incite a genocide. A United Nations report says the use of Facebook played a “determining role” in inciting hate and violence against Myanmar’s Rohingya, which has seen tens of thousands die and hundreds of thousands flee for their lives.

https://www.theguardian.com/technology/2020/jul/05/facebook-is-out-of-control-if-it-were-a-country-it-would-be-north-korea

The man who built a spyware empire says it’s time to come out of the shadows

The business he leads, NSO Group, is the world’s most notorious spyware company. It’s at the center of a booming international industry in which high-tech firms find software vulnerabilities, develop exploits, and sell malware to governments. The Israeli-headquartered company has been linked to high-profile incidents including the murder of Jamal Khashoggi and spying against politicians in Spain…

We’ve gone full circle, arriving back in a thick tangle of secrecy. Money is flowing, abuses keep happening, and the hacking tools are proliferating: no one disputes that.

But who is accountable when brutal authoritarians get their hands on cutting-edge spyware to use against opponents? An already shadowy world is getting darker, and answers are becoming harder to come by.

 

When Choosing What To Believe, People Often Choose Morality Over Hard Evidence

What happens when moral beliefs collide with documented evidence? For many people, it means doubling down on whichever complements their worldview.

The authors offer two models for this system of rationalization. In the first model, moral concerns shift the correct criteria for making judgments; for instance, by lowering the amount of hard evidence deemed sufficient to justify a particular belief. “Morality changes how much evidence [people] consider to be required to hold [a particular] belief in an evidentially-sound way,” the authors write.

If AI is going to help us in a crisis, we need a new kind of ethics

AI has the potential to save lives but this could come at the cost of civil liberties like privacy. How do we address those trade-offs in ways that are acceptable to lots of different people? We haven’t figured out how to deal with the inevitable disagreements.

AI ethics also tends to respond to existing problems rather than anticipate new ones. Most of the issues that people are discussing today around algorithmic bias came up only when high-profile things went wrong, such as with policing and parole decisions.

https://www.technologyreview.com/2020/06/24/1004432/ai-help-crisis-new-kind-ethics-machine-learning-pandemic/

The Drowning Child and the Expanding Circle

Old but classic thought experiment about ethics and our responsibilities to others. I have somehow not made meaningful use of this with my students, but now that Ethics is no longer its own AOK, maybe it’s time to find a place for it. Below is a selection from the full text; click the link below it to read the whole essay.

I am always struck by how few students challenge the underlying ethics of the idea that we ought to save the lives of strangers when we can do so at relatively little cost to ourselves. At the end of the nineteenth century WH Lecky wrote of human concern as an expanding circle which begins with the individual, then embraces the family and ‘soon the circle… includes first a class, then a nation, then a coalition of nations, then all humanity, and finally, its influence is felt in the dealings of man [sic] with the animal world’.1 On this basis the overwhelming majority of my students seem to be already in the penultimate stage – at least – of Lecky’s expanding circle. There is, of course, for many students and for various reasons a gap between acknowledging what we ought to do, and doing it;

https://www.utilitarian.net/singer/by/199704–.htm

Here is a version of the thought experiment presented as a series of questions and answers.

https://www.philosophyexperiments.com/singer/

If you want a video adaptation of it:

All of this connects to the concept of effective altruism.

The nonprofit GiveWell is “dedicated to finding outstanding giving opportunities and publishing the full details of our analysis to help donors decide where to give.” It tries to “determine how much good a given program accomplishes (in terms of lives saved, lives improved, etc.) per dollar spent.”

Read more about the organization and its recommended charities:

https://www.givewell.org/charities/top-charities
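As a rough sketch of the “good per dollar spent” arithmetic quoted above, the comparison below uses invented figures (they are not GiveWell’s estimates) just to show how a cost-effectiveness metric works.

```python
# Invented figures, for illustration only -- not GiveWell's estimates.
def cost_per_life_saved(total_cost_usd, lives_saved):
    """Cost-effectiveness expressed as dollars spent per life saved."""
    return total_cost_usd / lives_saved

programs = {
    "Hypothetical program A": cost_per_life_saved(1_000_000, 200),
    "Hypothetical program B": cost_per_life_saved(1_000_000, 8),
}

for name, cost in sorted(programs.items(), key=lambda kv: kv[1]):
    print(f"{name}: roughly ${cost:,.0f} per life saved")
# The comparison, not the absolute numbers, is the point: the same dollar can do
# very different amounts of good depending on where it goes.
```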

 

Who Should Be Saved First? Experts Offer Ethical Guidance

This article gets into a lot of the relevant issues of medical ethics and uses appropriate ethical language in the discussion. I’m working with this in my class today (remotely). Here’s the worksheet I’m using:

TOK day 53

This also connects to some questions that came up after Hurricane Katrina left a hospital in New Orleans facing similar decisions about life and death.

12 Katrina Hospital Ethics

Who Gets the Ventilator?

Lastly, there is a recent Freakonomics podcast that brings together a variety of perspectives on this very question.

Link to page

Italians over 80 ‘will be left to die’ as country overwhelmed by coronavirus

Please ignore the sensational headline; the article connects to many discussions around ethics and public policy. This is a form of the “trolley problem” playing out in real life, and it goes back to some of the choices faced by a hospital in New Orleans after Hurricane Katrina in 2005. When forced to make decisions about whose lives to save, how do we decide?

“The criteria for access to intensive therapy in cases of emergency must include age of less than 80 or a score on the Charlson comorbidity Index [which indicates how many other medical conditions the patient has] of less than 5.”

The ability of the patient to recover from resuscitation will also be considered.

One doctor said: “[Who lives and who dies] is decided by age and by the [patient’s] health conditions. This is how it is in a war.”

https://www.telegraph.co.uk/news/2020/03/14/italians-80-will-left-die-country-overwhelmed-coronavirus/
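To make the quoted decision rule concrete, here is a schematic sketch that simply mirrors the article’s wording (age under 80 or a Charlson index under 5, with the ability to recover also considered); it is an illustration of the quote, not clinical guidance.

```python
def eligible_for_intensive_therapy(age, charlson_index, likely_to_recover):
    """Mirror the criteria as quoted in the article (including its 'or'):
    age under 80 or a Charlson comorbidity index under 5, with the patient's
    ability to recover from resuscitation also taken into account."""
    meets_quoted_threshold = age < 80 or charlson_index < 5
    return meets_quoted_threshold and likely_to_recover

# Hypothetical patients, purely for illustration.
print(eligible_for_intensive_therapy(age=72, charlson_index=3, likely_to_recover=True))   # True
print(eligible_for_intensive_therapy(age=85, charlson_index=6, likely_to_recover=True))   # False
print(eligible_for_intensive_therapy(age=72, charlson_index=3, likely_to_recover=False))  # False
```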

Ethics and Public Policy: Some Ask a Taboo Question: Is America Overreacting to Coronavirus?

Calculating the economic costs of curtailing social interaction compared with the lives saved, he agreed, might yield a useful metric for policymakers. The U.S. government routinely performs such analyses when assessing new regulations, with the “statistical value of life” currently pegged by one government agency at about $9 million.

Still, Dr. Thunstrom asked, “Do we even want to look at that? Is it too callous?”
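As a back-of-the-envelope sketch of the metric the article describes, the calculation below monetizes lives saved with the roughly $9 million “statistical value of life” and weighs that against an economic cost; the lives-saved and cost figures are placeholders, not numbers from the article.

```python
VALUE_OF_STATISTICAL_LIFE_USD = 9_000_000  # the rough agency figure cited in the article

def net_benefit_usd(lives_saved, economic_cost_usd):
    """Monetized benefit of lives saved minus the economic cost of the policy."""
    return lives_saved * VALUE_OF_STATISTICAL_LIFE_USD - economic_cost_usd

# Placeholder inputs, chosen only to show how the metric can flip sign.
print(f"{net_benefit_usd(lives_saved=100_000, economic_cost_usd=500_000_000_000):,} USD")
print(f"{net_benefit_usd(lives_saved=10_000, economic_cost_usd=500_000_000_000):,} USD")
# Whether anyone should compute this at all is exactly Dr. Thunstrom's question.
```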

How technology is designed to bring out the worst in us

“Technology feels disempowering because we haven’t built it around an honest view of human nature,” says tech critic Tristan Harris.

https://www.vox.com/technology/2018/2/19/17020310/tristan-harris-facebook-twitter-humane-tech-time

What is “brain hacking”? Tech insiders on why you should care

Anderson Cooper: Is Silicon Valley programming apps or are they programming people?

Tristan Harris: Inadvertently, whether they want to or not, they are shaping the thoughts and feelings and actions of people. They are programming people. There’s always this narrative that technology’s neutral. And it’s up to us to choose how we use it. This is just not true.

https://www.cbsnews.com/news/brain-hacking-tech-insiders-60-minutes/