Netflix Documentary: The Social Dilemma…and related articles

Here are a couple of posts around the theme of Knowledge and Technology. Netflix recently released a documentary called “The Social Dilemma” (trailer linked below). It touches on some commonly discussed concerns about the dangers of communications technologies and social media. 

What’s interesting is that even where people agree the outcomes are problematic, they disagree about the root causes. 

This is just a great line from a New York Times article:

The trouble with the internet, Mr. Williams says, is that it rewards extremes. Say you’re driving down the road and see a car crash. Of course you look. Everyone looks. The internet interprets behavior like this to mean everyone is asking for car crashes, so it tries to supply them. 

from: ‘The Internet Is Broken’: @ev Is Trying to Salvage It

 

From “The Social Dilemma Fails to Tackle the Real Issues in Tech,” which takes a critical view of the argument put forward in the film:

Focusing instead on how existing inequalities intersect with technology would have opened up space for a different and more productive conversation. These inequalities actually influence the design choices that the film so heavily focuses on—more specifically, who gets to make these choices.

https://slate.com/technology/2020/09/social-dilemma-netflix-technology.html

From “The Risk Makers: Viral hate, election interference, and hacked accounts: inside the tech industry’s decades-long failure to reckon with risk”

The internet’s “condition of harm” and its direct relation to risk is structural. The tech industry — from venture capitalists to engineers to creative visionaries — is known for its strike-it-rich Wild West individualistic ethos, swaggering risk-taking, and persistent homogeneity. Some of this may be a direct result of the industry’s whiteness and maleness. For more than two decades, studies have found that a specific subset of men, in the U.S. mostly white, with higher status and a strong belief in individual efficacy, are prone to accept new technologies with greater alacrity while minimizing their potential threats — a phenomenon researchers have called the “white-male effect,” a form of cognition that protects status. In the words of one study, the findings expose “a host of new practical and moral challenges for reconciling the rational regulation of risk with democratic decision making.”

https://onezero.medium.com/the-risk-makers-720093d41f01

 

AI ethics groups are repeating one of society’s classic mistakes

Too many councils and advisory boards still consist mostly of people based in Europe or the United States.

International organizations and corporations are racing to develop global guidelines for the ethical use of artificial intelligence. Declarations, manifestos, and recommendations are flooding the internet. But these efforts will be futile if they fail to account for the cultural and regional contexts in which AI operates…

This work is not easy or straightforward. “Fairness,” “privacy,” and “bias” mean different things (pdf) in different places. People also have disparate expectations of these concepts depending on their own political, social, and economic realities. The challenges and risks posed by AI also differ depending on one’s locale.

https://www.technologyreview.com/2020/09/14/1008323/ai-ethics-representation-artificial-intelligence-opinion/

Facebook is out of control. If it were a country it would be North Korea

This is a company that facilitated an attack on a US election by a foreign power, that live-streamed a massacre then broadcast it to millions around the world, and helped incite a genocide.

I’ll say that again. It helped incite a genocide. A United Nations report says the use of Facebook played a “determining role” in inciting hate and violence against Myanmar’s Rohingya, which has seen tens of thousands die and hundreds of thousands flee for their lives.

https://www.theguardian.com/technology/2020/jul/05/facebook-is-out-of-control-if-it-were-a-country-it-would-be-north-korea

The man who built a spyware empire says it’s time to come out of the shadows

The business he leads, NSO Group, is the world’s most notorious spyware company. It’s at the center of a booming international industry in which high-tech firms find software vulnerabilities, develop exploits, and sell malware to governments. The Israeli-headquartered company has been linked to high-profile incidents including the murder of Jamal Khashoggi and spying against politicians in Spain…

We’ve gone full circle, arriving back in a thick tangle of secrecy. Money is flowing, abuses keep happening, and the hacking tools are proliferating: no one disputes that.

But who is accountable when brutal authoritarians get their hands on cutting-edge spyware to use against opponents? An already shadowy world is getting darker, and answers are becoming harder to come by.

 

The “Smirk seen ’round the world” Updated 7/28/2020

Update: Most of what’s below was posted in January 2019. Since then, the boy on the left of the image filed defamation lawsuits against several news agencies, a few of which have settled. Here are a couple of articles about those lawsuits and their resolution. This topic also fits well with the new course concepts around knowledge and the knower, knowledge and technology, and knowledge and politics.

CNN Settles Lawsuit Brought by Covington Catholic Student Nicholas Sandmann (1/7/2020)

Numerous national media outlets painted Sandmann and his classmates as menacing — and in some cases racist — after an edited video emerged of Sandmann smiling, inches away from the face of Nathan Phillips, an elderly Native American man, while attending the March for Life on the National Mall. A more complete video of the encounter, which emerged later, showed that Phillips had approached the Covington students and begun drumming in their faces, prompting them to respond with school chants.

https://www.nationalreview.com/news/cnn-settles-lawsuit-brought-by-covington-catholic-student-nicholas-sandmann/

And another from 7/24/2020

https://thehill.com/homenews/media/508905-nicholas-sandmann-announces-settlement-with-washington-post-in-defamation

An interesting situation from a TOK perspective. Below is a collection of articles about the topic. They raise a lot of interesting questions about how we acquire knowledge and the relationships among the various ways of knowing. The topic also lends itself to asking about the primacy of some WOKs over others.

 

Download Lesson plan on “the smirk”

Download smirk articles handout

TOK Day 31 (daily student worksheet)

What’s also interesting is how impactful the image was. It seemed to be a perfect representation of how many people view the current moment in the United States. It fit neatly into prior assumptions about the world and spoke to a deeper truth. Interpreting and explaining this image, and fitting it into preexisting mental schemas, seemed pretty easy.

Once more and more videos started to emerge and the greater context became known, there were some interesting developments. Continue reading “The ‘Smirk seen ’round the world’ Updated 7/28/2020”

Twitter aims to limit people sharing articles they have not read

The problem of users sharing links without reading them is not new. A 2016 study from computer scientists at Columbia University and Microsoft found that 59% of links posted on Twitter are never clicked.

Twitter’s solution is not to ban such retweets, but to inject “friction” into the process, in order to try to nudge some users into rethinking their actions on the social network. It is an approach the company has been taking more frequently recently, in an attempt to improve “platform health” without facing accusations of censorship.

https://www.theguardian.com/technology/2020/jun/11/twitter-aims-to-limit-people-sharing-articles-they-have-not-read

If AI is going to help us in a crisis, we need a new kind of ethics

AI has the potential to save lives but this could come at the cost of civil liberties like privacy. How do we address those trade-offs in ways that are acceptable to lots of different people? We haven’t figured out how to deal with the inevitable disagreements.

AI ethics also tends to respond to existing problems rather than anticipate new ones. Most of the issues that people are discussing today around algorithmic bias came up only when high-profile things went wrong, such as with policing and parole decisions.

https://www.technologyreview.com/2020/06/24/1004432/ai-help-crisis-new-kind-ethics-machine-learning-pandemic/

Predictive policing algorithms are racist. They need to be dismantled.

The kids Milner watched being arrested were being set up for a lifetime of biased assessment because of that arrest record. But it wasn’t just their own lives that were affected that day. The data generated by their arrests would have been fed into algorithms that would disproportionately target all young Black people the algorithms assessed. Though by law the algorithms do not use race as a predictor, other variables, such as socioeconomic background, education, and zip code, act as proxies. Even without explicitly considering race, these tools are racist.

https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/
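The proxy-variable effect the excerpt describes can be sketched in a few lines of Python. This is a toy illustration with entirely hypothetical numbers (the zip codes, groups, and arrest counts are invented, not taken from the article): a score that never sees race can still produce racially disparate outputs when one of its inputs is correlated with race.

```python
# Toy illustration of a proxy variable (all data hypothetical).
# The risk model below never sees "group", only zip code. But because
# historical arrests are concentrated in the zips where group B lives,
# its scores still differ sharply by group.

# Hypothetical residents: (zip_code, group); group is NOT a model input.
population = (
    [("10001", "A")] * 50 + [("10002", "A")] * 50 +
    [("10003", "B")] * 50 + [("10004", "B")] * 50
)

# Hypothetical historical arrest counts, reflecting heavier policing
# of the zips where group B happens to live.
arrests_per_zip = {"10001": 5, "10002": 5, "10003": 40, "10004": 40}

def risk_score(zip_code):
    """'Predict' risk purely from the zip's arrests per resident."""
    return arrests_per_zip[zip_code] / 50

def mean_score(group):
    """Average predicted risk across the residents of one group."""
    scores = [risk_score(z) for z, g in population if g == group]
    return sum(scores) / len(scores)

print(round(mean_score("A"), 2))  # group A's average predicted risk: 0.1
print(round(mean_score("B"), 2))  # group B's average predicted risk: 0.8
```

Even though group membership is never an input, the output tracks it almost perfectly; any variable this strongly correlated with a protected attribute acts as a stand-in for it, which is what the article means by a proxy.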

A second, related article:

 

How America Lost Faith in Expertise And Why That’s a Giant Problem

It’s not just that people don’t know a lot about science or politics or geography. They don’t, but that’s an old problem. The bigger concern today is that Americans have reached a point where ignorance—at least regarding what is generally considered established knowledge in public policy—is seen as an actual virtue. To reject the advice of experts is to assert autonomy, a way for Americans to demonstrate their independence from nefarious elites—and insulate their increasingly fragile egos from ever being told they’re wrong…

I fear we are moving beyond a natural skepticism regarding expert claims to the death of the ideal of expertise itself: a Google-fueled, Wikipedia-based, blog-sodden collapse of any division between professionals and laypeople, teachers and students, knowers and wonderers—in other words, between those with achievement in an area and those with none. By the death of expertise, I do not mean the death of actual expert abilities, the knowledge of specific things that sets some people apart from others in various areas. There will always be doctors and lawyers and engineers and other specialists. And most sane people go straight to them if they break a bone or get arrested or need to build a bridge. But that represents a kind of reliance on experts as technicians, the use of established knowledge as an off-the-shelf convenience as desired. “Stitch this cut in my leg, but don’t lecture me about my diet.”

https://www.foreignaffairs.com/articles/united-states/2017-02-13/how-america-lost-faith-expertise

Of course technology perpetuates racism. It was designed that way.

We often call on technology to help solve problems. But when society defines, frames, and represents people of color as “the problem,” those solutions often do more harm than good. We’ve designed facial recognition technologies that target criminal suspects on the basis of skin color. We’ve trained automated risk profiling systems that disproportionately identify Latinx people as illegal immigrants. We’ve devised credit scoring algorithms that disproportionately identify black people as risks and prevent them from buying homes, getting loans, or finding jobs.

https://www.technologyreview.com/2020/06/03/1002589/technology-perpetuates-racism-by-design-simulmatics-charlton-mcilwain/