Netflix Documentary: The Social Dilemma…and related articles

Here are a couple of posts around the theme of Knowledge and Technology. Netflix has recently put out a documentary called “The Social Dilemma” (trailer linked below). It touches upon some commonly discussed themes around the dangers of communications technologies and social media. 

What’s interesting is that even where people agree the outcomes are problematic, they disagree about the root causes.

This is just a great line from a New York Times article:

The trouble with the internet, Mr. Williams says, is that it rewards extremes. Say you’re driving down the road and see a car crash. Of course you look. Everyone looks. The internet interprets behavior like this to mean everyone is asking for car crashes, so it tries to supply them. 

from: ‘The Internet Is Broken’: @ev Is Trying to Salvage It

 

From “The Social Dilemma Fails to Tackle the Real Issues in Tech,” which takes a critical view of the argument put forward in The Social Dilemma:

Focusing instead on how existing inequalities intersect with technology would have opened up space for a different and more productive conversation. These inequalities actually influence the design choices that the film so heavily focuses on—more specifically, who gets to make these choices.

https://slate.com/technology/2020/09/social-dilemma-netflix-technology.html

From “The Risk Makers: Viral hate, election interference, and hacked accounts: inside the tech industry’s decades-long failure to reckon with risk”

The internet’s “condition of harm” and its direct relation to risk is structural. The tech industry — from venture capitalists to engineers to creative visionaries — is known for its strike-it-rich Wild West individualistic ethos, swaggering risk-taking, and persistent homogeneity. Some of this may be a direct result of the industry’s whiteness and maleness. For more than two decades, studies have found that a specific subset of men, in the U.S. mostly white, with higher status and a strong belief in individual efficacy, are prone to accept new technologies with greater alacrity while minimizing their potential threats — a phenomenon researchers have called the “white-male effect,” a form of cognition that protects status. In the words of one study, the findings expose “a host of new practical and moral challenges for reconciling the rational regulation of risk with democratic decision making.”

https://onezero.medium.com/the-risk-makers-720093d41f01

 

AI ethics groups are repeating one of society’s classic mistakes

Too many councils and advisory boards still consist mostly of people based in Europe or the United States.

International organizations and corporations are racing to develop global guidelines for the ethical use of artificial intelligence. Declarations, manifestos, and recommendations are flooding the internet. But these efforts will be futile if they fail to account for the cultural and regional contexts in which AI operates…

This work is not easy or straightforward. “Fairness,” “privacy,” and “bias” mean different things (pdf) in different places. People also have disparate expectations of these concepts depending on their own political, social, and economic realities. The challenges and risks posed by AI also differ depending on one’s locale.

Facebook is out of control. If it were a country it would be North Korea

This is a company that facilitated an attack on a US election by a foreign power, that live-streamed a massacre then broadcast it to millions around the world, and helped incite a genocide.

I’ll say that again. It helped incite a genocide. A United Nations report says the use of Facebook played a “determining role” in inciting hate and violence against Myanmar’s Rohingya, which has seen tens of thousands die and hundreds of thousands flee for their lives.

https://www.theguardian.com/technology/2020/jul/05/facebook-is-out-of-control-if-it-were-a-country-it-would-be-north-korea

The man who built a spyware empire says it’s time to come out of the shadows

The business he leads, NSO Group, is the world’s most notorious spyware company. It’s at the center of a booming international industry in which high-tech firms find software vulnerabilities, develop exploits, and sell malware to governments. The Israeli-headquartered company has been linked to high-profile incidents including the murder of Jamal Khashoggi and spying against politicians in Spain…

We’ve gone full circle, arriving back in a thick tangle of secrecy. Money is flowing, abuses keep happening, and the hacking tools are proliferating: no one disputes that.

But who is accountable when brutal authoritarians get their hands on cutting-edge spyware to use against opponents? An already shadowy world is getting darker, and answers are becoming harder to come by.

 

The “Smirk seen ’round the world” (Updated 7/28/2020)

Update: Most of what’s below was posted in January 2019. Since then, the boy on the left of the image filed defamation lawsuits against several news agencies, and a few of them have settled. Here are a couple of articles about those lawsuits and their resolution. This topic also fits well with the new course concepts around knowledge and knower, knowledge and technology, and knowledge and politics.

CNN Settles Lawsuit Brought by Covington Catholic Student Nicholas Sandmann (1/7/2020)

Numerous national media outlets painted Sandmann and his classmates as menacing — and in some cases racist — after an edited video emerged of Sandmann smiling, inches away from the face of Nathan Phillips, an elderly Native American man, while attending the March for Life on the National Mall. A more complete video of the encounter, which emerged later, showed that Phillips had approached the Covington students and begun drumming in their faces, prompting them to respond with school chants.

https://www.nationalreview.com/news/cnn-settles-lawsuit-brought-by-covington-catholic-student-nicholas-sandmann/

And another from 7/24/2020

https://thehill.com/homenews/media/508905-nicholas-sandmann-announces-settlement-with-washington-post-in-defamation

This is an interesting situation from a TOK perspective. Below is a collection of articles about the topic. They raise a lot of interesting questions about how we acquire knowledge and the relationships among the various ways of knowing. It also lends itself to asking about the primacy of some WOKs over others.

 

Download Lesson plan on “the smirk”

Download smirk articles handout

TOK Day 31 (daily student worksheet)

What’s also interesting is how impactful the image was. The image seemed to be a perfect representation of how many people view the current moment in the United States. It fit perfectly into prior assumptions about the world and spoke to a deeper truth. Interpreting and explaining this image, and fitting it into preexisting mental schemas, seemed pretty easy.

Once more and more videos started to emerge and the greater context became known, there were some interesting developments. Some people…

Twitter aims to limit people sharing articles they have not read

The problem of users sharing links without reading them is not new. A 2016 study from computer scientists at Columbia University and Microsoft found that 59% of links posted on Twitter are never clicked.

Twitter’s solution is not to ban such retweets, but to inject “friction” into the process, in order to try to nudge some users into rethinking their actions on the social network. It is an approach the company has been taking more frequently recently, in an attempt to improve “platform health” without facing accusations of censorship.

https://www.theguardian.com/technology/2020/jun/11/twitter-aims-to-limit-people-sharing-articles-they-have-not-read

Using computers to teach children with no teachers

The article summarizes what Mitra spoke about in his TED talk (linked below). This tells us a lot about the role technology can play in education, along with the role of intrinsic motivation and self-guided learning. Our recent foray into remote learning calls into question a lot of our assumptions about education.

One group in Rajasthan, he said, learnt how to record and play music on the computer within four hours of it arriving in their village.

“At the end of it we concluded that groups of children can learn to use computers on their own irrespective of who or where they are,” he said.

https://www.bbc.com/news/technology-10663353

 

If AI is going to help us in a crisis, we need a new kind of ethics

AI has the potential to save lives but this could come at the cost of civil liberties like privacy. How do we address those trade-offs in ways that are acceptable to lots of different people? We haven’t figured out how to deal with the inevitable disagreements.

AI ethics also tends to respond to existing problems rather than anticipate new ones. Most of the issues that people are discussing today around algorithmic bias came up only when high-profile things went wrong, such as with policing and parole decisions.

https://www.technologyreview.com/2020/06/24/1004432/ai-help-crisis-new-kind-ethics-machine-learning-pandemic/

Carl Sagan on science, progress, technology, and ignorance

Below are two passages from his book, The Demon-Haunted World: Science as a Candle in the Dark, that are especially prescient about our current moment. The book was published in 1995.

“We’ve arranged a global civilization in which most crucial elements profoundly depend on science and technology. We have also arranged things so that almost no one understands science and technology. This is a prescription for disaster. We might get away with it for a while, but sooner or later this combustible mixture of ignorance and power is going to blow up in our faces…

“I worry that…pseudoscience and superstition will seem year by year more tempting, the siren song of unreason more sonorous and attractive. Where have we heard it before? Whenever our ethnic or national prejudices are aroused, in times of scarcity, during challenges to national self-esteem or nerve, when we agonize about our diminished cosmic place and purpose or when fanaticism is bubbling up around us– then, habits of thought familiar from ages past reach for the controls. The candle flame gutters. Its little pool of light trembles. Darkness gathers. The demons begin to stir…

“Science is more than a body of knowledge; it is a way of thinking. I have a foreboding of an America in my children’s or grandchildren’s time — when the United States is a service and information economy; when nearly all the key manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what’s true, we slide, almost without noticing, back into superstition and darkness. The dumbing down of America is most evident in the slow decay of substantive content in the enormously influential media, the 30-second sound bites (now down to 10 seconds or less), the lowest common denominator programming, credulous presentations on pseudoscience and superstition, but especially a kind of celebration of ignorance.”


Jacob Bronowski on the dangers of dogma

Relatedly, here is a clip from the 1970s television adaptation of the book, The Ascent of Man by Jacob Bronowski.

Jacob Bronowski explains why the pursuit of science is better than seeking absolute knowledge.

Predictive policing algorithms are racist. They need to be dismantled.

The kids Milner watched being arrested were being set up for a lifetime of biased assessment because of that arrest record. But it wasn’t just their own lives that were affected that day. The data generated by their arrests would have been fed into algorithms that would disproportionately target all young Black people the algorithms assessed. Though by law the algorithms do not use race as a predictor, other variables, such as socioeconomic background, education, and zip code, act as proxies. Even without explicitly considering race, these tools are racist.

https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/
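The proxy mechanism the excerpt describes can be made concrete with a toy simulation. Everything here is invented for illustration (the two zip codes, the probabilities, the groups “A” and “B”): the point is only that a “race-blind” risk score, trained on historically biased arrest data, can still produce racially disparate predictions because zip code acts as a proxy for race.

```python
import random

random.seed(0)

# Hypothetical synthetic population. Race is NEVER an input to the model,
# but residential segregation makes zip code correlate strongly with race.
def make_person():
    race = random.choice(["A", "B"])
    if race == "B":
        zip_code = 2 if random.random() < 0.8 else 1
    else:
        zip_code = 1 if random.random() < 0.8 else 2
    # The "training data" is itself biased: zip 2 was historically
    # over-policed, so arrests there are recorded at a higher rate.
    arrested = random.random() < (0.30 if zip_code == 2 else 0.10)
    return {"race": race, "zip": zip_code, "arrested": arrested}

population = [make_person() for _ in range(10_000)]

# A "race-blind" model: risk is just the arrest base rate of your zip code.
base_rate = {}
for z in (1, 2):
    in_zip = [p for p in population if p["zip"] == z]
    base_rate[z] = sum(p["arrested"] for p in in_zip) / len(in_zip)

def risk_score(person):
    return base_rate[person["zip"]]

# Average predicted risk by race, even though race was never an input:
for race in ("A", "B"):
    group = [p for p in population if p["race"] == race]
    avg = sum(risk_score(p) for p in group) / len(group)
    print(race, round(avg, 3))
```

Group B ends up with a substantially higher average risk score than group A, despite the model never seeing race. Removing the race variable did nothing, because the bias lives in the correlated proxy and in the historical data itself.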

A second, related article: