Recommendations based on user preferences often reflect the biases of the world—in this case, the diversity problems that have long been apparent in media and modeling. Those biases have in turn shaped the world of online influencers, so that many of the most popular images are, by default, of people with lighter skin. An algorithm that interprets your behavior inside such a filter bubble might assume that you dislike people with darker skin. And it gets worse: recommendation algorithms are also known to have an anchoring effect, in which their output reinforces users’ unconscious biases and can even change their preferences over time.
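The feedback loop described above (exposure shapes clicks, clicks shape exposure, and exposure slowly shifts preference) can be sketched as a toy simulation. Everything here is invented for illustration: the two categories, the click probabilities, and the small "anchoring" drift are assumptions, not any platform's actual model.

```python
# Toy simulation of a recommendation feedback loop and its anchoring effect.
# A user starts with a mild preference for category A; the recommender
# allocates items in proportion to observed clicks, so the small initial
# bias compounds, and repeated exposure nudges the user's own preference.

def run_feedback_loop(initial_bias=0.55, rounds=50, items_per_round=10):
    """Return the share of category-A items shown in the first and last round."""
    clicks_a, clicks_b = 1.0, 1.0   # smoothed click counts per category
    bias = initial_bias             # user's probability of clicking an A item
    first_share = last_share = 0.0
    for step in range(rounds):
        share_a = clicks_a / (clicks_a + clicks_b)  # recommender's allocation
        if step == 0:
            first_share = share_a
        last_share = share_a
        shown_a = share_a * items_per_round
        shown_b = items_per_round - shown_a
        # Expected clicks: the user favors A, but can only click what is shown.
        clicks_a += shown_a * bias
        clicks_b += shown_b * (1.0 - bias)
        # Anchoring: heavy exposure to A slowly shifts the user's preference.
        bias = min(1.0, bias + 0.005 * (share_a - 0.5))
    return first_share, last_share

first, last = run_feedback_loop()
print(f"share of category A shown: round 1 = {first:.2f}, round 50 = {last:.2f}")
```

Running the sketch shows the feed drifting well past the user's starting 55% lean: a small initial tilt, fed back through the allocation rule, ends up dominating what is shown.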
The essay gets really interesting toward the end.
The moral distance a society creates from the killing done in its name will increase the killing done in its name. We allow technology to increase moral distance; thus, technology increases the killing. More civilians than combatants die in modern warfare, so technology increases worldwide civilian murder at the hands of armies large and small.
Since its inception, the perennial thorn in Facebook’s side has been content moderation: deciding what you and I are allowed to post on the site and what we’re not. Missteps by Facebook in this area have fueled everything from a genocide in Myanmar to viral disinformation surrounding politics and the coronavirus. Just this past year, however, conceding its failings, Facebook shifted its approach: it erected an independent body of twenty jurors that will make the final call on many of its thorniest decisions. This body has been called Facebook’s Supreme Court.
So today, in collaboration with The New Yorker and the New Yorker Radio Hour, we explore how this body came to be, what power it really has, and how the consequences of its decisions will be nothing short of life or death.
This article does a fascinating job of evaluating what the author calls “common knowledge,” a notion similar to the TOK concept of shared knowledge. It uses the idea to discuss the role of communities in forming beliefs and how modern technologies change the nature of common knowledge.
It’s only with the growth of communities of people interacting that most people gain such courage in their convictions to defy that which authoritative sources (media, political, corporate) deem to be acceptable narratives and acceptable norms. These communities generate more than validation of one’s preexisting beliefs. They generate the common knowledge that I know that many others feel the same as I do, others to whom I am joined in a community.
Science is built on a bedrock of trust, which typically involves sharing enough details about how research is carried out to enable others to replicate it, verifying results for themselves. This is how science self-corrects and weeds out results that don’t stand up. Replication also allows others to build on those results, helping to advance the field. Science that can’t be replicated falls by the wayside.
At least, that’s the idea. In practice, few studies are fully replicated because most researchers are more interested in producing new results than reproducing old ones. But in fields like biology and physics—and computer science overall—researchers are typically expected to provide the information needed to rerun experiments, even if those reruns are rare.
Here are a couple of posts around the theme of Knowledge and Technology. Netflix has recently put out a documentary called “The Social Dilemma” (trailer linked below). It touches upon some commonly discussed themes around the dangers of communications technologies and social media.
What’s interesting is that even where people agree the outcomes are problematic, they disagree about the root causes.
This is just a great line from a New York Times article:
The trouble with the internet, Mr. Williams says, is that it rewards extremes. Say you’re driving down the road and see a car crash. Of course you look. Everyone looks. The internet interprets behavior like this to mean everyone is asking for car crashes, so it tries to supply them.
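The “car crash” logic in that quote can be made concrete with a tiny sketch: rank the same items two ways, once by raw click-through rate (all an engagement optimizer sees) and once by whether people were glad they saw the item afterward (which the optimizer never measures). The titles and numbers below are made up purely for illustration.

```python
# Toy engagement ranking: when items are sorted purely by click-through
# rate, the "car crash" content tops the feed even though users, on
# reflection, value it least. All figures here are invented.

items = [
    {"title": "local zoning board minutes",  "ctr": 0.01, "glad_saw_it": 0.60},
    {"title": "ten-car pileup footage",      "ctr": 0.30, "glad_saw_it": 0.10},
    {"title": "long-form science explainer", "ctr": 0.04, "glad_saw_it": 0.80},
]

# A pure engagement optimizer ranks by what people click...
by_clicks = sorted(items, key=lambda i: i["ctr"], reverse=True)
# ...not by what they report valuing after the fact.
by_value = sorted(items, key=lambda i: i["glad_saw_it"], reverse=True)

print("engagement ranking:", [i["title"] for i in by_clicks])
print("reflective ranking:", [i["title"] for i in by_value])
```

The two orderings come out nearly reversed, which is the gap Williams is pointing at: reflexive attention and considered preference are different signals, and the system only optimizes the first.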
From “The Social Dilemma Fails to Tackle the Real Issues in Tech,” which takes a critical view of the argument put forward in The Social Dilemma:
Focusing instead on how existing inequalities intersect with technology would have opened up space for a different and more productive conversation. These inequalities actually influence the design choices that the film so heavily focuses on—more specifically, who gets to make these choices.
From “The Risk Makers: Viral hate, election interference, and hacked accounts: inside the tech industry’s decades-long failure to reckon with risk”
The internet’s “condition of harm” and its direct relation to risk is structural. The tech industry — from venture capitalists to engineers to creative visionaries — is known for its strike-it-rich Wild West individualistic ethos, swaggering risk-taking, and persistent homogeneity. Some of this may be a direct result of the industry’s whiteness and maleness. For more than two decades, studies have found that a specific subset of men, in the U.S. mostly white, with higher status and a strong belief in individual efficacy, are prone to accept new technologies with greater alacrity while minimizing their potential threats — a phenomenon researchers have called the “white-male effect,” a form of cognition that protects status. In the words of one study, the findings expose “a host of new practical and moral challenges for reconciling the rational regulation of risk with democratic decision making.”
Too many councils and advisory boards still consist mostly of people based in Europe or the United States.
International organizations and corporations are racing to develop global guidelines for the ethical use of artificial intelligence. Declarations, manifestos, and recommendations are flooding the internet. But these efforts will be futile if they fail to account for the cultural and regional contexts in which AI operates…
This work is not easy or straightforward. “Fairness,” “privacy,” and “bias” mean different things in different places. People also have disparate expectations of these concepts depending on their own political, social, and economic realities. The challenges and risks posed by AI also differ depending on one’s locale.
This is a company that facilitated an attack on a US election by a foreign power, that live-streamed a massacre then broadcast it to millions around the world, and helped incite a genocide.
I’ll say that again. It helped incite a genocide. A United Nations report says the use of Facebook played a “determining role” in inciting hate and violence against Myanmar’s Rohingya, which has seen tens of thousands die and hundreds of thousands flee for their lives.
The business he leads, NSO Group, is the world’s most notorious spyware company. It’s at the center of a booming international industry in which high-tech firms find software vulnerabilities, develop exploits, and sell malware to governments. The Israeli-headquartered company has been linked to high-profile incidents including the murder of Jamal Khashoggi and spying against politicians in Spain…
We’ve gone full circle, arriving back in a thick tangle of secrecy. Money is flowing, abuses keep happening, and the hacking tools are proliferating: no one disputes that.
But who is accountable when brutal authoritarians get their hands on cutting-edge spyware to use against opponents? An already shadowy world is getting darker, and answers are becoming harder to come by.
Update: Most of what’s below was posted January 2019. Since then, the boy on the left of the image filed defamation lawsuits against several news agencies, and a few of them have settled. Here are a couple of articles about those lawsuits and their resolution. This topic also fits well with the new course concepts around knowledge and knower, knowledge and technology, and knowledge and politics.
CNN Settles Lawsuit Brought by Covington Catholic Student Nicholas Sandmann (1/7/2020)
Numerous national media outlets painted Sandmann and his classmates as menacing — and in some cases racist — after an edited video emerged of Sandmann smiling, inches away from the face of Nathan Phillips, an elderly Native American man, while attending the March for Life on the National Mall. A more complete video of the encounter, which emerged later, showed that Phillips had approached the Covington students and begun drumming in their faces, prompting them to respond with school chants.
And another from 7/24/2020
Interesting situation from a TOK perspective. Below is a collection of articles about the topic. They raise a lot of interesting questions about how we acquire knowledge and the relationships among the various ways of knowing. The situation also lends itself to questions about the primacy of some WOKs over others.
TOK Day 31 (daily student worksheet)
What’s also interesting is how impactful the image was. The image seemed to be a perfect representation of how many people view the current moment in the United States. It fit perfectly into prior assumptions about the world and spoke to a deeper truth. Interpreting and explaining this image, and fitting it into preexisting mental schema, seemed pretty easy.
Once more and more videos started to emerge and the greater context became known, there were some interesting developments.