Hello!
As winter quarter begins to wrap up at Northwestern, your friends here at Hold the Code want to take the opportunity to thank you for being part of our organization. What began as an idea from RAISO co-presidents Bijal Mehta and Mason Secky-Koebel has grown into a community of over 90 students from diverse backgrounds, bound by a common interest in learning about ethical, human-centered applications of AI and CTech.
If you're interested in helping us write Hold the Code, please contact us; we'd love to have you on board.
Without further ado, here are this week's top stories.
When Will We Ban Deepfake Porn?
Activists and experts hope the answer is soon.
What are deepfakes?
Deepfakes use AI to create believable audio or video hoaxes. The technology gained popularity on Reddit, where it was used to create fake celebrity porn. To this day, fake pornography remains its most popular use, and the overwhelming majority of deepfake videos are non-consensual porn of women.
Deepfakes by the numbers:
Since December 2018, 90-95% of all deepfake videos have been non-consensual porn
90% of those videos depict women
Effects and legal status
The consequences of deepfake porn can be just as devastating as the effects of revenge porn (real intimate photos or videos released without consent). In the past, victims of deepfake porn have had to change their names or even remove themselves completely from the internet.
In the US, 46 states have bans on revenge porn, but only two (California and Virginia) ban deepfake porn. Still, activists in the US and abroad are hopeful for new legislation that protects victims of deepfake porn. Vice President Kamala Harris has been an advocate for a federal ban on revenge porn, and Congresswoman Yvette Clarke, who introduced a bill targeting deepfakes in 2019, plans to reintroduce a revised version of the bill in the next few weeks.
Facebook vs. Facebook - How Their Oversight Board Acts as a Supreme Court
Facebook now has 3 billion users - more than a third of humanity.
Many of these users treat Facebook as their primary news source, a dynamic that has suffused the platform with toxic misinformation. Some countries, like Germany, have passed laws attempting to curb the dissemination of hate speech, but such laws are enforceable only within their own borders. In the United States, the question of how to regulate Facebook has run up against the First Amendment, becoming a challenging topic of legal debate. Consequently, Facebook has been left to make difficult decisions about speech largely on its own.
Noah Feldman: “Facebook needs a supreme court”
That’s where Noah Feldman, a Harvard Law School professor, comes in. A close friend of Sheryl Sandberg, Facebook’s COO, Feldman pitched the idea in 2018 that social-media companies needed “quasi-legal systems” to weigh difficult decisions around freedom of speech. Mark Zuckerberg agreed, noting that decisions about deleting individual, high-profile posts should be left to experts. From there, the Oversight Board was created.
How the board works:
As many as 200,000 Facebook posts become eligible for review every day.
The board chooses the most “representative” cases and hears each one before a panel of five members, who remain anonymous to the public.
Although there are no “oral arguments,” the user whose post is in question may submit a written brief arguing their case.
The panel’s decision, if ratified by all the members, becomes final.
Questions of power
Though many outside the company have wanted the board to have as much authority as possible, in reality its powers are limited. Many of Facebook’s most controversial posts (conspiracy theories, disinformation, and hate speech) are allowed to remain up. Most significantly, the board’s rulings do not become Facebook policy the way a Supreme Court precedent becomes the law of the land: even if the board decides to take down a post, similar posts are removed only at Facebook’s discretion.
For more reading
Read Kate Klonick’s first-hand glimpse into the Board’s inner workings. And check out this profile on the board’s first 20 members.
AI Commission Recommends ‘Military AI Readiness’ by 2025
A commission tasked with studying AI’s impact on national security has recommended that the federal government increase investment in AI research and development over the next few years.
How much are we spending on AI?
The report recommends that the federal government double its AI R&D budget every year until it reaches $32 billion in fiscal year 2026.
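To see how quickly annual doubling compounds, here is a minimal sketch of the schedule in Python. The $2 billion starting budget is a hypothetical figure chosen so the doubling lands on the report’s $32 billion target; it is not a number from the report.

```python
# Doubling schedule sketch. The FY2022 base of $2B is an
# assumption for illustration, not a figure from the report.
budget = 2.0  # billions of dollars (hypothetical FY2022 base)
for fiscal_year in range(2022, 2027):
    print(f"FY{fiscal_year}: ${budget:.0f}B")
    budget *= 2  # the report's recommended annual doubling
```

Four doublings take the hypothetical base from $2B to the $32B target in FY2026.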
AI in intelligence
The commission also set goals for the intelligence community to adopt more AI technologies at scale, including the creation of a “red team” dedicated to mitigating adversarial attacks on AI systems.
Future plans for an AI workforce
The commission also made a number of recommendations for recruiting a more robust AI workforce, including:
The US Digital Service Academy, a university whose students would agree to 5-year terms of government service after graduation
A new visa category for experts in emerging and disruptive technologies
A green card for international students who earn a Ph.D. in a STEM field from a US university
Weekly Feature: AI Knows You Best - Maybe
Imagine this: you’re on Zoom, in math class, and without even saying a word, your professor knows that you’re confused, frustrated, and on the brink of losing the rest of your attention.
That’s the vision of companies like Find Solution AI, which sell schools and colleges facial recognition technology that scans students’ faces and monitors their feelings in virtual classrooms. Though facial recognition has traditionally been used to verify identities, in recent years researchers and startups have sought ways to interpret facial expressions to understand what a person is feeling.
Find Solution AI, like most other emotion recognition startups, bases its technology on the work of Paul Ekman, a psychologist whose research on cross-cultural similarities in facial expressions produced the idea of “the seven universal emotions.”
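To make the pipeline concrete, here is a minimal sketch of the final classification step such a system might perform. Every label, score, and function name below is hypothetical, standing in for the face detector and trained model a real product would use.

```python
# Illustrative Ekman-style classification: report the
# highest-scoring emotion label for a single video frame.
# All scores here are made up; a real system would produce
# them with a face detector plus a trained expression model.
EKMAN_EMOTIONS = (
    "anger", "contempt", "disgust", "fear",
    "happiness", "sadness", "surprise",
)

def dominant_emotion(scores: dict) -> str:
    """Return the highest-scoring emotion label for one frame."""
    return max(scores, key=scores.get)

# Hypothetical per-frame scores from an expression model.
frame_scores = dict(zip(EKMAN_EMOTIONS,
                        (0.05, 0.02, 0.01, 0.10, 0.03, 0.14, 0.65)))
print(dominant_emotion(frame_scores))  # -> surprise
```

Note the leap the last line takes: a pattern of facial muscle movement is reported as the person’s inner state. That assumption is exactly what the critics below dispute.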
AI ethicists push back
Many researchers, however, disagree with Ekman’s theory of seven universal emotions. Furthermore, academics like Kate Crawford, co-founder of the AI Now Institute, have pointed out that facial muscle data does not correlate reliably with a person’s inner state, and that it is dangerous to assume it does. A meta-review of 1,000 studies found that people make the facial expression expected to match their emotional state only 20-30% of the time.
Additionally, there are serious ethical concerns about who the technology is used to surveil, and how. Thus far, it has been applied to populations with little power to refuse it, such as:
Children in virtual classrooms
Job candidates performing virtual interviews
Amazon workers with cameras on them while they deliver packages
And even people being questioned by the police
The dynamics created by this technology, and the harmful conclusions it can lead to, warrant further scrutiny, researchers from AI Now explain.
Read their full report here.
Written by Molly Pribble and Lex Verb