Hello!
We wanted to start this newsletter off with a joke but couldn't think of one.
We then realized an AI could probably do a better job than us.
As it turns out, a team of researchers at Stanford is taking pun-generating AIs to the next level, finding ways to train AI systems with creative wit. Their new approach provides a neural network with a pair of homophones to build a pun around, and members of the AI research community, including Roger Levy, the director of MIT's psycholinguistics lab, have applauded the work. Many regard humor as a distinctly human skill; finding ways to bring more human intelligence to neural nets marks a key advance in AI research.
Who knows? Pretty soon, AI might be writing this newsletter for us. But until then...
Let’s Face It
A study from earlier this year concluded that facial recognition technology may be able to determine political orientation just by looking at your face. (:-o)
Methods
As this study was mainly focused on evaluating existing privacy threats related to facial recognition technology (as opposed to creating new ones), the researchers used an open-source facial recognition algorithm instead of developing one specifically for political affiliation. The system used a logistic regression model to predict someone’s self-reported political affiliation (which was classified as either conservative or liberal).
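For the technically curious, the modeling step the study describes, a logistic regression over numerical face descriptors, looks roughly like this. Everything below (the 512-dimensional embeddings, the random stand-in data, the scikit-learn setup) is our own illustrative assumption, not a detail from the paper:

```python
# A minimal sketch of the described pipeline: logistic regression over
# precomputed face descriptors. The 512-dim embeddings and random labels
# are stand-ins; the study's actual features and data differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: one descriptor vector per face, plus a binary
# self-reported label (0 = liberal, 1 = conservative).
X = rng.normal(size=(1000, 512))
y = rng.integers(0, 2, size=1000)

clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```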
Results
The accuracy of this system was measured as the fraction of correct guesses when comparing pairs of faces, one conservative and one liberal (see the sketch after this list). Here are the results:
When comparing faces on a US dating website, the system was accurate 72% of the time.
Similar results were shown for dating websites in Canada and the UK, where the system was correct 68% and 67% of the time, respectively.
For comparison, humans were only able to guess the correct political affiliation 55% of the time -- barely better than chance.
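For readers curious about the metric itself: comparing pairs means that, for each conservative-liberal pair, the guess counts as correct when the model gives the conservative face the higher "conservative" score, which is why 50% is pure chance. Here's a toy sketch with made-up scores (the scoring setup and numbers are ours, not the study's):

```python
# Toy illustration of the pairwise metric: a guess is correct when the
# conservative face in a pair outscores the liberal face. Scores here
# are invented; in practice they would come from the trained classifier.
import itertools

def pairwise_accuracy(cons_scores, lib_scores):
    """Fraction of (conservative, liberal) pairs ranked correctly,
    counting ties as half credit."""
    pairs = list(itertools.product(cons_scores, lib_scores))
    correct = sum(1.0 if c > l else 0.5 if c == l else 0.0
                  for c, l in pairs)
    return correct / len(pairs)

print(pairwise_accuracy([0.70, 0.60, 0.55], [0.40, 0.65, 0.30]))  # ~0.78
```

Incidentally, this pairwise quantity is mathematically the same as the area under the ROC curve, a standard way of reporting classifier performance.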
Massachusetts' Breakthrough AI Law
Regulating the use of facial recognition in criminal cases has been an ongoing challenge for lawmakers. AI ethicists have pointed to the technology's inaccuracy when it comes to identifying women and people of color.
But at the same time, automated facial recognition can be an incredibly powerful investigative tool: it has helped identify child molesters and, as described in a previous Hold The Code edition, the people who participated in the Jan. 6 riot at the Capitol.
Past all-or-nothing efforts
In weighing these pros and cons, lawmakers have historically fallen into two camps: those who’ve outright banned the use of facial recognition technology in criminal cases, and those who have not. City Councils in Oakland, Portland, San Francisco, Minneapolis, and elsewhere have banned police use of the technology, whereas other policymakers have refused to regulate the technology, citing its use in solving recent homicide and sexual abuse cases.
Massachusetts' flexible approach to regulation
What makes the new law in Massachusetts so interesting is that it strikes a difficult balance: it regulates the technology, allowing law enforcement to harness the tool's benefits while preventing the kinds of false arrests that have happened before. Here's how:
Local officers must get a judge’s permission before running a face recognition search.
Only someone from the state police, the F.B.I., or the Registry of Motor Vehicles may perform the search.
The law also creates a commission to study facial recognition policies and make recommendations, such as whether a criminal defendant should be told that they were identified using the technology.
A lot of the work surrounding the new bill has been attributed to Kade Crockford, an activist at the ACLU of Massachusetts. Describing the motivation behind her efforts, Crockford said:
“One of my concerns was that we would wake up one day in a world resembling that depicted in the Philip K. Dick novel ‘Minority Report,’ where everywhere you go, your body is tracked; your physical movements, habits, activities and locations are secretly compiled and tracked in a searchable database available to god knows who.”
Legal activists are optimistic that the work in Massachusetts can set a nationwide example, providing both space and opportunity for facial recognition technology to be used to its full, most ethical potential.
Prediction or Persuasion: How AI Influences Our Behavior
A recent study found that AI can exploit vulnerabilities in human habits to influence our behavior.
Before you freak out about having your mind controlled by an evil AI supercomputer, you’ll be glad to hear that this study only tested the ability of an AI system to influence human study participants in very limited, game-like settings.
What were the results?
This study was run by CSIRO’s Data61, the digital arm of Australia’s national science agency, and found that an AI system was able to influence participant behavior in a variety of game scenarios.
For example, in a game where participants had to press a button when they saw one shape (like an orange triangle) and abstain from pressing this button when they saw a different shape (like a blue circle), the AI was able to increase the number of errors a participant made by 25% just by analyzing the participant’s behavior.
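The study's AI is more sophisticated than anything we can fit in a newsletter, but a toy version conveys the core idea: watch for a pattern in a participant's mistakes, then schedule trials to exploit it. The participant model, error rates, and scheduling rule below are all invented for illustration:

```python
# Toy go/no-go experiment: a simulated participant is more likely to
# mistakenly press on a "no-go" trial after a long run of "go" trials,
# and an adversarial schedule exploits exactly that tendency.
# All behavioral assumptions and numbers here are invented.
import random

random.seed(0)

def respond(stimulus, go_streak):
    """Simulated participant: always presses on 'go'; on 'no-go', the
    chance of a mistaken press grows with the preceding run of 'go's."""
    if stimulus == "go":
        return "press"
    p_error = min(0.05 + 0.15 * go_streak, 0.9)
    return "press" if random.random() < p_error else "withhold"

def count_errors(schedule):
    errors, streak = 0, 0
    for stim in schedule:
        if stim == "no-go" and respond(stim, streak) == "press":
            errors += 1
        streak = streak + 1 if stim == "go" else 0
    return errors

N_NOGO, N_GO = 20, 80
# Baseline: no-go trials scattered at random among the go trials.
baseline = ["no-go"] * N_NOGO + ["go"] * N_GO
random.shuffle(baseline)
# Adversarial: every no-go trial placed right after four go trials.
adversarial = (["go"] * 4 + ["no-go"]) * N_NOGO

print("baseline errors:   ", count_errors(baseline))
print("adversarial errors:", count_errors(adversarial))
```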
RAISO’s take
Although the article does touch on the importance of data privacy and regulation, there is a noticeable lack of discussion of this technology's harmful applications (perhaps not coincidentally, the article's author is Jon Whittle, the director of CSIRO’s Data61). The author notes some positive applications of this sort of system, such as training someone to have healthier eating habits, but a technology that can nudge someone to eat an apple instead of a chocolate bar could just as easily nudge them to eat a chocolate bar instead of an apple.
The article even mentions influencing policy and public opinion as potential future applications. Here at RAISO, we wonder: at what point does this influence become manipulation, and how do we classify whether a given application of this technology is good, bad, or somewhere in between?
Weekly Feature: A Review of "AI Isn't Going to Save Us"
Payal Arora, a digital anthropologist and the author of The Next Billion Users, recently published a piece called "AI Isn't Going to Save Us." She argues that the effectiveness of AI-driven solutions hinges on the extent to which they "invest in the human and not just the machine."
She builds her argument around the example of joint efforts by Intel, Microsoft, Google, and Alibaba to implement a tech solution to the elephant-poaching crisis. The tech giants have built computational networks that harness AI to capture images of suspected poachers and alert rangers. Designed to process vast amounts of data with extraordinary speed and accuracy, the technology exemplifies the renewed faith in AI to save our planet.
But here's the problem
Rangers in Africa cannot fully utilize the technology because they themselves lack access to basic necessities. According to a 2016 World Wildlife Fund report, 82% of rangers had faced a life-threatening situation while on duty. Many said they were inadequately armed, had limited access to vehicles and to training for combating organized crime, and lacked sufficient boots, shelter, and clean water. In short: tracking information on poachers is the least of their problems. Investing in humans could have even more impact than investing in machines.
The implications
Arora doesn't deny that AI has the potential to "make everyone's life better for the entire world." She even claims, "All technology is innately assistive." But she questions AI-enabled solutionism: an attitude within the tech world that blindly assumes all technology is "making the world a better place." Indeed, Arora writes, "sometimes a Silicon Valley solution only exacerbates the problem."
RAISO's take
Arora urges us to be critical of the way we evaluate AI solutions: we must not lose sight of humanity when learning about the machine. Here at RAISO, we couldn't agree more. Arora isn't denying that AI is advancing society in incredible ways, but we share her concern that AI cannot reach its full potential unless we consider both the machine and the person running it.
Written by Lex Verb and Molly Pribble