Hello!
Welcome to Hold The Code, Edition #11.
This week, we cover the EU's leaked proposals to limit the use of AI in society, how AI supported Pfizer's vaccine development, emotion recognition technology (ERT), and, finally, a recently published essay on AI's limited metacognitive abilities.
Thank you for being part of our community, and don't forget to subscribe and share!
New EU Regulations for AI Leaked
A new set of EU proposals that would ban AI designed to manipulate human behavior and place restrictions on “high-risk” systems was leaked to the public this week.
The list of banned AI predominantly targets systems that pose a risk of discrimination, including facial recognition software, but some policy experts believe the proposals are worded too vaguely, failing to capture the true complexities of AI and increasing the potential for loopholes. The categories of AI the proposals would ban are described below:
Those designed or used in a manner that manipulates human behavior, opinions or decisions ...causing a person to behave, form an opinion, or make a decision to their detriment
AI systems used for indiscriminate surveillance applied in a generalized manner
Those that exploit information or predictions about a person or group of persons in order to target their vulnerabilities
The proposals would also limit the use of public-sector algorithms that make crucial decisions, such as dispatching emergency services, evaluating credit, sorting students into educational institutions, predicting crime, and screening job applicants.
While the wording of the policy and its implications are still up for debate, Herbert Swaniker, a lawyer at Clifford Chance, described the impact it could have on AI manufacturers:
“AI vendors will be extremely focussed on these proposals, as it will require a fundamental shift in how AI is designed.”
The proposals are set to be officially unveiled next week, but they are unlikely to become law for several years.
How AI Assisted Pfizer’s Vaccine Development
Before Pfizer’s COVID-19 vaccine was approved this past December, the company was moving as quickly as possible to ensure development efforts were efficient and accurate. That meant building innovative systems to save time while the clock was ticking.
During the pandemic, Pfizer created a dashboard that monitored the effects of COVID-19 on clinical trial patients in real-time, cutting down on in-patient visits and saving time for researchers. The dashboard not only assisted in monitoring patients but also developed predictive models for COVID-19 outbreaks in specific counties, making it easier for researchers to pinpoint where to conduct clinical trials. The technology also helped researchers manage and gain insight from large volumes of data, and assisted them in creating a completely virtual drug application to the U.S. Food and Drug Administration.
Lidia Fonseca, Pfizer’s chief digital and technology officer, said that the company values their ability to adapt quickly and follow through with goals they set for themselves. "That keeps the organization focused on what’s most important,” she said.
if (smile == true) { person = happy; }
Imagine that as you walk through an airport, an AI system constantly monitors your facial expressions to determine whether you are a security threat. Or that in a job interview, a similar system rates your nervousness and dependability. These are examples of emotion recognition technology (ERT), a controversial class of systems that attempt to detect emotions from people’s facial expressions.
### facialExpression != emotion;
The theory of basic emotions (the principle that all emotions are biologically hard-wired into us and are always expressed consistently) has recently been challenged. This is a problem for ERT systems, since the theory is the foundation on which they are built. New anthropology studies suggest that emotions are expressed differently across cultures and societies, and the Association for Psychological Science recently concluded that there is no scientific support for the claim that a person’s emotional state can be readily inferred from their facial movements.
### bool biased = true;
Additionally, there is controversy over the biases encoded in these systems. A small but well-documented study found that ERTs consistently rated black people’s faces as angrier than white people’s faces, regardless of expression. Facial recognition systems are also notorious for having higher error rates for people of color. Even if a perfect, unbiased system were created, there would still be cause for concern. As Deborah Raji, an AI researcher, puts it:
“One way [these systems are concerning] is by not working: by virtue of having higher error rates for people of color, it puts them at greater risk. The second situation is when it does work — where you have the perfect facial recognition system, but it’s easily weaponized against communities to harass them.”
Weekly Feature: "What Separates Humans from AI? It's Doubt"
A recently published essay in the Financial Times claims that our ability to know when we don't know is what distinguishes humans from AI.
Although algorithms have proven more effective than humans in a range of domains, they differ from us in an ability psychologists refer to as metacognition.
"Metacognition is the capacity to think about our own thinking — to recognize when we might be wrong, for example, or when it would be wise to seek a second opinion."
Many AI researchers have concluded that AI tends to be overconfident. When a neural network is asked to generate an output for something it hasn't been trained to do, rather than throwing its rhetorical hands in the air and admitting defeat, AI often gives the wrong answer with high confidence.
In fact, a 2019 paper from Matthias Hein’s group at the University of Tübingen showed that, as test images become more and more different from the training data, the AI’s confidence goes up rather than down, exactly the opposite of what it should do.
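To make that concrete, here is a minimal, hypothetical sketch (in Python with PyTorch, not code from the Tübingen paper) of how this kind of confidence is usually read off a classifier: the maximum softmax probability is treated as the model's confidence, and for a ReLU network that number tends to creep toward 1 as an input is pushed far from anything training-like.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for a trained image classifier; names and sizes are illustrative.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
model.eval()

x = torch.randn(1, 64)  # an input resembling the "training data"
with torch.no_grad():
    for scale in [1.0, 3.0, 10.0, 30.0]:  # push the input further out of distribution
        probs = torch.softmax(model(x * scale), dim=-1)
        confidence = probs.max().item()   # max softmax probability = the reported "confidence"
        print(f"scale={scale:5.1f}  confidence={confidence:.3f}")
```

The model here is a random stand-in, so this only illustrates how confidence is measured, not a reproduction of the paper's experiments.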
So what?
Lacking metacognition prevents one from understanding "what we have lost and when we need a helping hand. The connection between our view of ourselves and the reality of our behavior becomes weakened."
In other words, without metacognition, it is impossible to feel uncertainty.
If AI systems are unable to estimate how much they do not know, some philosophers, like Daniel Dennett, fear that we may place too much trust in the machines and overestimate their competence.
Can we build introspective robots?
That's exactly what Stephen Fleming's lab at University College London aims to do. Using MRI techniques to track prefrontal cortex activity in humans when they're unsure about a correct answer, Fleming's lab is beginning to understand how our brains represent uncertainty.
Some of his colleagues at Oxford University have applied this work to build probability frameworks into neural networks.
Essentially, this means the computing system runs a problem through its neural network several times, each time with different settings. The algorithm then compares how similar the competing answers are, which serves as a basis for estimating its certainty. Computing with probabilities allows an AI to recognize when it hasn't encountered a scenario or image before, reducing its confidence in unfamiliar situations.
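The essay doesn't say exactly which "settings" are varied between runs. One common way to do this, and the assumption behind the sketch below, is Monte Carlo dropout: a different random dropout mask is applied on each pass, and the spread of the resulting answers is taken as an uncertainty estimate. All names here are illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical small classifier with dropout; the dropout mask is the "setting"
# that changes from one run to the next.
model = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(32, 10),
)
model.train()  # keep dropout active so every forward pass samples a new mask

x = torch.randn(1, 64)  # a single input, e.g. an image the model may never have seen

with torch.no_grad():
    runs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(50)])

mean_probs = runs.mean(dim=0)  # the averaged prediction across the 50 passes
spread = runs.std(dim=0)       # how much the passes disagree, class by class

print("predicted class:", mean_probs.argmax().item())
print("confidence (mean max probability):", mean_probs.max().item())
print("uncertainty (average disagreement):", spread.mean().item())
```

When the passes mostly agree, the answer can be trusted more; when they scatter, the system has a usable signal that it may be out of its depth, a rough computational analogue of knowing that you don't know.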
Read the full essay here.
Written by Sophie Lamb, Molly Pribble, and Lex Verb