Hello!
Evanston (along with a lot of the US) is currently an arctic tundra, and Northwestern is in the depths of midterms season. But, your friends here at RAISO hope to bring a smile to your frostbitten face and relevant AI news to your inbox with this week’s edition. As a reminder, we’re a non-engineer friendly newsletter that aims to connect students across fields in raising awareness and understanding about AI and contemporary technology (CTech).
You can join our Slack group here to stay up to date with our programs. We’re working on featuring speakers in the fields of ethical AI and AI research - join and be among the first to know (this is open to all).
Finally, don’t forget to subscribe if you haven't already.
The Robot Recruiters
Most college students are familiar with the recruitment and job-seeking process, a notoriously impersonal and challenging experience. That impersonality may only deepen as a growing number of companies expand their use of AI algorithms to decide whether to reject applicants, especially at the early stages of the hiring process.
Here are a few notable AI platforms:
HireVue: a platform whose algorithm analyzes footage of interviewees, evaluating their movements, speech, and language. It eliminates the need for an interviewer to even be present, giving the AI full discretion over who passes and who fails. A few companies that use this platform are Delta, GE, and SAS.
Pymetrics: a system that uses an aptitude test in a game-like format to make judgments about personality, intelligence, and other cognitive attributes in “just 25 minutes.” This platform is used by PwC, McDonald’s, and JP Morgan.
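To make the idea concrete, here is a deliberately simplified Python sketch of how an automated screener might turn an application into a pass/fail decision. Everything in it (the features, the weights, the cutoff) is hypothetical; this is not how HireVue, Pymetrics, or any real vendor works.

```python
# A toy scoring screener -- NOT any real vendor's system.
# Every feature, weight, and threshold below is made up for illustration.

WEIGHTS = {
    "years_experience": 0.5,
    "keywords_matched": 0.3,   # e.g., terms shared with the job posting
    "employment_gaps": -0.8,   # penalties like this are where bias can creep in
}
THRESHOLD = 1.0  # applicants scoring below this never reach a human

def screen(applicant: dict) -> bool:
    """Return True if the applicant advances to a human reviewer."""
    score = sum(WEIGHTS[feature] * value for feature, value in applicant.items())
    return score >= THRESHOLD

applicant = {"years_experience": 1, "keywords_matched": 4, "employment_gaps": 1}
print("advance" if screen(applicant) else "auto-reject")  # score 0.9 -> auto-reject
```

Even in this toy version, a single hand-picked weight (like the penalty for employment gaps) silently decides who never reaches a human reviewer, which is exactly the concern critics raise about these platforms.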
Before we worry about robots taking all of our jobs, we may need to worry about robots giving us our jobs in the first place.
Markets React to AI News
AI is shaping businesses and playing an increasingly important role in the markets. A clear example of this occurred last week when Palantir (PLTR) and IBM announced a global partnership.
The Basics
Palantir is a data analytics software company. They primarily sell to US government agencies, but they’re seeking to expand into the health care, energy, and manufacturing sectors.
The partnership between Palantir and IBM aims to accelerate business adoption of AI, leveraging both companies’ existing technologies to create a product that simplifies how businesses deploy AI-infused applications.
When the partnership was announced, shares of Palantir rose nearly 6%.
Why it Matters
According to an IBM study, nearly 75% of businesses surveyed said they were exploring or implementing AI, yet over 30% cited limited AI expertise and data complexity as barriers to adoption. IBM and Palantir’s joint product, “Palantir for IBM Cloud Pak for Data,” is specifically designed to let users access, analyze, and act on the vast amounts of data scattered across hybrid cloud environments, without the need for deep technical skills.
AI vs. The Environment
We tend to think of computers as small devices we can carry around in a briefcase or our back pocket. Because computers are relatively small and inexpensive to power, we forget the impact computing has on energy consumption and the environment. Supercomputers and computationally expensive algorithms require vast amounts of energy and resources. In fact, training a single AI model can emit as much carbon as five cars do over their entire lifetimes.
Common carbon footprint benchmarks (in pounds of carbon dioxide):
A roundtrip flight between NYC and SF → 1,984
The average American in 1 year → 36,156
The average lifetime of a car in the US (including fuel) → 126,000
Training a transformer neural network (213M parameters) with neural architecture search → 626,155
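The “five cars” figure follows directly from dividing the benchmark numbers above; here is a quick sanity check in Python:

```python
# Pounds of CO2, taken from the benchmarks above.
transformer_training = 626_155  # 213M-parameter transformer with architecture search
car_lifetime = 126_000          # the average US car over its lifetime
nyc_sf_roundtrip = 1_984        # one roundtrip flight between NYC and SF

print(transformer_training / car_lifetime)     # ~4.97, i.e., roughly five cars
print(transformer_training / nyc_sf_roundtrip) # ~315.6 roundtrip flights
```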
The environmental impact of training AI models is often overlooked, even by researchers. Siva Reddy, a postdoc at Stanford who works on NLP models, says, “What many of us did not comprehend is the scale of [our carbon footprint] until we saw these comparisons.”
Reddy continues, noting that “Human brains can do amazing things with little power consumption. The bigger question is how we can build such machines.”
Weekly Feature: A Profile on Timnit Gebru
Timnit Gebru is a trailblazing AI ethics researcher, highly regarded within the AI research community. She co-authored a groundbreaking paper showing that facial recognition is less accurate at identifying women and people of color, which makes the technology more likely to be used in ways that discriminate against them. She also co-founded the Black in AI affinity group and has consistently championed diversity in the tech industry. Most recently, she has been at the center of a controversy with Google, which she left amid tensions surrounding a paper she co-authored.
Background
Timnit was born and raised in Ethiopia and eventually received political asylum in the United States. She earned her bachelor’s, master’s, and doctoral degrees at Stanford University and worked at Apple and then Microsoft before accepting a position as co-lead of Google’s ethical AI team.
What Happened at Google
While many of the details surrounding Gebru’s departure are unclear, here’s what we know:
Gebru and her Google colleagues co-authored a paper titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”—which laid out the risks of large language models (AIs trained on extremely large amounts of text data).
The paper presented the history of natural language processing, as well as an overview of these four risks:
Large language models involve high environmental and financial costs.
Gathering data from the internet inevitably trains the AIs on harmful language.
There is a research opportunity cost: the time and resources poured into ever-larger models could be better spent elsewhere.
Large language models are deceptive: they mimic human language without understanding it, which can contribute to the spread of misinformation.
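To see what “mimicking without understanding” looks like, here is a tiny toy in Python: a bigram model that learns only which word tends to follow which, then generates text by sampling. Real large language models are neural networks trained on billions of words, but the core trick of predicting the next token from statistics is the same; the mini-corpus below is made up.

```python
import random
from collections import defaultdict

# Toy "language model": record which word follows which, then sample.
corpus = ("the model reads the text and the model writes the text "
          "and the reader trusts the text").split()

follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

random.seed(0)
word, output = "the", ["the"]
for _ in range(10):
    word = random.choice(follows[word])  # pick any word seen after this one
    output.append(word)

print(" ".join(output))  # fluent-looking word salad, produced with zero understanding
```

The output reads vaguely like English because the word statistics are right, not because anything was understood; that, at an enormously larger scale, is the paper’s “stochastic parrot” point.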
Google wasn’t too happy about the paper’s findings, many of which run counter to its corporate incentives. Google executives refused to let the paper be published, stating it “didn’t meet their publishing bar”.
This led to conflict, and in the aftermath, Gebru says Google fired her; Google says she resigned. Either way, her departure sparked outrage in the ethical AI community, and more than 1,400 Google employees signed this letter in protest.
Implications & Next Steps
Some have argued that Google’s actions could have “a chilling effect on the future of AI ethics research.” Given that many top experts in AI ethics work at large tech companies, misaligned incentives and a lack of scholarly openness can present immense barriers to future research.
In an interview with TechCrunch, Gebru said she doesn’t see herself working at another corporation. Instead, she aims to pursue ethical AI research within the non-profit space and build on her work with Black in AI.
Written by: Lex Verb and Molly Pribble.