Hello!
From an entire California school board resigning after being caught on camera mocking parents, to the #TedFled scandal being confirmed when a United Airlines employee leaked Ted Cruz’s flight information to the press, this week (like every week) has shown the powers and perils of living in a data-saturated society. Hold The Code is a community passionate about exploring these topics, and we thank you for being part of it. Without further ado, here are this week’s top stories.
Not In Our Backyard: NU Faces Lawsuit
As reported by Waverly Long of The Daily Northwestern, the university is facing a lawsuit accusing it of improperly capturing and storing students’ biometric data. The suit was filed in late January by an anonymous junior, who alleges that Northwestern violated the Illinois Biometric Information Privacy Act.
Biometric data & its regulations
Biometrics is the analysis of people’s unique physical and behavioral characteristics; the technology is mainly used for identification. The Illinois Biometric Information Privacy Act (BIPA) was enacted to protect residents from companies that collect this type of data. Under the law, companies must fully inform users about what is being collected and why, and obtain their explicit consent before collecting it.
The Northwestern lawsuit
In this age of remote learning, many Northwestern classes have been utilizing online test proctoring systems, such as Respondus and Examity.
The complaint alleges that, through these programs, Northwestern has been improperly capturing and storing “facial recognition data, facial detection data, recorded patterns of keystrokes, eye monitoring data, gaze monitoring data, and camera and microphone recordings.”
Students cannot opt out of these programs, and they are left in the dark about how the collected data is used.
Calls to ban online test proctoring software
Students at universities across the United States have pointed out the invasive nature of this software. The lawsuit cites petitions currently circulating at several institutions and references this Forbes article, which further highlights the privacy concerns at stake.
Further reading
Check out The Daily Northwestern’s reporting here.
The Firing of a Google AI Ethicist Sparks Debate
Ladies and gentlemen, it’s happened again: Google has fired yet another one of its top AI ethics researchers. On Friday, the company announced that it had fired Margaret Mitchell, the founder and co-head of its artificial intelligence ethics unit. The announcement comes three months after the controversial departure of Timnit Gebru, another senior figure on the same team.
Why was Mitchell fired?
Google claims that Mitchell violated the company’s code of conduct and security policies, but there is speculation about the circumstances of her dismissal and whether the alleged violations warranted it.
What are people saying?
Though many details surrounding the firing remain unclear, the news has sparked a debate over the function and necessity of AI ethics teams at tech companies, with opinions ranging from those who see evaluating the social effects of AI systems as essential to those who dismiss the field as “a way for humanities types to wedge themselves into a hot, high paying field.”
Users of Hacker News, a social news aggregator that focuses on computer science topics, have had no shortage of opinions on the necessity of AI ethics in the tech industry:
“Some of these ‘AI ethics researchers’ seem like wingnuts, and their profession appears to serve more of a PR purpose than a business purpose. What am I missing? Why are they so essential? Are these people the canaries in the coal mine, or do they simply exist because of paranoia and political correctness?”
“I think it's unwise to minimize the impact and significance of AI at this time - this is the same sort of thinking that prevented proactive and effective management of Social Networking and Big Data technologies.”
“I think most of the ‘AI ethics’ thing is two things: first, for people that are actual AI practitioners, it makes ‘AI’ seem a lot more powerful and interesting than it is. I think AI is a small part of the example problems you mentioned. Second, it's a way for humanities types to wedge themselves into a hot, high paying field.”
RAISO’s Take
Here at RAISO, we think it is extremely important to understand the social, political, and economic effects AI systems can have on our society. We believe it is vital for people to be educated and think critically about these systems, especially as they become increasingly integrated into our daily lives.
As the reach of AI continues to expand, these systems must be developed equitably, ethically, and responsibly.
Perseverance Lands on Mars
NASA’s Perseverance rover successfully touched down on Martian soil on Thursday after seven months of space travel. The mission is NASA’s most ambitious search for life on Mars since the Viking missions of the 1970s, and it is scheduled to last a full Martian year (roughly 687 Earth days).
Perseverance is outfitted with an advanced, AI-assisted instrument called the Planetary Instrument for X-ray Lithochemistry (or PIXL, if you don’t have that much time). What sets it apart from instruments flown on past missions is a precisely focused X-ray beam that can pinpoint surface features as small as a grain of salt. NASA is using this technology to look for textures and chemical signatures in Martian rocks that could indicate past life on Mars.
Positioning PIXL is an AI-powered hexapod, a device with six mechanical legs that control where the beam points. The hexapod autonomously decides how to execute the microscopic movements needed to aim PIXL’s beam with extreme precision.
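To give a flavor of what “autonomously executing microscopic movements” can look like, here is a deliberately simplified, hypothetical sketch of a closed-loop aiming routine. The names, step sizes, and tolerances are our own illustration and are not NASA’s actual PIXL flight software.

```python
# Hypothetical illustration only -- not NASA's PIXL control code.
# A toy closed-loop routine that nudges a beam toward a target point
# in small, bounded steps until it is within a set tolerance.

from dataclasses import dataclass


@dataclass
class BeamState:
    x_mm: float
    y_mm: float


def aim_beam(beam: BeamState, target: BeamState,
             step_mm: float = 0.0005, tol_mm: float = 0.001) -> BeamState:
    """Walk the beam toward the target in sub-millimeter increments."""
    while (abs(beam.x_mm - target.x_mm) > tol_mm
           or abs(beam.y_mm - target.y_mm) > tol_mm):
        # Take a small corrective step on each axis, never overshooting.
        dx = target.x_mm - beam.x_mm
        dy = target.y_mm - beam.y_mm
        beam.x_mm += max(-step_mm, min(step_mm, dx))
        beam.y_mm += max(-step_mm, min(step_mm, dy))
    return beam


# Example: walk the beam onto a feature roughly the size of a grain of salt.
print(aim_beam(BeamState(0.0, 0.0), BeamState(0.12, -0.05)))
```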
This mission is just the beginning of NASA’s plans:
NASA is charged with returning humans to the Moon by 2024
By 2028, NASA plans to have established a sustained human presence on the Moon through its Artemis lunar exploration program
Weekly Feature: How AI contributes to poverty traps
Although credit scores have been used for decades to assess credit-worthiness, their scope is far greater now than ever before. Advances in AI have meant that risk-assessment tools consider vastly more data and increasingly affect whether you can buy a car, rent an apartment, or get a full-time job. The rapid adoption of these technologies means that algorithms now dictate which children enter foster care, which patients receive medical care, and which families get access to stable housing.
The problem:
Automated decision-making systems have created a web of interlocking traps for low-income individuals, and they disproportionately impact minorities. A bad credit score can cascade through other systems, creating a situation that is difficult, if not downright impossible, to escape.
Additionally, a primary issue with these programs is their lack of transparency: how data is used and how decisions are reached are seldom made public. The lack of public vetting also makes the systems more prone to error. Take, for example, what happened in Michigan six years ago:
After an intense effort to automate the state’s unemployment benefits system, the algorithm incorrectly flagged over 34,000 people for fraud.
This led to devastating losses of benefits, bankruptcies, and, tragically, suicides.
Fighting Back
A growing group of civil lawyers is organizing around this topic. Michele Gilman, a fellow at the Data & Society Research Institute, authored a report outlining the various algorithms that poverty lawyers might encounter. Gilman’s aim is to bring more public scrutiny and regulation to the hidden web of algorithms that poverty lawyers’ clients face. “In some cases, it probably should just be shut down because there’s no way to make it equitable,” she says.
Written by: Molly Pribble and Lex Verb