Hello!
Welcome to our 7th edition of Hold The Code. We're eager to share some interesting AI news, but first, we'd like to extend a few invitations:
This Thursday, March 25th, Northwestern Law & Engineering professor Daniel Linna is hosting a virtual panel called "Technology and the Courts: A Global Perspective." Panelists include legal experts from universities around the world, as well as Chief Justice Luis Henry Molina of the Dominican Republic's Supreme Court. Here's the link to register.
Monday, April 5th at 12 pm, Timnit Gebru (yes, that Timnit Gebru, the one featured in our past newsletters) is speaking at a Northwestern event called "The Hierarchy of Knowledge in Machine Learning and Related Fields and Its Consequences." You won't want to miss it.
Without further ado, here are this week's stories.
Finding Immigration Violations from Utility Bills
ICE offices have recently tapped into a once-private database containing millions of phone, electricity, and other utility records to find undocumented immigrants. The database, called CLEAR, holds 400 million names, addresses, and service records from more than 80 companies.
This is just another instance of government agencies exploiting commercial sources to supplement their surveillance efforts with information they are not authorized to collect on their own. ICE uses the database to pursue undocumented immigrants who may have stayed off the grid by avoiding activities like getting a driver’s license, but who cannot live without paying for utilities.
Who else uses CLEAR?
CLEAR, which is run by Thomson Reuters and based on data from Equifax, is also used by a number of other organizations including:
Police in Detroit
A credit union in California
A fraud investigator in the Midwest
Concerns surrounding ICE's use of CLEAR
“There needs to be a line drawn in defense of people’s basic dignity. And when the fear of deportation could endanger their ability to access these basic services, that line is being crossed,” says Nina Wang, a policy associate at Georgetown Law’s Center on Privacy & Technology. “It’s a massive betrayal of people’s trust. … When you sign up for electricity, you don’t expect them to send immigration agents to your front door.”
Internet Rent
No year has been more revealing than 2020 of how we are divided into two economies: the many, who have been struggling to make ends meet while trying to avoid a dangerous virus, and the few, who control the companies that are now an essential part of everyday life.
2020 by the numbers
According to tech critic Paris Marx:
Billionaires added $3.9 trillion to their wealth
Whereas workers globally lost $3.7 trillion in earnings
Monopoly money
In an economy that is shrinking for the many and growing for the few, the defining strategy of many digital platforms has been to become a monopoly at any cost. And this model has proved to be good business, as users often have no option but to use a specific platform.
The internet of landlords
This approach is best understood as an expansion of rentierism: owning property and extracting rent from those who live and work on it. This “Internet of Landlords” is transforming our social and economic interactions into services mediated by corporate platforms.
Think of what Amazon does for e-commerce, or what Google does for search and productivity tools. In our everyday lives, we are forced to deal with an ever-growing number of landlords (often without choice), constantly paying rent with our money and our data. By controlling the property required for productive and essential work and life activities, these companies hold tremendous power over the people who use their products.
Data control
Crafting policies that address data control could begin to remedy these issues. We can find inspiration in analogous policies like rent control and capital controls. By restricting the conditions and purposes under which data can be captured and used, we could begin to redistribute power away from these platforms in the digital economy.
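To make "restricting the conditions and purposes" concrete, here's a minimal sketch of purpose limitation in code: a toy record store that refuses any lookup whose declared purpose isn't on an allow-list. Everything here, from the UtilityRecords class to the purpose names, is hypothetical, not a real system or legal framework.

```python
# Hypothetical sketch of "purpose limitation": records can only be queried
# for purposes on an explicit allow-list. Not a real system or API.
ALLOWED_PURPOSES = {"billing", "service_delivery"}  # assumed policy choices

class UtilityRecords:
    """Toy data store that enforces a declared purpose on every lookup."""

    def __init__(self, records):
        self._records = records

    def lookup(self, name, purpose):
        # Refuse any query whose stated purpose isn't permitted.
        if purpose not in ALLOWED_PURPOSES:
            raise PermissionError(f"purpose '{purpose}' is not permitted")
        return self._records.get(name)

db = UtilityRecords({"alice": {"address": "123 Main St", "service": "electric"}})
print(db.lookup("alice", purpose="billing"))  # allowed
# db.lookup("alice", purpose="immigration_enforcement")  # raises PermissionError
```

The point is the shape of the rule, not the mechanism: the purpose of a query becomes part of the access decision, instead of something settled after the data has already been sold.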
Your AI-Based College Advisor Might Be Racist
More than 500 universities across the United States use Navigate, advising software made by the education research company EAB. The program uses a predictive model to recommend students for classes and majors, estimating the likelihood of student success from a range of variables.
But here’s the problem
Documents acquired by The Markup reveal that race is used as a predictor of student success in Navigate's model. In turn, there are large disparities in how the software treats students of different races: Black students are deemed "high risk" at quadruple the rate of their white peers, which means they're far less likely to be recommended for STEM classes and majors.
For instance: Black students made up less than 5 percent of UMass Amherst’s undergraduate student body, but they accounted for more than 14 percent of students deemed high risk for the fall 2020 semester, roughly three times their share of enrollment (14 / 5 ≈ 2.9).
Navigate’s racially influenced risk scores “reflect the underlying equity disparities that are already present on these campuses and have been for a long time,” says Ed Venit, who manages student success research for EAB.
So why is this software being used?
Don't act too surprised: it saves money. In fact, EAB has aggressively marketed its software as a "financial imperative." Student retention is a big concern for colleges, especially public universities. EAB points, for instance, to its integration at Georgia State University: since adopting the EAB program, Georgia State has increased degrees awarded by 83%.
A path forward
Here at "Hold The Code," we're averse to data sets that are, as in this case, explicitly biased. But this doesn't mean that AI can't be used productively to increase student retention. Eliminating factors like race, considering students holistically, and doing closer research on the actual causes of student drop-outs can illuminate a meaningful application of AI.
Weekly Feature: "How to Put Out Our Democracy's Dumpster Fire"
A recently published piece in The Atlantic calls for internet reform as the path toward salvaging our democracy. The premise of the argument is the idea that:
"An internet that promotes democratic values instead of destroying them—that makes conversation better instead of worse—lies within our grasp."
Here's how:
Our current social media landscape has eroded democratic values, the authors contend. We're living in what they describe as "a Tocquevillian nightmare" (a nod to Alexis de Tocqueville, who advocated for social discourse and engagement), but "instead of participating in civic organizations that give them a sense of community as well as practical experience in tolerance and consensus-building, Americans join internet mobs, in which they are submerged in the logic of the crowd, clicking Like or Share and then moving on."
In short: memes, lulz, and "ironic" bigotry have won the internet. With corporate America's help, conversations are now ruled by algorithms designed to capture attention, harvest data, and amplify the loudest, most radicalized voices.
According to the authors, in this type of internet wilderness, "democracy is impossible."
How we can reclaim it
Alternatives are possible; we know this because we have used them. Before private commercial platforms definitively took over, online public-interest projects briefly flourished.
For example, in 2002, Harvard professor Lawrence Lessig led the movement to build the Creative Commons license, which allows creators to make their work freely available for others to use.
Or take Wikipedia: it's a glimpse into what the internet could've been. For all the mockery it receives, it's a not-for-profit, collaborative space where "disparate people follow a common set of norms as to what constitutes evidence and truth."
Another point The Atlantic writers make is one that warms the heart of your Hold The Code writers: algorithms can be used to promote better internet governance.
Nathan Matias, a scholar of AI ethics, observed that when users on Reddit worked together to promote news from reliable sources, the Reddit algorithm itself began to prioritize higher-quality content. In his own lab, Matias works on making digital technologies that serve the public, not just private companies. He reckons that if more such labs existed, a new generation of citizen-scientists could work with companies to understand how their algorithms function, find ways of holding them accountable if they refuse to cooperate, and experiment with fresh approaches to governing them.
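For a flavor of what that citizen-science auditing could look like, here's a minimal sketch: given a snapshot of a platform's front page in ranked order, measure what share of the top posts link to a curated list of reliable domains, and track the number over time. The domain list and the snapshot are hypothetical, and no real platform API is involved.

```python
# Hypothetical audit: what fraction of top-ranked posts link to reliable sources?
from urllib.parse import urlparse

RELIABLE_DOMAINS = {"apnews.com", "reuters.com"}  # assumed curated list

def reliable_share(ranked_urls, top_n=10):
    """Fraction of the top_n ranked posts whose domain is on the curated list."""
    top = ranked_urls[:top_n]
    if not top:
        return 0.0
    hits = sum(
        urlparse(url).netloc.removeprefix("www.") in RELIABLE_DOMAINS
        for url in top
    )
    return hits / len(top)

# Hypothetical front-page snapshot, in the platform's ranked order.
snapshot = [
    "https://www.reuters.com/article/abc",
    "https://example-blog.net/hot-take",
    "https://apnews.com/story/xyz",
]
print(f"{reliable_share(snapshot, top_n=3):.0%} of top posts cite reliable sources")
```

Repeated over time, a series of numbers like this is exactly the kind of evidence a citizen audit can bring to a conversation with a platform.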
Read the full essay here.
Written by Lex Verb and Molly Pribble