Hello!
Welcome to Hold The Code, edition #14.
This week, we cover news about algorithmic lending tools, AI-powered poverty maps, how AI can be used to improve diversity in hiring, and, lastly, we review an interview with AI researcher Kate Crawford.
Thank you for reading, subscribing, and being part of our growing community.
Bias in AI Lending Algorithms
AI discrimination in lending and housing has been a problem ever since algorithms were first applied in this domain. Because of biases in their training data, these systems make decisions that systematically discriminate against people of color. Worse, the decisions they make are crucial ones: who has access to housing, who gets released on bail, who receives healthcare, and whether a homeowner qualifies to refinance a loan.
What we can do
The National Fair Housing Alliance argues that diversifying the AI workforce pipeline can decrease bias in financial institutions. Lisa Rice, the alliance’s President and CEO, also notes that the US could follow the lead of the European Union, which has already shown signs of limiting AI discrimination more aggressively.
However, these proposals are not without controversy. Anthony Gonzalez, a Republican congressman on the House Financial Services Committee, warns against over-regulating AI and the chilling effect regulation could have on the technology.
How AI-Predicted Poverty Maps Are Assisting COVID-19 Relief Efforts
The COVID-19 pandemic has disproportionately affected poor communities around the world, and as vaccination efforts have rolled out, reaching these communities has proven an added challenge. In many parts of the world, poverty datasets and maps of poor populations are out of date. The data is often accessible only to high-level institutions and is derived from censuses or surveys taken nearly a decade prior, making it difficult to pinpoint where poor populations live.
But over the past four years, Facebook’s Data for Good team, working with researchers at the University of California, Berkeley, has been using machine-learning predictions based on non-traditional data to develop poverty maps around the globe. Using indicators such as economic and health factors drawn from reliable household surveys in 56 countries, the team has developed a Relative Wealth Index (RWI) that predicts where impoverished populations are located.
The Data for Good team and UC Berkeley are now making the maps publicly available to assist in the equitable distribution of COVID-19 relief services and vaccinations. The project has already found success in Togo, where the government used the system to distribute $10 million to 100,000 individuals living in poverty. By making the RWI public, the hope is to reach vulnerable communities around the world and help mitigate the impacts of COVID-19 and public health issues beyond the pandemic.
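For readers curious what "machine-learning predictions based on non-traditional data" can look like in practice, here is a minimal, self-contained sketch of the general idea: training a regression model on survey-derived indicators to score a relative wealth index. The synthetic data, feature names, and choice of a gradient-boosted model are our own illustrative assumptions, not the Data for Good team’s actual pipeline.

```python
# Minimal sketch: predicting a relative wealth index from
# non-traditional indicators, in the spirit of the RWI work.
# The features and synthetic data here are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for survey-derived indicators (economic and health factors).
n = 1000
X = rng.normal(size=(n, 3))  # e.g. connectivity, infrastructure, health access
true_weights = np.array([0.6, 0.3, 0.1])
y = X @ true_weights + rng.normal(scale=0.2, size=n)  # synthetic wealth index

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")

# A model trained this way can then score regions with no recent
# survey coverage, yielding a map of predicted relative wealth.
```

The key design idea is the same one the newsletter describes: learn the relationship between wealth and widely available proxy indicators where good survey data exists, then apply it where it doesn’t.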
Can AI Solve Unconscious Bias in the Recruitment Process?
Unconscious bias has been at the forefront of diversity, equity, and inclusion (DE&I) initiatives in recent years, especially when you consider that across the S&P 500 companies, 85% of top executive ranks and 64% of entry-level roles are held by white workers. But according to Steve Jiang, CEO and Co-Founder of Hiretual, an AI-powered candidate search and data hub, using AI in the recruitment process can break down the barriers many companies face in their DE&I efforts.
In short, company anti-bias training can’t erase the long-term unconscious biases that affect immediate DE&I recruitment efforts. So where can AI help?
Communication: AI can assist in removing non-inclusive language seen in job descriptions, interview questions, and outreach emails.
Targeted searches: AI can automate searches for candidates from underrepresented groups, optimizing the sourcing process without compromising job-specific qualifications.
Equitable screening process: AI tech can hide names, images, and other bias-prone information from applicant profiles, letting recruiters focus solely on candidates' qualifications (see the sketch below).
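To make that last idea concrete, here is a minimal sketch of blind screening: stripping bias-prone fields from a profile before a recruiter reviews it. The field names and the `redact_profile` helper are hypothetical illustrations, not Hiretual’s actual API or data model.

```python
# Minimal sketch of "blind" screening: removing bias-prone fields
# from an applicant profile before a recruiter sees it.
BIAS_PRONE_FIELDS = {"name", "photo_url", "age", "gender", "address"}

def redact_profile(profile: dict) -> dict:
    """Return a copy of the profile with bias-prone fields removed."""
    return {k: v for k, v in profile.items() if k not in BIAS_PRONE_FIELDS}

applicant = {
    "name": "Jane Doe",
    "photo_url": "https://example.com/jane.jpg",
    "skills": ["Python", "SQL"],
    "years_experience": 6,
}
print(redact_profile(applicant))
# {'skills': ['Python', 'SQL'], 'years_experience': 6}
```

Real screening tools go further (redacting names inside free-text résumés, for instance), but the principle is the same: the reviewer only sees job-relevant signals.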
While long-term inclusion initiatives remain the responsibility of a company and its employees, AI can help start the process. But a bigger question remains: what happens if the AI developers themselves have unconscious biases? In the end, it’s still up to humans to make the world a more equitable place.
Weekly Feature: "Artificial Intelligence is Neither Artificial nor Intelligent"
A recent Wired interview with Microsoft researcher and USC professor Kate Crawford sheds light on some of the greatest misconceptions about AI.
What do people think AI is?
According to Crawford, AI tends to be presented as an "ethereal and objective way of making decisions, something that we can plug into everything from teaching kids to deciding who gets bail." Going back to 1956, when the field was first conceived, Crawford believes there was "a sort of original sin in the field": a widespread, false belief that computers were fully analogous to human minds.
She goes on to say that "AI is not intelligent in any kind of human intelligence way. It’s not able to discern things without extensive human training, and it has a completely different statistical logic for how meaning is made." There is a danger, Crawford thinks, in assuming AI has more capabilities than it does.
She says that even the name — artificial intelligence — is deceptive: "AI is neither artificial nor intelligent."
So, then, what is AI?
AI is indisputably hard to define, and Crawford doesn't shy away from that point. She frames her argument mainly around the idea that it's important to understand what AI is not. She offers the critical notion that artificial intelligence is, above all, a tool.
She says: "Statistical prediction is incredibly useful; so is an Excel spreadsheet. But it comes with its own logic, its own politics, its own ideologies that people are rarely made aware of."
The future
Crawford is optimistic that, with more time, greater regulation, and the emergence of "a new coalition of activists and researchers, cognizant of the interrelatedness of capitalism and computation," more ethical understandings of AI will follow.
Our thoughts
Things we agreed with: too many people lack a complete understanding of what AI is — and, even more crucially, what it's not.
Things we disagreed with: downplaying the power of AI felt inherent to Crawford's argument. Yes, we think AI is, at its core, "a tool." But it's also more than that.
Week after week, our newsletter shows that AI is changing every industry, affecting complex, global problems, and even influencing the way we view and understand human knowledge — its capacities and limitations alike.
Read the full interview here, and stay tuned for Crawford's upcoming book, Atlas of AI.
Written by Sophie Lamb, Molly Pribble, and Lex Verb