Welcome to the 38th edition of Hold the Code. In this edition, we touch on the use of Clearview AI's facial recognition technology in the Ukraine conflict and inequalities AI can exacerbate in the economy. Our weekly feature dives into how AI plays a part in colonialism and global power imbalance.
Happy reading!
Facial Recognition in Ukraine
Last month, a leading facial recognition company, Clearview AI, gave its technology to the Ukrainian government. To build its database of faces, Clearview controversially scrapes publicly available images from platforms like Instagram and Facebook; even people in the background of photos are added to the database. The company’s CEO and founder, Hoan Ton-That, has called it “a search engine for faces”: unlike search engines like Google, which take strings of text as input, it takes an image of a face and returns matching images.
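As a rough illustration of how a “search engine for faces” can work under the hood, the sketch below indexes each face as an embedding vector and matches a query image by cosine similarity. Clearview’s internals are not public; the encoder here is faked with random vectors, whereas real systems use a deep-learning face encoder to produce the embeddings.

```python
import numpy as np

# Illustrative sketch only: stand-in random vectors play the role of
# face embeddings that a trained face-recognition model would produce.
rng = np.random.default_rng(0)

def make_database(n_faces: int, dim: int = 128) -> np.ndarray:
    """Stand-in for a scraped database: one unit vector per face."""
    vecs = rng.normal(size=(n_faces, dim))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def search(query: np.ndarray, database: np.ndarray, threshold: float = 0.6) -> np.ndarray:
    """Return indices of faces whose cosine similarity to the query
    exceeds the threshold, best match first."""
    query = query / np.linalg.norm(query)
    sims = database @ query              # cosine similarity (unit vectors)
    hits = np.where(sims > threshold)[0]
    return hits[np.argsort(sims[hits])[::-1]]

db = make_database(10_000)
probe = db[42] + rng.normal(scale=0.05, size=128)  # "noisy photo" of face 42
print(search(probe, db)[:1])  # [42]
```

The threshold is the key knob: set it too low and unrelated faces match; set it too high and genuine matches are missed, which is the accuracy trade-off discussed below.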
How Ukraine is using facial recognition
Clearview is being used to identify both the living and the dead. In Kharkiv, a city in northeastern Ukraine, authorities identified a dead body by taking a picture of the face and scanning it against Clearview’s database: there was a match. The same technology is also being used by the Ukrainian government at checkpoints to identify potential enemies.
Previously, Clearview was used primarily by US law enforcement; Ton-That has claimed that over 3,200 government agencies have bought or tried its technology. In Ukraine, Ton-That saw another application for Clearview:
“We saw images of people who were prisoners of war and fleeing situations, and you know, it got us thinking that this could potentially be a technology that could be useful for identification, and also verification”
Should we be worried?
One issue with facial recognition is accuracy. Hoan Ton-That claims that Clearview is over 99% accurate, but even a small error rate adds up across thousands of searches. Every decision based on a facial recognition match carries the risk of a false positive. To what extent, then, should high-stakes decisions, such as identifying enemies in wartime, rely on facial recognition results?
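To make the scale concrete, here is a quick back-of-the-envelope calculation. The flat 1% per-search error rate and the 10,000 independent searches are purely illustrative assumptions; neither figure comes from Clearview.

```python
# Back-of-the-envelope arithmetic: what a "99% accurate" system
# implies at scale. Real error rates vary with image quality and
# across demographic groups.
accuracy = 0.99
searches = 10_000              # e.g. checkpoint scans over time

# Expected number of misidentifications across all searches.
expected_errors = searches * (1 - accuracy)
print(round(expected_errors))  # 100

# Probability that at least one search returns a wrong match.
p_at_least_one = 1 - accuracy ** searches
print(p_at_least_one > 0.999)  # True
```

Under these assumptions, roughly a hundred misidentifications would be expected, and at least one error is a near certainty, which is why the stakes of each individual decision matter so much.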
There is also an ethical issue with facial recognition: these databases seem to violate individual sovereignty. Even a person who has never had a social media profile, and thinks they've wiped the internet clean of their image, can still be found through images that others have posted. Is it right for companies like Clearview to use your images without consent? Last year, Clearview was fined by the UK’s Information Commissioner’s Office for failing to inform people that it was collecting their images from social media. The company has also received cease-and-desist letters from Facebook, YouTube, Google, and Twitter.
Furthermore, facial recognition could strengthen authoritarian governments around the world. China has drawn up plans to use facial recognition technology to identify journalists, a potential reinforcement for censorship and attacks on free speech. Clearview has said it opposes working with authoritarian governments like China’s and Russia’s, but that does not rule out another facial recognition firm selling its product to such a government.
The AI Inequality Issue
As new technologies are developed, especially in the AI sector, many of these advances mean good things for businesses. But do those good things extend to the rest of the economy?
Trouble in Silicon Valley
In the past, new technologies created numerous higher-paying jobs and spread prosperity to most people in the country. Now we are seeing just the opposite: instead of tech jobs spreading across America, regions like Silicon Valley and Seattle have become hubs of development that leave the rest of the country lagging behind.
The people in control of the new tech are also reaping a disproportionate share of the benefits. With automation skyrocketing and workers’ wages plummeting, it’s no wonder that people’s attitudes toward technology have become distrustful, even hostile.
The Human as a Robot
Erik Brynjolfsson, director of the Stanford Digital Economy Lab, believes one of the biggest drivers of the disparity between the middle class and the 1 percent, at least as it relates to new AI technologies, is the goal most developers share: building machines that replicate human capabilities. Rather than developing technologies that simply replace humans, he argues, a better approach for the economy would be to focus on extending human abilities.
For example, we have the development in recent years of:
AI that can drive your car for you
Facial recognition software that rivals a human’s ability
Compared to:
AI that can help doctors give more accurate diagnoses to patients
Technology that facilitates teachers’ lesson plan development based on each student
In the first group, AI attempts to replace humans and risks eliminating many jobs. The second group, instead of replacing human workers, extends their abilities. Herein lies the key to taking advantage of new technology in a way that benefits everyone.
Weekly Feature: A New Colonial World Order
A new MIT Technology Review series called AI Colonialism investigates the parallels between AI development and European colonialism. Although the AI industry doesn’t aim to capture land, nor does it rely on mass-scale slavery, it has developed new methods of exploiting cheap labor. The series takes a closer look at the development of this “new colonial world order.”
What does the series talk about?
The first part examines how AI surveillance tools in South Africa, which extract people’s faces and behaviors, are “re-entrenching racial hierarchies and fueling a digital apartheid.”
Part two discusses AI data-labeling firms in Venezuela that are building a new model of labor exploitation by recruiting cheap workers during a time of economic crisis.
Part three looks at ride-hailing drivers in Indonesia who are learning to challenge algorithmic control by building community power.
The series concludes by examining how an Indigenous couple from a rural town in New Zealand is fighting to regain control of their community’s data in order to revitalize the Māori language.
Karen Hao, senior AI editor at MIT Technology Review, writes, “Together, the stories reveal how AI is impoverishing the communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more—a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.”
What’s the purpose of the series?
Artificial intelligence and colonialism may seem like unrelated topics, but the ultimate purpose of the series is to shed light on the lesser-known, sinister impacts of AI. Hao writes that we must acknowledge the obstacles and limitations of AI before we can discuss its benefits.
Where can I learn more?
Hao talks about a “new generation of scholars [that] is championing a ‘decolonial AI’ to return power from the Global North back to the Global South, from Silicon Valley back to the people.” You can read the full MIT Technology Review series here to get a better sense of what “decolonial AI” might look like.
Love HTC? ❤️
Follow RAISO (our parent org) on social media for more updates, discussions, and events!
Instagram: @Raisogram
Twitter: @raisotweets
RAISO Website: https://www.raiso.org
Written by Jake Connell, Hope McKnight, and Ian Lei
Edited by Dwayne Morgan