Hello!
Welcome to Hold the Code #22, our first edition since our summer hiatus!
In this edition, we cover AI that can smell, new possibilities for AI assistants at Facebook, and the impact of empathy in language technologies. Our weekly feature also discusses the possibility of an AI Bill of Rights.
Since our last full edition in June, we’ve added a few new writers to our team: Dwayne Morgan, Ian Lei, and Michelle Zhang. They join returning contributors Larina Chen, Molly Pribble, and Mason Secky-Koebel. Be on the lookout for stories from our newest team members in the coming weeks!
I Smell a Neural Net
If you give a neural network a nose, then you...aren’t really getting anywhere. But if you train a neural network to classify odors, it ends up matching biological olfactory connectivity surprisingly well.
Computing Odors
Although neuroscientists have deemed the olfactory system the “oldest” sensory system from an evolutionary perspective, it remains one of the least understood. Generally, odors — like your morning cup of coffee or your new roommate — interact with receptor neurons that line the interior of your nose. From there, some other neuroscientific witchcraft (or, processing) takes place until the smell reaches your brain.
This process can, in some ways, be described as computation: you take in some input, it is “processed” as it travels from your nose to your brain, and then your brain “classifies” it as good, bad, or even nostalgia-provoking.
Researchers recently put this to the test by training a neural network to classify both single and mixed odors. They found that the neural network, a machine learning method in which connected, artificial neurons “re-wire” themselves based on the task at hand, ended up matching the olfactory system’s underlying connectivity once trained.
Study Snapshot
Here’s a snapshot of the study:
The neural network mimicked a fruit fly’s olfactory system (olfactory systems are surprisingly similar across organisms)
It took the neural network only a few minutes to converge to a configuration that classified odors with high accuracy
The structure that the three-layer network took on was remarkably similar to the physical structure of the fruit fly’s olfactory system, implying that the system of flies, and perhaps other organisms like humans, is already close to optimal
Although neural networks are great at mimicking biological tasks, whether they can also be used to study the brain remains an open question. Studies like this make it sound plausible.
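The input-process-classify pipeline described above can be sketched with a toy three-layer network in pure Python. This is an illustrative sketch only, not the study’s actual model: the real network was modeled on fruit-fly olfactory circuitry, while the two “receptor” inputs and the simple is-the-target-odorant-present task below are hypothetical stand-ins.

```python
import math
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Hypothetical "receptor" activations: each input is [receptor_A, receptor_B].
# Label is 1 if the target odorant is present (either receptor fires) -- a
# deliberately simple stand-in for real odor classification.
DATA = [([0.0, 0.0], 0), ([0.0, 1.0], 1),
        ([1.0, 0.0], 1), ([1.0, 1.0], 1)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """A three-layer (input -> hidden -> output) network in pure Python."""

    def __init__(self, n_in=2, n_hidden=3):
        rnd = lambda: random.uniform(-1, 1)
        self.w1 = [[rnd() for _ in range(n_in)] for _ in range(n_hidden)]
        self.b1 = [rnd() for _ in range(n_hidden)]
        self.w2 = [rnd() for _ in range(n_hidden)]
        self.b2 = rnd()

    def forward(self, x):
        # Hidden layer, then a single sigmoid output unit.
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        self.out = sigmoid(sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)
        return self.out

    def train_step(self, x, target, lr=0.5):
        # One gradient-descent step on the squared error -- this is the
        # "re-wiring" of connections the article describes.
        out = self.forward(x)
        d_out = (out - target) * out * (1 - out)
        for j, h in enumerate(self.h):
            d_h = d_out * self.w2[j] * h * (1 - h)
            self.w2[j] -= lr * d_out * h
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * d_h * xi
            self.b1[j] -= lr * d_h
        self.b2 -= lr * d_out

net = TinyNet()
for _ in range(5000):
    for x, y in DATA:
        net.train_step(x, y)

predictions = [round(net.forward(x)) for x, _ in DATA]
print(predictions)  # converges to the labels: [0, 1, 1, 1]
```

The interesting result in the study was not that such a network can classify odors (it can, as above), but that the connectivity it converged to resembled the fly’s actual olfactory wiring.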
You can read more about the study here.
A Front-Row Seat for AI
Ever wanted a friend who perfectly understands you and gives you just the help you need? Facebook is leading a long-term AI project to turn this into a reality by integrating user-centric AI assistants into immersive devices such as augmented reality and virtual reality (AR/VR) glasses and headsets.
Current computer vision algorithms are trained to interpret images and videos from a third-person perspective, so the AI assistants built on them interact with the user like bystanders. The Ego4D project led by Facebook, however, approaches the challenge from a different point of view (literally!). The project uses 3,000+ hours of video footage shot from a first-person perspective to develop AI assistants that share the same egocentric perspective as the user. These assistants can “think” from the user’s perspective, making them better at solving the user’s egocentric problems (“Where did I put my keys?”).
The hope of Ego4D is for egocentric AI assistants + mixed reality devices to become as ubiquitous and useful in the future as smartphones are today. Critics of the project, however, voice privacy concerns – particularly how the project and Facebook will handle intimate first-person insights into our lives.
How do you feel about giving AI a front-row seat to your life?
A Language of Empathy
Where does empathy fit within technology?
Empathy within AI software has always been controversial, with movies and television shows depicting the inevitable “demise” of the human race at the hands of a machine capable of feeling. Even the plausibility of a system that can feel is widely questioned, but does that rule out any possibility of empathy in software?
Building connections
MIT Senior Rujul Gandhi, a Linguistics and Computer Science double major, reveals her perspective on the role of empathy within technology.
“Language is fundamentally a people thing; you can’t ignore the people when you’re designing technology that relates to language.”
Gandhi’s love for languages developed in her early childhood. At the age of 6, she encountered a book titled “What's Behind the Word?” that captivated her and fostered her fascination with language. Throughout her studies of various languages, Gandhi found technology to be effective in building connections across language barriers. She then began to study technology and language together, searching for ways to solve language problems through technology.
Gandhi has worked with Tarjimly, a nonprofit organization that connects refugees with interpreters through a smartphone application. Recognizing the potential of these translation systems, she is exploring the incorporation of sign language and other theoretical developments in linguistics that could improve communication.
You can read more about Rujul Gandhi and her work here.
Weekly Feature: Rethinking our Bill of Rights in the Age of AI
Language translation, facial recognition, and cancer-identifying algorithms are among the data-driven, AI-powered technologies that have drastically changed the world over the last ten years.
Despite the advances made in artificial intelligence, Eric Lander, director of the White House Office of Science and Technology Policy (OSTP), and his deputy Alondra Nelson warn of the dangers AI presents. Lander and Nelson title their op-ed, published in Wired, “Americans Need a Bill of Rights for an AI-powered World.”
Limitations of AI
AI is not perfect. The data used to train machines strongly affects how they perform, so flawed data sets are problematic, especially when they are not representative of American society. Flawed data sets have led to errors such as:
Virtual assistants failing to recognize Southern accents
Facial recognition systems responsible for discriminatory and wrongful arrests
Health care algorithms failing to identify the severity of kidney disease in African-American patients
Lander and Nelson write that machines can perpetuate systemic injustices if the data used is based on prior examples, resulting in sexist hiring tools and racist mortgage approval algorithms. Additionally, they raise concerns over the privacy of AI-powered technologies, calling for more transparency.
What do they want?
In the coming months, Lander and Nelson say the OSTP will work on developing a bill of rights to “guard against the powerful technologies we have created.” To that end, they are calling for a “public request for information” on biometrics: technologies that recognize and analyze faces, voices, heart rate, physical movements, and more. The office wants to hear from those who encounter biometric technologies in their daily lives, from data scientists to consumers. Lander and Nelson believe that gathering these varied perspectives will aid their efforts to develop the bill of rights.
What is an AI Bill of Rights?
In light of artificial intelligence’s limitations and the dangers they pose, such as harm to marginalized communities, Lander and Nelson argue that Americans need a “bill of rights” to guard against the powerful technologies we have created. This bill would clarify certain rights that data-driven technology must respect:
“Your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties”
“[Y]our freedom from being subjected to AI that hasn’t been carefully audited to ensure that it’s accurate, unbiased, and has been trained on sufficiently representative data sets”
“[Y]our freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace”
“[Y]our right to meaningful recourse if the use of an algorithm harms you.”
In practice, the “bill of rights” would require federal contractors to use technologies that abide by it and prevent the federal government from purchasing software that could violate these rights.
Lander and Nelson write that developing an AI bill of rights will be challenging, but ultimately essential for the nation.
“From its founding, America has been a work in progress—aspiring to values, recognizing shortcomings, and working to fix them. We should hold AI to this standard as well. It’s on all of us to ensure that data-driven technologies reflect, and respect, our democratic values.”
Written by Mason Secky-Koebel, Dwayne Morgan, Larina Chen, and Ian Lei
Edited by Molly Pribble