Hello, World!
Hold the Code is officially one year old! Thank you for joining us on our mission to increase awareness of the impacts of AI, promote technical literacy, and make information on computing technology more accessible.
In our 30th edition, we discuss how voice assistants use social cues to gain our trust and how AI could be revolutionizing military medicine in the near future. Our weekly feature is a bit technical in this edition and explains a new technique to verify that image classification algorithms are working correctly.
Happy reading!
Our New Friends: The Voice-User Interfaces
Social cues are important for everyone. The tiny, almost imperceptible changes we make when talking to someone are crucial to how we interact with others. But social cues aren't just important for us high-minded humans. For some voice-user interfaces like Alexa or Google Home, it's a huge benefit to be able to exhibit some social cues of their own.
A Human Touch
Now, these aren't the terrifying humanoid robot social cues of sci-fi movies, where robots blend into the populace and extract information from our minds in order to wreak havoc and take over the world. Little things, like moving around to focus on the person speaking or simply having a human name, make voice-user interfaces more trustworthy and personable. To put it simply, people tend to like things that are able to act like people. People will even be more sociable with each other, making side comments or glancing at one another, if they're around a voice-user interface that acts more like a human than a robot.
Transparency or Trust?
If an interface has a name or a wake word that isn't associated with the company that made it (e.g., instead of "Hey Google," something like "Hey Claire"), users tend to trust it more. But does this negatively affect the transparency of the whole exchange? Wouldn't it be a good idea to know which company made the interface? Are voice-user interfaces even a good idea? The tradeoffs between trust and transparency raise lots of questions.
Hey, Alexa, what do you think?
Predicting Medicine
With AI revolutionizing every field it touches, it is no surprise that medicine could be next, especially now that the US military is dead set on making that happen. In 2018, the Pentagon established the Joint Artificial Intelligence Center (JAIC) with one goal: increase the US's use of machine learning in all aspects of military functioning. In the world of national interests, it is increasingly considered common sense that the nations that fail to properly adapt AI to their needs will be the ones that fall behind in every aspect of competition.
What does the JAIC do?
While much of the JAIC's work involves applying machine learning to fields like cybersecurity and resource allocation, another major area of focus is military medical care. Under the JAIC, the Pentagon is looking to use its vast stores of patient data to improve care and preventative measures for everything from COVID-19 complications to mental health concerns. In one of its largest projects, a machine learning algorithm is currently being trained on data including 55 million tissue specimens and 850 million medical imaging slides to detect early cancers with greater accuracy than a human could.
Does this benefit civilians too?
While technology coming from the Department of Defense and aimed at soldiers may at first seem to benefit primarily military and war efforts, there is also hope that it will soon become widespread. Dr. Hassan Tetteh, head of the JAIC's health mission, points out that many of the most important medical advances in history have been driven by military necessity, including blood transfusions developed on the battlefields of the Civil War and the mass inoculation of armies during the American Revolution. If AI-assisted patient care becomes a reality within the military, it will likely soon become a benefit to us all.
With people needing complex medical help every day, and modern medicine limited in its ability to meet those needs, the addition of machine learning to medical treatment could help save countless lives. Although there are concerns about removing the human element from patient care, the JAIC's algorithms and similar advancements represent hope for a better medical future.
Weekly Feature: Featuring Features
If I asked you how to tell a robin and a penguin apart, you might tell me to compare the robin's red breast and twig-like legs against the penguin's black-and-white coloring and webbed feet. If I asked an AI the same question, the answer might not be the same. Depending on how it was trained, the AI may have learned to look for different (and potentially incorrect) features to identify a robin or a penguin. So how do we know which features the AI is looking for? And how do we verify that these features are accurate characteristics and not false correlations?
What is Feature Attribution?
In image classification, feature attribution refers to techniques researchers use to determine which parts of an image matter most to the AI's prediction. Image classification algorithms treat each pixel in an image as one feature. Once the algorithm is trained, feature attribution can identify which pixels it weighs most heavily when classifying an image.
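To make this concrete, here is a minimal sketch of one common attribution technique, a gradient-based saliency map, written in Python with PyTorch. The pretrained ResNet-18 is just a stand-in for any trained classifier, and "bird.jpg" is a placeholder image:

```python
# A gradient-based saliency map: score the top predicted class, then ask
# which input pixels that score is most sensitive to.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights="IMAGENET1K_V1")  # any trained classifier works
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("bird.jpg")).unsqueeze(0)  # placeholder image
image.requires_grad_()  # track gradients with respect to every pixel

scores = model(image)
top_class = scores.argmax()
scores[0, top_class].backward()  # gradient of the winning score w.r.t. pixels

# A large gradient magnitude means the pixel had a big influence on the prediction.
saliency = image.grad.abs().max(dim=1).values  # one importance value per pixel
```

Plotting the saliency values as a heatmap over the original image shows which regions drove the prediction.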
When Features Fail…
So feature attribution tells us what features the AI looks for when classifying an image? Well… sometimes. Research has shown that feature attribution methods may not be as accurate as we think and can overlook false correlations that an image classification AI may have picked up on.
For example, our AI may have learned that penguins only appear in snow. Rather than learning any features of what penguins actually look like, it may be relying on the white background to tell a penguin from a robin. Such a model might then misclassify a robin photographed against a snowy background as a penguin. Traditional feature attribution methods may fail to catch this correlation, so how do we know if our AI is working properly?
Fixing the Features
Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a method to validate whether an AI has learned appropriate features, going beyond what traditional feature attribution can do. The method works by altering a subset of the training images and observing how the model adapts.
In our robin-versus-penguin example, it might look something like this:
1. We add a watermark to all penguin images in our training dataset and leave the robin images unedited.
2. We re-train our model with this new data.
3. If the model has learned appropriate correlations, we should see that it now uses the watermark pixels to distinguish between a penguin and a robin. If it has not, we will see that it is still using the snowy, white background as the primary characteristic of a penguin, and we would probably need to redesign our algorithm.
By checking how our AI reacts to these modifications, we can verify that it is not relying on false correlations between image classes.
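As a toy illustration of this check (a simplified sketch, not the CSAIL team's actual code), the snippet below shrinks each "image" to an 8x8 grid and swaps the deep network for a logistic regression. Background brightness plays the role of the snowy background, and a single fixed pixel plays the role of the watermark:

```python
# Toy version of the watermark check: penguins get bright (snowy)
# backgrounds, robins get dark ones, and we watermark penguins only
# before re-training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_images(n, bright_background):
    # 8x8 grayscale images flattened to 64 pixel features; the background
    # brightness is only loosely tied to the class, like real snow.
    base = 0.7 if bright_background else 0.3
    return np.clip(base + 0.3 * rng.standard_normal((n, 64)), 0.0, 1.0)

penguins = make_images(200, bright_background=True)
robins = make_images(200, bright_background=False)

# Step 1: watermark the penguins only (pixel 0 set to full brightness).
penguins[:, 0] = 1.0

# Step 2: re-train the model on the modified dataset.
X = np.vstack([penguins, robins])
y = np.array([1] * 200 + [0] * 200)  # 1 = penguin, 0 = robin
model = LogisticRegression(max_iter=1000).fit(X, y)

# Step 3: inspect which pixels carry the weight.
weights = np.abs(model.coef_[0])
print(f"watermark pixel weight:    {weights[0]:.2f}")
print(f"average background weight: {weights[1:].mean():.2f}")
```

If the watermark pixel's weight dominates, the re-trained model adapted to the new, perfectly predictive cue; if the background weights still dominate, the model is leaning on the old spurious correlation and likely needs a redesign.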
Why does this matter?
While a model that confuses robins and penguins may seem trivial, there can be serious consequences when image classification goes awry in higher-stakes scenarios.
Image classification has many applications in medicine, where it is often used as a tool for diagnosing injuries and illnesses from X-rays or CT scans. If these AIs have learned false correlations, the consequences can be severe: missed or incorrect diagnoses for real patients. Techniques like this one for identifying faulty features therefore have real benefits wherever image classification is applied. To learn more about the research being done on this feature-finding technique, read the full article here.
(P.S. Is anyone else sick of the word “feature” yet?)
Written by Arielle Michelman, Hope McKnight, and Molly Pribble
Edited by Molly Pribble