Hold the code #6



As AI continues to shape our world, your friends at Hold The Code are grateful for the opportunity to share relevant news and ideas with you each week. To our Northwestern readers, we hope finals go well, no invasive test proctoring software is used (see our 3rd edition for more), and that you enjoy a safe, restful spring break.

We're eager to continue expanding our diverse, growing community here at RAISO. Please consider sharing this newsletter, joining our Slack channel, and checking out our website if you haven't done so.

Sign Up Here

Nowadays, using your Alexa to play music, check the weather, or order Tide Pods is only a few voice commands away.

But for a global population roughly 25% larger than that of the US, technological innovation has been slow.

The hard-of-hearing community has seen a relatively slow rollout of innovation compared to their hearing peers. Luckily for this population, change seems to be on the way.

The problem: a lack of data and interfaces

Building computing solutions for deaf people is no easy task.

For starters, getting computers to understand sign language is still a non-trivial task. Like spoken and written languages, sign languages have their own grammars, idioms, and dialects. Subtleties of everyday use can also fly under the radar without robust sources of data.

That leads to the second problem: data is harder to collect. A spoken-language corpus can contain a billion words from as many as 1,000 distinct speakers, while an equivalent sign-language data set might have fewer than 100,000 signs from only ten people. This is in part because sign languages depend on facial expressions as well as hand shapes, which makes the data harder to capture.

And, currently, the best solutions for a deaf person interfacing with a computer involve motion-tracking gloves and multiple cameras. Although progress is being made, this remains a burden.

Who is working on this?

One company dedicated to creating interfaces for the deaf is SignAll, a Hungarian firm developing software that can recognize American Sign Language (ASL). Although their solution still relies on the gloves mentioned earlier and multiple cameras, and runs slower than normal conversation, the company aims to bring its technology to real-time, gloveless, single-camera use.

Additionally, both SignAll and Microsoft are building their own corpora of sign-language data to supply the reliable information these kinds of solutions require. The CEO of SignAll, Dr. Zsolt Robotka, emphasizes the need to prioritize translating sign language into text and speech, a step toward their goal of widespread two-way communication between humans and computers using sign language.

Could Machines and People Experience Mutual Love?

Tinder has always failed you. You haven’t left your apartment in 13 months and forgot how to speak to other people. Summer’s coming; the vaccine is here; you’re looking for that special someone…

Look no further: introducing Lovot—a “cuddly robot” that resembles a Teletubby, has 50 body sensors, 3 cameras, comes in various colors, and whose “only purpose is to be loved.”

(We do consider ourselves matchmakers here at “Hold The Code.”)

The demand for robot companions is at an all-time high, and new products like Lovot are being developed every year. Maybe this sounds a bit like the 2013 Academy Award-nominated film Her, which tells the science-fiction story of a middle-aged man who falls in love with an artificially intelligent program.

But this is no fiction.

Take, for example, Affectiva, an AI company that grew out of MIT’s Media Lab. They say they have the biggest emotion data repository in the world: over 9 million analyzed faces across 90 different countries. Here's how it works:

  • Machine learning algorithms analyze different facial expressions and map them to emotions.

  • According to Affectiva’s website, their mission is to “humanize technology with artificial emotional intelligence.”

  • This is made possible through deep learning and artificial neural networks: algorithms trained on diverse, unstructured data sets can then be embedded into apps and devices as “emotional intelligence.”
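To make the pipeline in the bullets above concrete, here is a minimal sketch of mapping facial features to an emotion label. Affectiva's actual models are deep neural networks trained on millions of faces; the "facial action unit" features, centroid values, and nearest-centroid classifier below are all hypothetical stand-ins for illustration only.

```python
# Toy sketch of a facial-expression-to-emotion mapping.
# Real systems learn this from millions of labeled faces; here we use
# hypothetical hand-crafted features and a nearest-centroid classifier.
from math import dist

# Hypothetical per-emotion centroids over three made-up facial features:
# (brow_raise, lip_corner_pull, jaw_drop), each scaled 0..1.
EMOTION_CENTROIDS = {
    "joy":      (0.2, 0.9, 0.3),
    "surprise": (0.9, 0.1, 0.8),
    "neutral":  (0.1, 0.1, 0.1),
}

def classify_emotion(features):
    """Return the emotion whose centroid is closest to the feature vector."""
    return min(EMOTION_CENTROIDS, key=lambda e: dist(features, EMOTION_CENTROIDS[e]))

print(classify_emotion((0.15, 0.85, 0.25)))  # closest to the "joy" centroid
```

A production system would extract the features from camera frames and learn the decision boundaries from data rather than hard-coding them.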

However, there’s a leap between making machines capable of producing emotionally intelligent behavior and actually finding ways for humans to develop mutual feelings. According to Olivia Chi of UBTech, the producer of the AlphaMini biped robot, the quality of the technology, coupled with cultural and age factors, ultimately determines the depth of human interaction. As the technology advances, she predicts additional functionality will give the products greater realism, making it more likely that the user will accept the robot as a companion.

Your own lab-built best friend

The success of these robot devices, according to Tufts professor Daniel Dennett, hinges on their anthropomorphism. They can be whatever we want—whatever we need—them to be.

According to Dennett, “we can project our feelings on and into our companion robot, and they seem to read our feelings, and project them right back.”

But if it sounds too good to be true…

A close friend might suspect you’re feeling down. What if your robot companion, using its sophisticated AI, detects you have depression and tells Facebook, causing Facebook to tailor your advertisements accordingly? Critics of the technology point to the private nature of human emotions: these types of devices can create what Professor Andrew McStay calls “360-degree surveillance.”

This technology also raises the broader, philosophical question: what is love, if it can be so easily replicated by a machine? Humans are thought to have a comparative advantage in emotional intelligence and human connection; if we can be outperformed by machines, then what do human beings really have to offer one another?

No Ads For Adaptive Fashion

Facebook’s ad-regulating algorithms seem to be flagging adaptive fashion ads for violating their policy. Ads like those from the company Mighty Well are consistently rejected by these algorithms when adaptive fashion companies attempt to advertise on Facebook.

What is Adaptive Fashion?

Adaptive fashion is a relatively new, but quickly growing, section of the fashion industry. Adaptive fashion brands create a wide range of products from patterned colostomy bags to “seated fit” undergarments designed for those who use a wheelchair.

Why is Facebook blocking these advertisements?

These ads are often flagged for the promotion of “medical and health care products and services” even though they are not promoting these items. Facebook’s ad algorithms are not able to adjust for the context of these ads and often flag them simply for featuring a model in a wheelchair or wearing a catheter sleeve.
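One way to see how a context-blind filter produces these false positives is with a toy keyword rule. Facebook's actual ad-review models are unpublished; the keyword list and `flag_ad` function below are hypothetical, meant only to illustrate how flagging on surface features alone misfires on adaptive fashion.

```python
# Toy illustration of context-blind ad flagging.
# A hypothetical "medical" keyword list stands in for a real review model.
MEDICAL_KEYWORDS = {"wheelchair", "catheter", "colostomy"}

def flag_ad(text):
    """Flag an ad if any 'medical' keyword appears, ignoring all context."""
    words = set(text.lower().split())
    return bool(words & MEDICAL_KEYWORDS)

# An adaptive-fashion ad gets flagged just for mentioning a wheelchair,
# even though it is selling clothing, not a medical product or service.
print(flag_ad("stylish seated-fit jeans designed for wheelchair users"))  # True
print(flag_ad("classic blue denim jeans"))                                # False
```

Real classifiers are far more sophisticated than a keyword set, but the failure mode is the same: when the model keys on surface features rather than what the ad is actually selling, whole categories of legitimate advertisers get shut out.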

The bigger picture

This issue is just another example of how implicit biases embedded in machine learning algorithms can be detrimental to marginalized communities. These systems often reflect our own assumptions, such as the notion that people with disabilities aren't interested in fashion, which can be encoded into algorithms that are then deployed at scale.

“It’s the untold story of the consequences of classification in machine learning,” said Kate Crawford, the author of the upcoming book, Atlas of AI. “Every classification system in machine learning contains a worldview. Every single one.”

Weekly Feature: NU's Tips for Poisoning Your Data

We are constantly creating data

In 2020, every person generated 1.7 megabytes of data in just 1 second (source).

Every time you send an email, like a Tweet, or stream a new TV show, you are creating data. And it’s really hard to stop.
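For a sense of scale, the per-second figure cited above compounds quickly. A quick back-of-the-envelope calculation (taking the 1.7 MB-per-second estimate at face value):

```python
# Back-of-the-envelope scale of the cited 1.7 MB-per-second figure.
MB_PER_SECOND = 1.7
SECONDS_PER_DAY = 60 * 60 * 24  # 86,400

mb_per_day = MB_PER_SECOND * SECONDS_PER_DAY  # 146,880 MB
print(f"~{mb_per_day / 1000:.0f} GB per person per day")
```

That works out to roughly 147 GB of data per person, per day.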

In 2019, Kashmir Hill, then a reporter for Gizmodo, famously tried to block out five major tech companies - Amazon, Facebook, Google, Microsoft, and Apple - first individually, then all at once (source). Over the following weeks, she realized the true extent of how intertwined these companies are in our daily lives. Want to watch a new movie or listen to new music? Netflix, Hulu, and Spotify all rely on Amazon Web Services (AWS) or Google Cloud to distribute their content. Want to push new updates to your GitHub repo? That’s owned by Microsoft.

Not only are we creating massive amounts of data, but we are also generating massive paychecks for Big Tech.

Every year, Google makes over $120 billion in ad revenue generated by your data.

And they don’t even send you a thank you card…

So what can we do about it?

A recent paper out of Northwestern University (Go ‘Cats!) suggests that we can use our data as a collective bargaining chip. These tech giants may have powerful algorithms at their disposal, but they are worthless without enough quality data to train them.

Ph.D. students Nicholas Vincent and Hanlin Li have proposed three ways in which the public can leverage their data against Big Tech:

  • Data strikes: This involves withholding or deleting data, whether by using privacy tools or by leaving a platform entirely, so a tech firm cannot use it.

  • Data poisoning: This method is our personal favorite and involves purposefully contributing meaningless or harmful data. For example, AdNauseam is a browser extension that "clicks" on every single ad that Google sends your way, confusing their advertising algorithms.

  • Conscious data contribution: This involves giving meaningful data to a competitor of a platform you want to protest. Think Tumblr over Facebook.
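The data-poisoning tactic above can be illustrated with a toy simulation of how random clicks, in the AdNauseam spirit, bury a genuine signal in a click-based interest profile. The profile format and interest categories below are hypothetical, purely for illustration.

```python
# Toy illustration of data poisoning: random "clicks" dilute the signal
# in a click-based interest profile, as AdNauseam-style tools aim to do.
import random
from collections import Counter

random.seed(0)  # make this sketch reproducible

# Hypothetical ad-topic categories a profiler might track.
INTERESTS = ["shoes", "travel", "fitness", "cooking", "gaming"]

def build_profile(real_clicks, noise_clicks):
    """Count clicks per topic; noise clicks are drawn uniformly at random."""
    clicks = list(real_clicks)
    clicks += [random.choice(INTERESTS) for _ in range(noise_clicks)]
    return Counter(clicks)

genuine = ["shoes"] * 20            # a user who genuinely clicks shoe ads
print(build_profile(genuine, 0))    # clean profile: all 20 clicks on "shoes"
print(build_profile(genuine, 200))  # poisoned: the real interest is buried
```

With enough uniform noise, the profiler can no longer tell which topic the user actually cares about, which is exactly the point.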

These methods have already seen some real-world success. In January, millions of WhatsApp users deleted their accounts when Facebook (the owner of WhatsApp) announced that WhatsApp data would be shared with the rest of the company. This mass exodus from WhatsApp eventually caused Facebook to delay its policy change.

Similarly, Google recently announced it would stop tracking individuals across the web with targeted advertising. While there is currently some skepticism about whether or not this is a meaningful change or another Google rebrand, Vincent thinks the increased use of tools like AdNauseam may have contributed to this outcome.

“AI systems are dependent on data. It’s just a fact about how they work,” Vincent says. “Ultimately, that is a way the public can gain power.”

Here at RAISO, we want to extend our congratulations to Nicholas Vincent and Hanlin Li for their exceptional work! If you’re interested in learning more about this research, you can read their full paper here.

Written by Molly Pribble, Lex Verb, and Mason Secky-Koebel