HTC is back!
For our first release of the Fall quarter, we cover AI’s role in eye disease screening as well as a crucial Supreme Court case! In addition, for our weekly feature, we dive into the age-old debate of whether machines can be sentient. Give it up for our new writers—Samarth Arul, Zoey Soh, and Nadia Bidarian—who wrote their first articles for HTC!
Happy reading! And as always, tell a friend about HTC if you enjoyed reading.
A-Eye Disease Screening
Source: HIT Consultant
For decades, a dilated eye exam with a licensed ophthalmologist or retina specialist was the only way to screen for diabetic retinopathy, a complication of diabetes that damages the retina and is the leading cause of blindness in American adults. Yet, according to CDC data, as many as 50% of patients either never get eye examinations or receive them too late for effective treatment. These trends may change, however, with the emergence of Eyenuk, a company that has developed an AI that screens for retinopathy with roughly 97% sensitivity.
How AI-Screening Works
As with many AI-based technologies, an algorithm is first “trained” on a large data set of images to detect a particular feature or condition. Eyenuk’s EyeArt AI program receives an image of the eye’s retina (a “fundus photograph”) and determines whether any signs of diabetic retinopathy are potentially present. The machine learning and deep learning technology used in EyeArt was developed with support from the National Institutes of Health (NIH).
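Eyenuk has not published its model internals, but the general recipe it describes, training an image classifier on labeled retinal photographs, can be sketched in a few lines. Here is a minimal, illustrative sketch in PyTorch; the folder layout, labels, and hyperparameters are hypothetical stand-ins, not Eyenuk’s actual pipeline.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Basic preprocessing for fundus photographs (illustrative values).
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: fundus_images/{healthy,retinopathy}/*.jpg
train_data = datasets.ImageFolder("fundus_images", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Fine-tune a pretrained CNN to output two classes (downloads weights).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # penalize wrong predictions
    loss.backward()
    optimizer.step()
```

A production system like EyeArt would of course add far more: curated clinical data sets, image-quality checks, and regulatory-grade validation.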
Currently, Eyenuk’s technology focuses on screening for signs of diabetic retinopathy, though the company notes that similar deep learning technology can be used for other diseases, such as glaucoma and macular degeneration, as well as potentially detecting signs of Alzheimer’s, elevated stroke risk, and cardiovascular disease through retinal scans.
How Effective is AI-Based Eye Screening?
In short, very effective. A study published in September found that the AI method achieved a sensitivity of approximately 97%, while retina specialists achieved a sensitivity of just 60%, and general ophthalmologists only 20%. As noted in the publication, the AI system could serve as a low-cost detection tool that could “help address the diabetic eye screening burden.” The company, which raised $26 million in Series A funding, now plans to bring this technology to a global scale, potentially giving millions of diabetics the opportunity to be screened for retinopathy.
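A note on the metric: sensitivity is the share of patients who truly have the disease that a screener correctly flags, which is what matters most in screening, where a missed case means delayed treatment. A quick back-of-the-envelope sketch, with made-up patient counts:

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Fraction of actual disease cases the screener correctly flags."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical screen of 100 patients who truly have retinopathy:
# a 97%-sensitive tool catches ~97 and misses ~3,
# while a 60%-sensitive exam misses ~40.
print(sensitivity(97, 3))   # 0.97
print(sensitivity(60, 40))  # 0.60
```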
Will the Supreme Court reshape the internet?
Source: Wikimedia Commons
The Supreme Court has decided to take up a case that will allow it to interpret Section 230, the controversial law that grants internet companies immunity from being sued over the content that users post on their platforms. The case, Gonzalez v. Google, was brought by the family of a student killed in a terrorist attack, who argue that YouTube’s algorithm can radicalize viewers by recommending content that incites violence.
What is Section 230?
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 U.S.C. § 230).
Essentially, Section 230 states that online intermediaries are not legally responsible for the content that users post on their sites. For example, if you’re the victim of defamation on Twitter, you can sue the person making the defamatory statements, but you cannot sue Twitter. The law was enacted to encourage the growth of the internet and is credited with enabling the rise of social media companies like Facebook and Twitter. However, the internet has changed a lot since then: these platforms no longer merely host content, they also actively promote it through their secretive recommendation algorithms. Whether Section 230 covers these algorithms and the way they promote content is a difficult legal question that has divided judges, and not along political lines.
Does Section 230 cover social media algorithms?
Gonzalez v. Google was brought by the family of Nohemi Gonzalez, a college student who was killed in the November 2015 Islamist terrorist attacks in Paris. The family argues that YouTube’s algorithms promoted Islamic State videos, which were the “central manner in which ISIS enlisted support and recruits,” to potentially interested viewers. In their view, online platforms forfeit their protections under Section 230 when they recommend content and target ads to their users.
The fate of the internet
Vox claims that, without Section 230’s protections, it’s “unlikely that social media sites would be financially viable,” as they could be sued every time a user posts a defamatory comment. With tech companies’ immunity under Section 230 now being challenged, the fate of social media companies remains unclear.
Sorry, experts: You may want AI to be sentient, but it’s not
Source: Wikimedia Commons
We humans have a bad habit: we tend to see humanity where it’s not. Whether it’s a dog flashing us puppy-dog eyes or a chatbot asking us automated questions, we trust the appearance of humanness quite easily.
Perhaps, suggests Cade Metz, a technology correspondent with The New York Times, too easily.
The Issue
Artificial intelligence, as Metz writes, is neither sentient nor conscious, two hallmarks of the human experience.
Sentience: The ability to experience feelings and sensations
Consciousness: Being awake and aware of your surroundings
While neither term is easily measured, even the most primitive of animals, like worms, have an awareness of their surrounding world. AI does not.
But engineers working in the field of AI, scholars who “live with one foot in the future,” have begun to convince themselves that the machines they have built are truly sentient.
“I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code,” said Google engineer Blake Lemoine.
Lemoine believed the machine he worked with was sentient, though his bosses did not. They later fired him.
Even those within the AI industry have noticed this alarming pattern among their colleagues: an inability to distinguish between their dreams for the future and the reality of today.
“There are lots of dudes in our industry who struggle to tell the difference between science fiction and real life,” said Andrew Feldman, founder of Cerebras, a company working to accelerate the progress of AI.
To fully understand the state of AI today amidst a sea of conflicting voices, it’s important to break down what AI can do, and what it can’t.
What AI Can Do
Generate tweets and blog posts
Recognize images and speech
Translate into different languages
Google Translate
Generate images
A new tool called DALL-E allows a user to input any short description, such as “astronaut riding a horse,” and the software will generate an image to match.
(Attempt to) hold a conversation
AI often spits out complete nonsense in conversation, but researchers say the technology is getting better at carrying its weight.
AI can learn skills by pinpointing patterns when given vast amounts of data. But it cannot freely think on its own.
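To make the first item on that list concrete, here is a minimal, hypothetical example of pattern-based text generation using the open-source Hugging Face transformers library and the small GPT-2 model, a stand-in for the far larger systems the article describes:

```python
from transformers import pipeline

# GPT-2 is a small, publicly available model; it illustrates the idea,
# not what any particular company ships today.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Breaking news from the world of AI:",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

The model simply continues the prompt by predicting likely next words from patterns in its training data; there is no understanding or intent behind the output.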
What AI Can’t Do
Emote
Converse like humans
Mimic humans reliably
“If you ask for 10 speeches in the voice of Donald J. Trump, it might give you five that sound remarkably like the former president–and five others that come nowhere close,” said Metz.
In fact, Alison Gopnik, a UC Berkeley professor who participates in an AI research group, said that in terms of intelligence, AI is somewhere “between a slime mold and my 2-year-old grandson.”
Not so scary now, huh?
Still, there is a real danger in AI researchers keeping their attention fixed on the future: it means ignoring the issues of today, namely that AI already has a frightening power to mislead.
The Importance of Now
Right now, this technology can generate blog posts and tweets and hold conversations on an increasingly realistic scale. Just as we tend to talk to dogs or cats as if they’re human, people tend to interact with this technology as though it’s human, too.
These developments open the door to disinformation on a massive scale: fake images or falsely attributed posts that can misinform a community right before a presidential election. Chatbots, too, can mimic human conversation and sway voters to one side of the aisle or the other.
The most pressing issue facing AI right now is not whether it’s sentient, or whether these machines have become as intelligent and free-willed as humans; we know the answer to that question is no. Instead, we need to brace for misinformation spreading at a scale that would be impossible without the help of new technology.
AI researchers cannot afford to live in the future any longer; there are enough issues facing the field of artificial intelligence in the present.
Love HTC? ❤️
Follow RAISO (our parent org) on social media for more updates, discussions, and events!
Instagram: @Raisogram
Twitter: @raisotweets
RAISO Website: https://www.raiso.org
Written by Samarth Arul, Zoey Soh, and Nadia Bidarian
Edited by Dwayne Morgan