Hello, World!
RAISO just held a discussion at NU about therapy bots, and Hold the Code is following the trend! Our weekly feature looks at how AI can detect signs of mental illness and prompt patients to take action. This edition also features stories about AI's abilities in competitive coding and the implications of integrating it into autonomous vehicles.
Happy reading!
AlphaCode: Could it replace programmers?
AlphaCode, a new AI from DeepMind (Google’s AI firm), is now able to code at a competitive level. In a recent test, the system placed in the top 54% of human competitive coders.
What is Competitive Coding?
Coding competitions test a programmer's ability to conceptualize, implement, and debug a solution to a given challenge. Questions, usually posed in written natural language, can cover topics ranging from graphs and trees to combinatorics and recursive algorithms. Submissions are then judged primarily on correctness, though other factors such as time and space complexity are sometimes considered as well.
Out-Coding the Code
In a series of ten coding challenges (curated by the competition organizer Codeforces), AlphaCode was judged to be roughly on par with an average competitive coder. The challenges given to the AI were written in natural language and covered topics in theoretical computer science and algorithms.
“I can safely say the results of AlphaCode exceeded my expectations…AlphaCode managed to perform at the level of a promising new competitor,” said Codeforces founder Mike Mirzayanov.
Potential Applications & Limitations
While the results of AlphaCode are promising, there is a long way to go before AI could replace human programmers. AI systems are often criticized for producing buggy code. Additionally, these AIs are often trained on publicly available code, which sometimes causes them to reproduce copyrighted material in their solutions.
Before these systems can produce complex programs on their own, they are more likely to be adopted as coding assistants that offer autocomplete suggestions to human programmers (like GitHub Copilot). Still, the rise of AlphaCode and similar systems points to the potential of AI in programming:
“In the longer-term, we’re excited by [AlphaCode’s] potential for helping programmers and non-programmers write code, improving productivity or creating new ways of making software,” said Oriol Vinyals, a principal research scientist on AlphaCode at DeepMind.
When Autopilot Fails…
With artificial intelligence playing roles in increasingly risky activities, a new question is starting to arise: if a computer messes up, whose fault is it? This question has recently become more than a hypothetical, as a driver in a 2019 fatal car crash has been charged with two counts of vehicular manslaughter, despite using Tesla’s autopilot at the time of the crash.
What is the precedent?
That's just it: there is none. This is the first instance in the US of a person being charged with a felony for a motor accident involving a partially automated vehicle. The driver was using a 2016 Tesla Model S equipped with Autopilot, one of the most widely used driver-assistance systems on the market, with an estimated 765,000 Tesla vehicles carrying it in the US alone. In this instance, however, instead of safely guiding the vehicle according to traffic regulations, Autopilot took the car through a red light, where it struck another vehicle crossing the intersection.
The fact that this is the first instance of felony charges doesn't mean the accident comes as a surprise. The National Transportation Safety Board's investigation into the use of Autopilot concluded that automated driving systems have been widely misused, creating overconfidence and inattention in drivers, and had already led to multiple crashes before this one. In 2018, the board stated that Autopilot "permitted the driver to disengage from the driving task," creating dangerous road conditions.
In total, Autopilot has been implicated in 26 crashes since 2016.
If autopilot fails, whose fault is it?
The National Highway Traffic Safety Administration has taken the steadfast stance that drivers of all vehicles, automated or not, must remain vigilant and prepared to respond at any moment. Tesla has similarly maintained that its Autopilot system is not meant to drive the car fully on its own. Not everyone is convinced that Tesla's hands are clean in all of this, however. USC law professor Bryant Walker Smith is of the opinion that Tesla could be "criminally, civilly, or morally culpable" for putting these new, easily abused technologies into consumers' hands in the first place.
While the waters are still murky as to who is responsible for accidents like these, one thing all parties can agree on is that, at least for now, fully self-driving cars have not yet crossed the line from fantasy to reality.
Weekly Feature: The Algorithm for Mental Health
Mental health has been in the spotlight over the past few years. With rates of anxiety and depression climbing dramatically since the start of the pandemic (and COVID-19 making it a risk even to leave the house), finding ways to make good care both safe and accessible has become extremely important. One approach that has garnered a lot of attention in recent years is AI, where machine-learning algorithms have proven especially useful.
What is Machine Learning?
Machine learning is a kind of AI technology in which computers are given large amounts of data along with examples of the desired behavior. From these examples, the system learns to recognize certain patterns and, hopefully, becomes very good at noticing those patterns and reacting appropriately.
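The learn-from-examples idea above can be sketched with one of the simplest possible machine-learning methods, a nearest-neighbor classifier: the program is shown labeled examples, then labels new data by finding the most similar example it has seen. This is purely illustrative (real systems like the ones discussed in this issue use far more sophisticated models), and the data below is made up.

```python
def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_neighbor_predict(training_data, new_point):
    """Return the label of the training example closest to new_point."""
    closest = min(training_data, key=lambda ex: distance(ex[0], new_point))
    return closest[1]

# Hypothetical training examples: (features, label)
examples = [
    ((1.0, 1.0), "low"),
    ((1.2, 0.8), "low"),
    ((5.0, 5.5), "high"),
    ((6.0, 5.0), "high"),
]

print(nearest_neighbor_predict(examples, (5.5, 5.2)))  # -> high
print(nearest_neighbor_predict(examples, (0.9, 1.1)))  # -> low
```

The "learning" here is just memorizing examples; more powerful methods generalize beyond them, but the pattern-matching intuition is the same.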
For example, Rosalind Picard and Paola Pedrelli (a machine-learning expert and a clinician, respectively, both at MIT) have developed an algorithm that attempts to help those with major depressive disorder recognize when they are struggling with their mental health and figure out the best step forward.
The Algorithm
Some of the data that might reveal an underlying negative pattern includes:
The heart rate of a patient getting faster over time
Their skin temperature and skin conductance (a measure of the fight-or-flight response, read through electrodes on the skin) levels getting higher
The activity levels of a patient going down, for example, staying inside more and exercising and socializing less
Their sleep activity increasing or decreasing more than usual
Their biweekly self-assessment showing a negative trend
Among others!
The algorithm takes the data that it collects and deciphers any patterns that may be significant. If they tend to show that the patient is doing badly or starting to show signs of feeling down, the patient will be alerted as to what actions have been negatively impacting their mental health, and will hopefully be able to adjust accordingly.
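The flagging step described above can be sketched very roughly. To be clear, this is not Picard and Pedrelli's actual algorithm: the signals, window size, and thresholds below are all hypothetical, chosen only to illustrate the idea of comparing recent readings against a baseline and alerting on a sustained negative pattern.

```python
def recent_change(values, window=7):
    """Average of the last `window` readings minus the average of the
    earlier baseline readings. Positive means the metric rose recently;
    negative means it fell."""
    recent, baseline = values[-window:], values[:-window]
    return sum(recent) / len(recent) - sum(baseline) / len(baseline)

def check_for_negative_trend(heart_rate, activity_hours):
    """Flag two of the hypothetical patterns from the list above:
    rising resting heart rate and falling daily activity."""
    flags = []
    if recent_change(heart_rate) > 3:          # beats/min above baseline
        flags.append("resting heart rate is trending up")
    if recent_change(activity_hours) < -0.5:   # hours/day below baseline
        flags.append("daily activity is trending down")
    return flags

# Two weeks of made-up daily readings: a quiet first week,
# then an elevated heart rate and reduced activity.
hr = [62, 63, 61, 62, 64, 63, 62, 66, 68, 67, 69, 70, 68, 69]
activity = [3.0, 2.8, 3.2, 3.1, 2.9, 3.0, 3.1,
            2.2, 2.0, 1.8, 2.1, 1.9, 2.0, 1.7]

for flag in check_for_negative_trend(hr, activity):
    print(flag)
```

A real system would combine many more signals and use learned models rather than fixed thresholds, but the overall shape (collect data, detect a deviation from baseline, surface it to the patient) is the same.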
The Barriers
However, while these algorithms are very promising and could provide a service that has never been available before, there are still a number of challenges to using them in the most helpful way. For example, if the algorithm senses a negative trend and tells the patient that their mental health is getting worse, the notification itself could trigger them further, leading to a deeper depression. To guard against this, Picard and Pedrelli use precise language in their notifications, presenting negative trends factually, without blaming or attacking the patient, and instead encouraging them with ways to combat those trends.
Another complicated issue these algorithms run into is privacy. Is it truly ethical to run a patient's data, however trivial it may seem, through an algorithm without the patient being fully aware of what is going on? Informed consent is an essential part of bringing algorithms and AI into sectors like healthcare.
All in all, the algorithm designed by Picard and Pedrelli is an exciting development in both healthcare and machine learning. It suggests an approach to mental health counseling and care that can provide immediate support, and that will hopefully help those unable to access in-person care.
Written by Molly Pribble, Arielle Michelman, and Hope McKnight
Edited by Molly Pribble