Hello!
Welcome to our 24th edition of Hold the Code.
This week we cover potential advancements to encoding AI ethics, an algorithm that can predict and map car crashes, and NATO’s new strategy for using AI. Our weekly feature covers humanity’s role in the age of advancing AI.
An AI-Powered Moral Compass
Should you help a friend when they break the law? Are you obligated to hold the door for others? Is pineapple on pizza acceptable? Ethical questions such as these have been asked and re-asked because they are not easily answered. But what if AI could answer these tough questions for us?
Introducing Delphi
Trained on over 1.7 million examples of people’s ethical judgments, Delphi is an AI that decides whether a given scenario is okay, wrong, weird, bad, or understandable. Delphi uses data from sources like the “Am I The Asshole” subreddit to extrapolate ethical conclusions to new scenarios.
Delphi uses supervised learning to determine whether its judgments are appropriate. Human arbitrators contribute their own conclusions on a given scenario, and the average or majority ruling becomes the “correct” judgment against which Delphi’s answer is compared. These arbitrators are screened for morality and for patterns of sexism or racism before being hired.
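For intuition, here is a minimal sketch in Python of how such an evaluation loop could work. The labels, function names, and tiny panel below are hypothetical; the real Delphi pipeline is not reproduced here. The idea is simply to compute a majority ruling per scenario and check whether the model agrees with it.

```python
from collections import Counter

def majority_ruling(panel: list[str]) -> str:
    """Most common judgment among a panel of human arbitrators.
    Ties are broken by insertion order, which is arbitrary here."""
    return Counter(panel).most_common(1)[0][0]

def agreement_rate(model_answers: list[str], panels: list[list[str]]) -> float:
    """Fraction of scenarios where the model matches the majority ruling."""
    matches = sum(m == majority_ruling(p) for m, p in zip(model_answers, panels))
    return matches / len(model_answers)

# Hypothetical example: three scenarios, three arbitrators each.
panels = [
    ["it's okay", "it's okay", "it's wrong"],
    ["it's wrong", "it's wrong", "it's wrong"],
    ["it's understandable", "it's weird", "it's understandable"],
]
model_answers = ["it's okay", "it's wrong", "it's weird"]
print(f"agreement: {agreement_rate(model_answers, panels):.0%}")  # agreement: 67%
```

Figures like the 92% match rate below are exactly this kind of agreement statistic, computed over far more scenarios.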
Judging the Judgments
Currently, Delphi’s judgments match the arbitrators’ judgments 92% of the time. However, the system is far from a perfect moral compass: a previous version of Delphi said genocide is okay if it “makes everyone happy.” Delphi has since been updated and now answers that genocide is bad.
Encoding Ethics or AI Authority?
Yejin Choi, a University of Washington researcher working on Delphi, makes it clear that the project does not intend to create an AI moral authority. Instead, the goal is to help AI work better with humans.
“We have to teach AI ethical values because AI interacts with humans. And to do that, it needs to be aware of what values humans have,” Choi says.
Many past AI systems have run into ethical trouble (such as Microsoft’s Tay or OpenAI’s GPT-3), which makes the effort to encode morality into AI both fascinating and complex. To learn more about Delphi, you can read the full study here.
Crash Mapping
Self-driving cars are a hot topic, both for the news and the AI industry. This is for good reason: reaching a high level of self-driving would be a huge technical accomplishment, and it would improve our society as well. Worldwide, traffic accidents are the leading cause of death among children and young adults.
While getting a car to drive you from point A to point B completely autonomously would be a tremendous achievement, we're probably still a ways away from a high-level self-driving car — let alone making that technology available to everyone.
Crash maps
In the meantime, researchers have turned their attention to safety. While self-driving methods try to navigate safely from point A to point B, researchers at MIT’s CSAIL have developed a way to improve our ability to predict when and where crashes could happen.
By combining historical crash data, satellite imagery, and GPS trajectories, the researchers created state-of-the-art crash maps that estimate the expected number of crashes over a future time window, identifying high-risk areas before accidents happen.
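As rough intuition for what a crash map is, here is a minimal sketch that rasterizes historical crash coordinates onto a uniform grid and treats per-cell frequency as a naive risk estimate. The function name, grid size, and synthetic coordinates are illustrative assumptions; the CSAIL model goes much further, fusing satellite imagery and GPS trajectories through a deep network.

```python
import numpy as np

def crash_risk_grid(lats, lons, bounds, cells=100):
    """Naive risk map: normalized historical crash counts per grid cell."""
    lat_min, lat_max, lon_min, lon_max = bounds
    counts, _, _ = np.histogram2d(
        lats, lons,
        bins=cells,
        range=[[lat_min, lat_max], [lon_min, lon_max]],
    )
    return counts / counts.sum()  # relative risk per cell

# Hypothetical usage with synthetic crash coordinates around Boston.
rng = np.random.default_rng(0)
lats = rng.uniform(42.3, 42.4, size=500)
lons = rng.uniform(-71.1, -71.0, size=500)
risk = crash_risk_grid(lats, lons, bounds=(42.3, 42.4, -71.1, -71.0))
print(risk.shape)  # (100, 100)
```

A frequency count like this is roughly what older maps were limited to; the deep-learning approach can assign risk even to cells with little or no recorded crash history.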
What's new?
These maps have existed in some capacity for some time. Past maps, however, have been relatively crude: they rely on low-resolution mapping software to depict high-risk areas, and because they don’t draw on the volume of data that deep learning does, they’re less accurate. The new maps directly improve on both fronts:
The maps are rendered at much higher resolutions.
The maps are, crucially, more accurate than prior models. For example, they were able to show that highway on-ramps are among the highest-risk areas for accidents.
What's next?
Even though the model was trained only on crash data from 2017 and 2018, it identified high-risk areas that went on to see crashes in later years, including locations with little or no crash history in the training datasets.
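That forward-looking check amounts to a temporal holdout: train on earlier years, then measure how many later crashes fall inside the cells the model flagged as riskiest. Here is a hedged sketch of one such metric; the arrays and names are hypothetical, not the paper’s code.

```python
import numpy as np

def future_hit_rate(predicted_risk, future_crashes, top_fraction=0.05):
    """Share of future crashes that land in the top-risk fraction of cells."""
    k = max(1, int(top_fraction * predicted_risk.size))
    threshold = np.partition(predicted_risk.ravel(), -k)[-k]
    high_risk = predicted_risk >= threshold
    return future_crashes[high_risk].sum() / future_crashes.sum()

# Hypothetical demo: an uninformative model should capture roughly 5%
# of future crashes in its top 5% of cells; a good model captures far more.
rng = np.random.default_rng(1)
risk = rng.random((100, 100))           # predicted risk per cell
later = rng.poisson(0.05, (100, 100))   # crashes observed in later years
print(f"{future_hit_rate(risk, later):.0%} of future crashes in top 5% of cells")
```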
One of the core ways this technology could be used is in helping smartphone applications warn drivers about high-risk areas along their commute. How it might be applied directly to self-driving cars remains to be seen.
You can read the original paper here.
NATO’s AI Strategy
On October 22, NATO launched an initiative to help the alliance invest in new technologies and released a summary of its first-ever artificial intelligence strategy.
NATO Innovation Fund
Member nations signed an agreement establishing the NATO Innovation Fund, which will aid private companies developing dual-use technologies. Secretary-General Jens Stoltenberg hopes to invest €1 billion (about $1.16 billion USD) in academic partners and companies developing new technologies.
“[These] technologies are reshaping our world and our security,” Stoltenberg said. “NATO’s new innovation fund will ensure allies do not miss out on the latest technology and capabilities that will be critical to our security.”
Artificial Intelligence Strategy
The summary of the strategy outlines four distinct sections:
Principles of Responsible Use of AI in Defence
Ensuring the Safe and Responsible Use of Allied AI
Minimising Interference in Allied AI
Standards
Stoltenberg also said NATO will organize a data and artificial intelligence review board to ensure the “operationalization” of the strategy.
“The principles are all great, but they only mean something if we’re able to actually translate that into how the technology is being developed, and then used.”
Weekly Feature: Searching For Humanity’s Role in the Age of AI
I think, therefore I am.
But if AI thinks, what are we?
Throughout history, humans have devoted our lives to understanding our role in the world. We seek to understand the myriad realities of the world, and to explain and add to it through our explorations, experiments, and inventions. To many, this has always been our role on earth. Artificial intelligence, itself a product of human exploration and invention, is now challenging the role we humans perceive ourselves to play. The Wall Street Journal explores these topics and more in a recently published opinion piece.
From Chess-master to Chemist
Take DeepMind’s AlphaZero, for instance. This AI program taught itself chess in a matter of hours and went on to defeat the strongest chess engines, commanding the game with self-developed strategies that are fast, efficient, and distinctly non-human. Across the Atlantic, MIT researchers discovered the antibiotic halicin using an AI algorithm, a discovery made possible only by AI’s ability to cheaply search for undiscovered, unexplained ways of killing bacteria.
AI promises to explore, understand, and contribute to the world in ways a human never could. If AI can reason, explore, experiment, discover, and invent as well as (or even better than) humans can, then what is our role in the world? Should we redefine our place in an AI-enabled world?
Inevitable Advancement?
The article states that “[o]ne should consider not only the practical and legal implications of AI but the philosophical ones: If AI perceives aspects of reality humans cannot, how is it affecting human perception, cognition and interaction? Can AI befriend humans? What will be AI’s impact on culture, humanity and history?”
Here’s what some believe: instead of deferring to AI or resisting it, we need to seek a middle ground. As humans in an AI-enabled world, our role is to build and govern AI in line with human values and morals, “including the dignity and moral agency of humans.”
What do you see for the future of AI’s role in our lives and consequently our place in such a world? Is AI’s advancement inevitable? And do we even have control over where this advancement leads us?
Written by Molly Pribble, Mason Secky-Koebel, Ian Lei, and Larina Chen
Edited by Molly Pribble
Does AI think, or does it merely draw conclusions from the functions it has learned? I think we still need humans to think.