Hello!
Facebook may be looking to rebrand, but Hold the Code is still going strong, bringing you updates on the latest news in technology and computing ethics.
In our 23rd edition, we explore AI that can compose music, write code, and teach robots to walk. Our weekly feature discusses how to improve content regulation on Facebook and other social media platforms.
…I mean, did you expect us to not talk about Facebook this week? 🤔
Computing a Concerto
Can AI compose music? This question caused a stir in the world of classical music, when experts were searching for a way to reconstruct Ludwig van Beethoven’s unfinished 10th Symphony. Beethoven left only a few sparse notes on the piece, which made past attempts to finish it nearly impossible. This all changed in 2021, when a team of musicologists and computer engineers transformed the notes that Beethoven left behind into a completed piece using AI.
This task was a daunting challenge for Dr. Ahmed Elgammal, who led the AI side of the project. Past AIs could only generate a few seconds of new music. Never before had AI constructed an entire symphony from a handful of notes. Elgammal’s job involved figuring out how to extend a short phrase into a longer musical structure, how to create harmonies, how to combine different sections of music, how to orchestrate a piece, and how to do it all in the way that Beethoven might have done.
“At every point, Beethoven’s genius loomed, challenging us to do better. As the project evolved, the AI did as well.”
It took over two years, but in 2021, the AI finally completed the 10th Symphony. The music was presented to a group of music scholars at the Beethoven House Museum in Bonn, Germany. When the audience couldn’t tell where Beethoven’s notes ended and where the AI’s additions began, the team knew it had succeeded in accomplishing what had previously been impossible: completing Beethoven’s 10th and using AI to do it.
Although some might believe that AI has no place in the arts, Elgammal argues that AI is not a replacement, but a tool for artists to express themselves in new ways. Above all else, this project illuminates new potential applications for AI and the ways in which its possibilities are constantly evolving. Who knows? Maybe your next favorite song will have AI in the credits.
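The team's actual model and training data aren't described here, but the core task Elgammal faced, extending a sparse sketch into a longer phrase, can be illustrated with a toy Markov chain. Everything below (the note sequence, the function names) is invented for illustration and is not the project's method:

```python
import random

def train_markov(notes, order=2):
    """Build a simple order-n Markov model: map each n-note context
    to the notes observed to follow it in the training phrase."""
    model = {}
    for i in range(len(notes) - order):
        context = tuple(notes[i:i + order])
        model.setdefault(context, []).append(notes[i + order])
    return model

def continue_phrase(model, seed, length, order=2):
    """Extend a seed phrase by repeatedly sampling a next note
    conditioned on the last n notes."""
    phrase = list(seed)
    for _ in range(length):
        context = tuple(phrase[-order:])
        choices = model.get(context)
        if not choices:  # unseen context: stop extending
            break
        phrase.append(random.choice(choices))
    return phrase

# A made-up "sketch" in note names, standing in for a composer's fragments
sketch = ["E", "E", "F", "G", "G", "F", "E", "D", "C", "C", "D", "E"]
model = train_markov(sketch)
print(continue_phrase(model, ["E", "E"], 8))
```

A real system has to handle harmony, orchestration, and long-range structure, which is exactly why the project took years rather than the few lines above.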
AI Writing Code?
A new artificial intelligence program — Codex, built by OpenAI — can now generate solutions to the coding challenges often used in technical software engineering interviews, and it can do so almost instantaneously. The program can even translate between programming languages, a feat that takes any human programmer considerably longer because of the intricacies of each language.
What does this mean for software engineers?
Codex is far from efficient enough to replace professional software engineers: even its creators estimate that it produces correct code in only 37% of attempts. Debugging remains essential, since generated code often needs line-level tweaks before it will even run.
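OpenAI's internal evaluation isn't reproduced here, but a pass-rate figure like 37% typically comes from running generated candidates against unit tests and counting how many pass. A toy, entirely hypothetical version of that check might look like:

```python
def passes_tests(candidate_src, tests):
    """Run a candidate solution and report whether it passes every test.
    `tests` maps inputs to expected outputs for a function named `solve`."""
    namespace = {}
    try:
        exec(candidate_src, namespace)  # hypothetical model output defining solve()
        solve = namespace["solve"]
        return all(solve(x) == expected for x, expected in tests.items())
    except Exception:
        return False  # code that doesn't run counts as a failure

# Invented model outputs for "return the square of x": only one is right
candidates = [
    "def solve(x):\n    return x * x",   # correct
    "def solve(x):\n    return x + x",   # runs, but wrong logic
    "def solve(x):\n    return x *",     # doesn't even parse
]
tests = {2: 4, 3: 9, -1: 1}
results = [passes_tests(c, tests) for c in candidates]
print(f"pass rate: {sum(results)}/{len(results)}")
```

The second and third candidates show why debugging matters: some generated code is subtly wrong, and some won't run at all.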
“This is a tool that can make a coder’s life a lot easier.”
Rather than viewing this breakthrough in AI as a threat to their livelihoods, many experts, like Tom Smith, consider it a way to increase productivity in the workplace. Codex can even help teach new programmers the basics of coding by providing example code, pointing toward an era of AI involvement in computer science education.
Virtually Real Robots
What?
A swarm of 4,000 virtual robots wiggles, jumps, and slides across an obstacle course filled with stairs, blocks, and slopes. While their movements may look…awkward, these virtual simulations are paving the way for training real-life intelligent machines.
How?
In the simulation, the machines, which the researchers named ANYmals, navigate a variety of obstacles in a virtual terrain. As a robot moves, a reinforcement learning algorithm judges how each movement affects the robot's ability to walk and adjusts itself according to that feedback to improve performance. The virtually trained algorithm can then be transferred to a real-world copy of an ANYmal robot, enabling intelligent movement in real-world environments.
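The researchers' actual setup, massively parallel simulation of legged robots, is far more involved, but the feedback loop described above (act, score the result, adjust) can be sketched with tabular Q-learning on a toy "walk forward" task. Everything below is an invented stand-in, not the ANYmal training code:

```python
import random

random.seed(0)

# Toy stand-in for the sim: states are positions on a line, and the
# "robot" earns reward for reaching the goal, mimicking a walking score
N_STATES, GOAL = 6, 5
ACTIONS = [-1, +1]  # step back, step forward

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == GOAL else -0.1  # the feedback signal
    return next_state, reward

# Tabular Q-learning: action values are adjusted from feedback each step
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    state = 0
    while state != GOAL:
        # explore occasionally, otherwise act on the current best estimate
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy steps forward at every position
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

The real work replaces this six-state line with a physics simulation of a quadruped and the lookup table with a neural network, but the adjust-from-feedback loop is the same idea.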
Why?
The simulation, which ran on specialized chips, cut the time needed to train the algorithm to one-hundredth of what is normally required. This approach casts light on a new and much more resource-efficient way to use reinforcement learning to train machines for everyday and industrial tasks such as sewing clothes and harvesting crops.
Weekly Feature: Regulating Facebook (*ahem* Meta)
Roddy Lindsay, a former Facebook data scientist, argues in his New York Times op-ed that Congress should pass a simple reform: hold social media companies accountable for the content their algorithms promote. His call comes in light of Frances Haugen's Senate testimony, in which the former Facebook product manager revealed that the tech giant prioritizes profits over user safety.
What’s wrong?
Lindsay writes that two technological developments adopted by social media companies are largely responsible for the fringe content that appears on feeds.
Personalization: the “mass collection of user data through web cookies and Big Data systems.”
Algorithmic amplification: “the use of powerful artificial intelligence to select the content shown to users.”
While personalization and algorithmic amplification can be beneficial to consumers, they are harmful because they “perpetuate biases and affect society in ways that are barely understood by their creators, much less users or regulators.”
Engagement is central to social media companies' business models, which incentivizes feeds to promote inflammatory and harmful content.
What is Section 230 and why is it important?
To address personalized algorithmic amplification, Lindsay suggests changing Section 230 of the Communications Decency Act of 1996.
Section 230 states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
The law essentially allows social media companies to host user-generated content without consequences. These platforms are not legally responsible for the actions of their users. According to the Electronic Frontier Foundation, a nonprofit dedicated to protecting digital privacy and free speech, Section 230 is “perhaps the most influential law to protect the kind of innovation that has allowed the Internet to thrive since 1996.”
What’s the solution?
During her testimony, Haugen said reforming Section 230 to hold Facebook accountable would cause the social media company to remove engagement-based ranking. Removing this system, Lindsay argues, would address political bias, reduce fringe content, and still allow social media companies to be successful and profitable.
“Social media feeds would be free of the unavoidable biases that AI-based systems often introduce. Any algorithmic ranking of user-generated content could be limited to non-personalized features like ‘most popular’ lists or simply be customized for particular geographies or languages.”
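As a rough sketch of the distinction Lindsay draws (invented data and function names, not any platform's real ranking code), compare a personalized engagement ranker with a non-personalized "most popular" list:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    likes: int
    # hypothetical per-user engagement predictions a personalized ranker might use
    predicted_engagement: dict = field(default_factory=dict)

def engagement_ranked(posts, user):
    """Personalized, engagement-based ranking: order by how likely *this*
    user is to engage, the kind of system Lindsay argues should carry liability."""
    return sorted(posts, key=lambda p: p.predicted_engagement.get(user, 0), reverse=True)

def most_popular(posts):
    """Non-personalized ranking: the same 'most popular' list for everyone,
    the kind of feature Lindsay suggests could remain protected."""
    return sorted(posts, key=lambda p: p.likes, reverse=True)

feed = [
    Post("inflammatory rumor", likes=40, predicted_engagement={"alice": 0.9}),
    Post("local news update", likes=120, predicted_engagement={"alice": 0.2}),
    Post("friend's photo", likes=75, predicted_engagement={"alice": 0.5}),
]

print([p.text for p in engagement_ranked(feed, "alice")])  # rumor surfaces first
print([p.text for p in most_popular(feed)])                # same order for every user
```

The first ranker surfaces whatever one user is predicted to click, even fringe content; the second produces one shared ordering that depends on no individual's data.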
Lindsay admits reforming Section 230 will be challenging, but argues that the urgency of reform outweighs the difficulties.
Written by Michelle Zhang, Dwayne Morgan, Larina Chen, and Ian Lei
Edited by Molly Pribble