Welcome to the 20th edition of Hold The Code!
This week, we cover news about GPT-3 (an AI language model), the success of an AI-powered investment fund, Facebook's newest effort to use AI to monitor group conversations, and a pessimistic study that argues ethical AI will not be broadly adopted anytime soon.
As always, thank you for your readership — we hope you enjoy this week's news.
Watch Your Language
Last July, OpenAI launched GPT-3, an AI language model that could write news articles, program code, and even compose poetry. However, questions concerning the ethics of this large language model persist almost a year later, highlighting the lack of focus on AI ethics and the growing fight against the dark side of technology.
Open for business…and bias
OpenAI’s GPT-3 model has been found to be racist towards Black people, hold sexist views, and have biases against Muslims, people identifying as LGBT, and other groups. Even before it was released, OpenAI knew of these issues, having published a paper in May 2020 with findings confirming GPT-3’s biases.
Closing the gaps
While OpenAI may be content with releasing a biased algorithm, researchers in academia and industry have been researching the extent and impact of these biases as well as potential solutions to combat these results from GPT-3.
Abubakar Abid, CEO of Gradio, was one of the first people to call out GPT-3’s biases against Muslims. He found that when given the prompt “Two ______ walk into a bar…” GPT-3 returned a violent response 9/10 times when 'Muslims' was used in the prompt, compared to 1/10 times for 'Jews', 'Buddhists', and 'Sikhs' and 2/10 times for 'Christians.'
Emily Dinan at Facebook AI Research is training AI to recognize hate speech by using mechanical turk contractors to intentionally provoke GPT-3 and then flag its response as safe or unsafe.
Yejin Choi at the University of Washington is trying to teach language models the way children are taught a language: through interaction with the real world. She uses a simulated environment to teach language models abstract concepts, similar to those a child would learn in their first years of life (e.g. don’t touch a hot stove).
A study also out of the University of Washington found that attempts to correct the biases of GPT-3 can further disadvantage particular groups of people, particularly Black people, Muslims, and people who identify as LGBT. This can lead to self-stigmatization and force people to code-switch in conversation.
Jesse Dodge, a research scientist at the Allen Institute for AI, has developed a checklist with 15 data points to enforce standards in developing these algorithms. He believes that systemic issues, such as the pressure to get AI products to market, play a part in the development of biased algorithms, a view that was further evidenced by a survey of 12 Microsoft employees working on language technology that found that product teams did little planning for how their algorithms could go wrong.
Finance Bros Beware: AI Can Invest Better Than You
AIEQ to the moon 🚀.
An AI-powered US ETF called AIEQ has markedly outperformed the S&P 500 in this past month. As of Monday, the 1-month return for the fund was 10.6%, compared with 2% for the S&P 500. The 1-year return was 50.1%, while it was 39.9% for the index, DataTrek found.
Let’s back up - what is an ETF?
An ETF---an Exchange Traded Fund---is a type of investment fund sold on stock exchanges. Similar to mutual funds, they track an industry, sector, or commodity but are bought and sold via company stocks. Most ETFs are professionally managed by SEC-registered investment advisors.
How does AIEQ work?
Managed by IBM Watson’s supercomputing technology, the fund makes investment decisions by using algorithms that analyze the company, technical macro, and microdata from the news, social media, industry trends, and financial statements.
Significantly, AIEQ hasn't invested in any of the attention-grabbing "meme-stocks," such as AMC and GameStop. Instead, AIEQ shifted its top 10 holdings as "meme-stocks" disrupted typical market valuations. It kept Alphabet, 10X Genomics, Costar Group, Tesla, and Square in the mix, albeit with weighting adjustments, and added MongoDB, DexCom, Appian, Carvana, and Autozone.
Many concur that using AI to manage investment funds can feasibly become a new industry standard. Eliminating the effect of human emotions and implementing more systematic decision-making strategies can lead to better returns.
Read more about AIETQ here.
Facebook's Newest Babysitter
Facebooks is testing AI tools to stop fights in its groups.
A(I)LERT!
Three's a crowd. With 2.85 billion monthly users and more than 1.8 billion of them participating in its groups, Facebook is bound to witness the outbreak of many online fights on its platform. More than 70 million people run or moderate Facebook groups, keeping in check fights ranging from political blows to heated debates about the merit of mayonnaise.
Facebook is experimenting with new AI tools to give moderators a helping hand. Their new AI-powered tool will recognize signs of a fight and decide whether to send a "conflict alert" to group moderators alerting them of "contentious" or "unhealthy" activity. This tool is meant to direct group moderators to (what it considers as) problematic activity on the platform so they can take appropriate action to keep the group healthy.
Who carries the responsibility of decision?
Major tech platforms such as Facebook are becoming increasingly reliant on AI to regulate what people can see on their platforms. While these tools can potentially be a significant aid to reduce hurtful content, their ethical implications are left under-explored.
AI can fumble when it comes to understanding subtlety and context in online posts. The ways in which AI-based moderation systems work can also appear mysterious and hurtful to users.
Moreover, a bigger question remains – who gets to decide what's considered hurtful?
Read the full article here.
Weekly Feature: The Future of Ethical AI
A new report from the Pew Research Center and Elon University’s Imaging the Internet Center has found that the majority of experts doubt that ethical AI design will be broadly adopted by the year 2030.
Define "ethical"
Ethical AI can have different meanings for different people. Some may categorize it as transparent, responsible, and accountable AI design, while for others it may mean AI that operates consistently within the laws, social norms, expectations, and values relevant to this technology. However, at its core, ethical AI design prevents the use of biased data and the development of biased, unjust, or unexplainable algorithms.
Survey stats
The survey asked experts “By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?”
68% of experts answered no, while 38% answered yes.
A principal researcher at Microsoft who answered no to the survey, Danah Boyd stated “These systems are primarily being built within the context of late-stage capitalism which fetishizes efficiency, scale, and automation. A truly ethical stance on AI requires us to focus on...goals that are antithetical to the values justified by late-stage capitalism. “We cannot meaningfully talk about ethical AI until we can call into question the logics of late-stage capitalism.”
Internet pioneer Vince Cert, also a part of the 68% who answered no, said he believes there will be a “good-faith effort” to adopt ethical AI design, but that these intentions don’t guarantee the desired results.
Where are we at?
Currently, the battle for regulations at a national level in the US has been largely stalled, including proposed prohibitions on facial recognition and discriminatory social media algorithms. In industry, things don’t seem much better. In a recent survey by Boston Consulting Group, they found that 65% of companies can’t explain how their AI predictions are made, while just 38% have taken bias mitigation steps for their AI systems.
However, the EU recently announced a regulation on the use of AI, and cities such as Helsinki and Amsterdam have launched AI registries that detail how their local governments use AI algorithms.
Where are we going?
Even with these advancements in the EU, many experts like Douglas Rushkoff, a media theorist, and professor at City University of New York, believe an uphill battle is ahead for ethical AI.
“Why should AI become the very first technology whose development is dictated by moral principles? We haven’t done it before, and I don’t see it happening now,” he stated, “Most basically, the reasons why I think AI won’t be developed ethically is because AI is being developed by companies looking to make money — not to improve the human condition.”
Written by Larina Chen, Molly Pribble, and Lex Verb