Hello!
In this edition, we discuss how AI is being applied to supply chains, how it could help increase vaccination rates, and how it factors into the minimum wage and unemployment debate.
We also review a recently published recommendation for promoting AI ethics in the private sector.
Oh, and we did it all with lots of puns.
But before we dive into this week's stories, we'd like to remind our readers that RAISO is hosting its first event this week, on April 9th: Juyoun Han on AI Fairness and Data Privacy. RSVP here.
Supply Ch-AI-ns
(I told you there'd be lots of puns).
Supply chains have suffered over the past year. From the pandemic to other major disruptions, like the blockage of the Suez Canal, they have been forced to adapt to rapidly changing conditions.
Creating adaptive supply chains
AI has proven to be an essential tool for building adaptive supply chains. It helped determine how best to re-route ships around the Suez Canal blockage last week, and it has even helped companies evaluate how best to plan for extreme events by building redundancy into business operations, such as knowing where and when to build new distribution facilities.
How does it work?
These AI systems use vast amounts of data to make supply chain predictions, leveraging customer purchasing trends and transportation network data to distribute products appropriately and efficiently. The technology has paid off during the pandemic: forecasting models have become arguably more crucial than ever, especially for companies that had to change their manufacturing practices to match shifting demand for vital products such as ventilators.
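To make that concrete, here's a minimal sketch of the simplest kind of demand forecast these systems build on. The weekly figures are invented, and real systems layer far richer data and models on top of something like this:

```python
# Illustrative only: a tiny exponential-smoothing demand forecast.
# Real supply chain systems combine many data sources and far more
# sophisticated models; the weekly order counts below are invented.

def exponential_smoothing_forecast(demand, alpha=0.4):
    """Return a one-step-ahead forecast from a demand history.

    alpha controls how quickly the forecast reacts to recent weeks:
    a higher alpha puts more weight on the latest observations.
    """
    forecast = demand[0]
    for observed in demand[1:]:
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecast

# Hypothetical weekly orders for a ventilator part, with a pandemic spike.
weekly_demand = [120, 118, 125, 160, 240, 310, 295]

print(f"Next week's forecast: {exponential_smoothing_forecast(weekly_demand):.0f} units")
```

A forecast like this is only one ingredient: companies then combine it with transportation and inventory data to decide where to stock products and when to expand capacity.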
R-AI-sing the Minimum Wage
(Maybe we should start outsourcing our pun generation to AI?)
For the past few years, the conversation surrounding raising the minimum wage to $15 has gained traction. In last November's elections, Florida joined the growing list of states in favor of the wage increase. Meanwhile, trends in AI are shaping the debate over a $15 minimum wage.
Effects of AI in the workplace
The arrival and popularization of AI in the workplace have pushed many skilled workers down the job market, forcing them to take lower-paying jobs.
Rick Grimaldi, an attorney and author of the new book FLEX: A Leader’s Guide to Staying Nimble and Mastering Transformative Change in the American Workplace, says “My thinking has always been that the jobs that paid $7.25 an hour were designed to be starter jobs for people just entering the workforce. But, many have argued that after the Great Recession of 2008, higher-skilled employees were forced to start taking low-wage jobs to earn a living. It’s nearly impossible to earn a living on such a small salary. So, the argument goes, a minimum wage increase would give people at the bottom a fighting chance and pull many of them out of poverty.”
While many follow this logic, some argue that an increase in the minimum wage may push companies to automate more work, thereby decreasing the number of available jobs and increasing poverty levels.
So, will a robot take my job?
Experts say that the increase to a $15 minimum wage probably won’t push employers to automate more jobs and put more people out of work. For many workplaces, a combination of human and automated systems seems to be the best option.
“People who are going to invest in technology are likely going to do it anyway,” says Grimaldi. “Furthermore, even with AI upgrades, employers will still need good people to help run their companies.”
Can Technology Increase COVID-19 Vaccination Rates?
The pandemic has, to a large extent, exposed systemic biases in our health care system. Representation of minority ethnic groups in vaccine efficacy trials has been disproportionately low, and, in turn, minority groups show greater reluctance to get vaccinated. Studies show that this mistrust is deep-seated: even minority healthcare workers in the UK and US show lower vaccination rates.
Yet researchers are optimistic that AI can be applied in meaningful ways to build trust among minority populations. In fact, the MHRA (the UK's Medicines and Healthcare products Regulatory Agency) recently awarded £1.5 million to develop AI tools intended to do just that.
How AI can help
The most common reason for vaccine hesitancy among minority groups is concern about adverse effects. Rapid, transparent reporting on potential adverse effects could therefore go a long way toward improving communication and addressing that hesitancy.
For example, apps such as Yellow Card and V-Safe allow users to self-report vaccine side effects. AI can then be used to sift through this wealth of reports and identify genuine adverse effects. The resulting datasets and findings can be shared, keeping vaccine information transparent and accessible.
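As a concrete (and heavily simplified) illustration, one standard statistical building block for this kind of triage is disproportionality analysis, such as the proportional reporting ratio used in pharmacovigilance. The report counts below are invented, and real systems pair statistics like this with machine learning and clinical review:

```python
# Sketch of a proportional reporting ratio (PRR), a standard
# disproportionality statistic used to flag adverse events that are
# reported unusually often for one product. All counts are invented.

def proportional_reporting_ratio(a, b, c, d):
    """Rate of the event among this vaccine's reports, divided by
    its rate among reports for all other products.

    a: reports of the event for this vaccine
    b: reports of other events for this vaccine
    c: reports of the event for all other products
    d: reports of other events for all other products
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts from a self-reporting app's database.
prr = proportional_reporting_ratio(a=40, b=1960, c=150, d=49850)
print(f"PRR = {prr:.1f}")  # values well above 1 suggest a signal worth review
```

A high PRR doesn't confirm a genuine adverse effect on its own; it prioritizes which reports merit clinical follow-up, which is exactly where transparent reporting of the method matters.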
Some concerns
Though experts have applauded efforts to apply AI to this issue, some concerns remain. Because these tools rely on limited self-reported data and electronic health records, transparent auditing of clinical AI tools is needed to combat bias in both the datasets and the algorithms.
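What might such an audit check? Here's a toy sketch of one basic test, comparing a tool's error rates across demographic groups. The data and groups are invented for illustration, and real audits go much further:

```python
# Toy bias audit (our illustration, not a method from the article):
# compare a clinical tool's error rate across demographic groups.
from collections import defaultdict

# Each record: (group, model_prediction, true_outcome). All invented.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, predicted, actual in records:
    counts[group][0] += int(predicted != actual)
    counts[group][1] += 1

for group, (errors, total) in counts.items():
    print(f"{group}: error rate {errors / total:.0%}")

# A large gap between groups is a red flag that the self-reported data
# or the algorithm itself is skewed against one population.
```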
Weekly Feature: “If Your Company Uses AI, It Needs an Internal Review Board”
Reid Blackman recently published a piece in the Harvard Business Review arguing that companies using AI should develop Internal Review Boards modeled on those in the medical field.
In Blackman’s view, companies using AI generally know they need to worry about ethics, but when it comes to actually implementing strategies, they fall short. Internal Review Boards can not only save businesses money and brand reputation but, more significantly, promote better, more responsible applications of AI.
Why companies fail with AI ethics
According to Blackman, discussions of AI ethics follow a similarly flawed pattern across many organizations. The pattern begins with a narrow definition of AI ethics as an issue of “fairness,” rather than a more holistic approach mindful of the complex, interrelated concerns that come with using AI.
Additionally, companies tend to reach for technical tools and quantitative bias-mitigation strategies once they identify potential AI ethics issues. While this isn’t inherently problematic, Blackman argues that “the truth is that many ethical issues are not reducible to quantitative metrics or KPIs.” He further suggests that technical tools do not cover all types of bias; for some cases of bias, no technical tool exists.
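For a sense of what a quantitative metric can and can't capture, here's a sketch of one of the simplest, demographic parity difference, with invented decisions. The article doesn't endorse any particular metric; this is our illustration of the genre:

```python
# Sketch of one common quantitative fairness metric, demographic parity
# difference, to show what such tools measure. All decisions are invented.

def positive_rate(decisions):
    """Fraction of decisions that were favorable (1 = approved)."""
    return sum(decisions) / len(decisions)

# Hypothetical automated decisions for two demographic groups.
group_a_decisions = [1, 1, 0, 1, 0, 1, 1, 0]
group_b_decisions = [1, 0, 0, 0, 1, 0, 0, 0]

gap = positive_rate(group_a_decisions) - positive_rate(group_b_decisions)
print(f"Demographic parity difference: {gap:.2f}")

# A gap near 0 satisfies this metric, yet it says nothing about consent,
# privacy, or whether automating the decision was appropriate at all,
# which is exactly the kind of question Blackman says gets missed.
```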
A better solution: IRBs
Institutional Review Boards (IRBs) were introduced in the medical field to mitigate the ethical risks of conducting research on human subjects. They carry out this function by approving, denying, and suggesting changes to proposed research projects, promoting the idea of “do no harm” proactively rather than reactively.
Blackman argues that there are similar ethical risks in medicine and in AI. In both, there is potential for harming individuals and groups, imposing physical and mental distress, invading privacy, and undermining autonomy.
Moreover, IRBs can promote AI ethics by systematically and exhaustively identifying ethical risks before AI tools are even created. Perhaps most importantly, beyond approving and rejecting proposals, IRBs can help researchers and product developers by recommending ways to mitigate ethical risk.
Read the full piece here.
Written by Lex Verb and Molly Pribble