Hello!
Welcome to Hold The Code, edition #13.
In this newsletter, we cover France's expanding use of AI for intelligence gathering, the state of the "AI Healthcare Revolution," news about AI regulation, and a recent essay on algorithmic nudging.
But if that isn't enough AI news for you, Forbes recently published "The AI 50: Top AI Companies to Watch".
Exciting AI applications are all around us; your writers at Hold The Code cannot wait to continue sharing news and developments with you.
'Til next week.
Grand Frère en France
France is looking to expand its use of AI and other surveillance technologies in the wake of a recent open letter claiming that the spread of Islamism and other ideologies is pushing France toward civil war. The letter was published by a group of retired generals in the far-right magazine Valeurs Actuelles and was soon endorsed by Marine Le Pen, the leader of France’s anti-immigrant National Rally party.
Government response
Prime Minister Jean Castex stated, “I condemn in the strongest terms this initiative, which is contrary to our republican principles and to the honor and duty of the army… These generals represent no one but themselves.”
However, last Wednesday, Castex also announced a new bill that would expand surveillance in France to cover internet data, allowing the government to track in real time which individuals visit certain websites.
What is proposed?
Under this new counterterrorism bill, the government will be able to track users who visit certain URLs, and French officials say they plan to expand the program with AI technology in the future. One provision allows French intelligence agencies to use old intelligence data, including some data the government is not legally allowed to retain, to train AI algorithms that find patterns and power predictive intelligence tools.
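To make the mechanism concrete, here is a minimal, purely illustrative sketch of what real-time URL-based flagging could look like in principle. Every name, domain, and data structure below is hypothetical; nothing here reflects the bill's actual technical implementation, which has not been described in detail.

```python
# Purely illustrative sketch of real-time URL flagging; all names,
# domains, and data structures here are hypothetical.
from dataclasses import dataclass
from datetime import datetime

# Hypothetical watchlist of flagged domains.
WATCHLIST = {"example-flagged-site.net", "example-flagged-forum.org"}

@dataclass
class VisitEvent:
    user_id: str
    domain: str
    timestamp: datetime

def flag_visit(event: VisitEvent) -> bool:
    """Return True if the visited domain is on the watchlist."""
    return event.domain in WATCHLIST

# A stream of connection records would be checked as it arrives:
event = VisitEvent("user-123", "example-flagged-site.net", datetime.now())
if flag_visit(event):
    print(f"ALERT: {event.user_id} visited {event.domain} at {event.timestamp}")
```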
Bastien Le Querrec, a member of the litigation group for La Quadrature du Net (a French digital rights group), says, “The objective is to gather as much data as possible. That is the definition of mass surveillance.”
Where AI Stands in Healthcare Today
By 2026, the global market for AI in healthcare is expected to expand from $5 billion to $45.2 billion. AI is already improving how care is delivered, from COVID-19 vaccine distribution to new approaches to patient care, but how can we make sure the money is funding the right initiatives?
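For context, that projection implies a compound annual growth rate of roughly 55%, assuming a 2021 baseline and a five-year window (both our assumptions). A quick back-of-the-envelope check:

```python
# Back-of-the-envelope CAGR implied by the projection, assuming a
# 2021 base of $5B growing to $45.2B over five years.
start, end, years = 5.0, 45.2, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints: Implied CAGR: 55.3%
```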
The new 2021 Healthcare AI Survey from Gradient Flow aims to examine the less-discussed aspects of AI that are vital to understanding where healthcare AI actually stands today. The survey reports that data integration (45%), natural language processing (NLP) (36%), and business intelligence (BI) (33%) are the three most in-demand technologies in the healthcare field. These technologies address some of the industry's biggest problems, from connecting disparate data sources in electronic health records (EHRs) to safeguarding personal information. The survey also identified the criteria that matter most to healthcare users of AI: privacy, trainability, and accuracy.
According to a report in the Journal of General Internal Medicine, there is another key criterion for healthcare professionals:
"Collection of data on race, ethnicity, and language preference is required as part of the 'meaningful use' of electronic health records (EHRs). These data serve as a foundation for interventions to reduce health disparities."
The future of AI in healthcare will hinge on accuracy, and on data that represents all patients. Hopefully, with this new research and increased funding, we can get one step closer to understanding and developing better systems for the healthcare industry.
AI Regulations Are Coming. How Should Businesses Prepare?
In recent weeks, government bodies, including US financial regulatory agencies and the European Commission, have announced guidelines or proposals for regulating artificial intelligence. Although AI regulation is still rapidly evolving, the Harvard Business Review notes several things private-sector users of AI can do to prepare.
Recent developments in AI regulation
In late March, the five largest federal financial regulators in the United States released a request for information on how banks use AI, signaling that new guidance is coming for the finance sector.
Then, just a few weeks later, the U.S. Federal Trade Commission (FTC) released an uncharacteristically bold set of guidelines on “truth, fairness, and equity” in AI — defining illegal use of AI broadly as any act that “causes more harm than good.”
The European Commission followed suit on April 21 and released its own proposal for the regulation of AI, which includes fines of up to 6% of a company’s annual revenues for noncompliance.
Conducting risk assessments
Above all, companies using AI should conduct assessments of AI risks and document possible mitigation strategies. In the AI world, these types of assessments are often referred to as “Impact Assessments,” or IAs.
Conducting these assessments now is advantageous not only because it can help companies identify potential shortcomings in their AI ethics, but also because it sets them ahead of the curve: many new laws being enacted will likely require businesses to conduct these types of assessments anyway. For instance, Virginia’s Consumer Data Protection Act, signed into law last month, requires assessments for certain types of high-risk algorithms.
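As a sketch of what documenting such an assessment might look like in practice, here is one hypothetical structure for an impact-assessment record. The fields are our own illustration, not a format prescribed by Virginia's law or any other regulation.

```python
# Hypothetical structure for documenting an AI impact assessment.
# Field names are illustrative; no regulation prescribes this exact format.
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str                # what decision the system supports
    affected_groups: list[str]  # who is subject to its outputs
    identified_risks: list[str] # e.g., disparate error rates, privacy exposure
    mitigations: list[str]      # planned or implemented safeguards
    reviewer: str = "unassigned"

assessment = ImpactAssessment(
    system_name="loan-approval-model",
    purpose="Recommend approval or denial of consumer loan applications",
    affected_groups=["loan applicants"],
    identified_risks=["potential disparate impact across demographic groups"],
    mitigations=["quarterly fairness audit", "human review of denials"],
)
```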
The greatest challenge facing AI regulation: defining AI
Yet, some scholars argue that we are still in the very early stages of AI regulation. Namely, lawmakers and regulators have still not arrived at a broad consensus on what “AI” is.
Some definitions, for example, are tailored so narrowly that they only apply to sophisticated uses of machine learning, which are relatively new to the commercial world.
Other definitions (such as the one in the recent EU proposal) appear to cover nearly any software system involved in decision-making, which would sweep in systems that have been in place for decades.
Weekly Feature: "Algorithmic Nudges Don’t Have to Be Unethical"
Mareike Möhlmann recently published a piece in the Harvard Business Review in which she explains the concept of "nudging," its relevance to AI, and how companies can benefit from it in ethical ways.
What is “nudging”?
Nudging: a concept popularized by University of Chicago economist Richard Thaler, referring to the strategy of shaping users’ behavior through how apparently free choices are presented to them.
Algorithmic nudging
In the AI era, nudging has taken on a deeper form: with so much data about individual users, and the AI to process it, companies are increasingly using algorithms to manage and control individuals, particularly employees.
Some examples:
Uber's psychological trick of awarding badges to incentivize its more than 3 million independent drivers to work longer hours without forcing them to do so.
Deliveroo's strategy of sending push notifications to its food delivery workers’ smartphones to nudge them into working faster.
Amazon’s use of employee wristbands, which can vibrate to point warehouse workers in the direction of a product... but which also track employees’ every move.
Ethical concerns
These practices are of increasing concern to regulators and the broader public. Challenges to them largely take the form of attention to privacy violations, accusations that nudges manipulate unwitting individuals to their disadvantage, and concerns about algorithmic transparency and bias.
In fact, in July 2020, British Uber drivers filed a lawsuit against the company, claiming that it had failed to fulfill its legal obligations under Europe’s General Data Protection Regulation (GDPR) and citing its lack of transparency about its algorithms.
Can algorithmic nudging be used responsibly?
Möhlmann believes so. According to her, here's how:
1. Create Win-Win Situations
Research by Thaler and Sunstein indicates that nudging can encourage individuals to improve their own health, wealth, and happiness through positive reinforcement of their decisions.
In the context of AI, Möhlmann writes that "organizations should seek to implement AI-powered and personalized reward systems that also benefit the worker."
2. Share Information About Data Collection and Storage
Algorithmically driven nudging depends on access to vast amounts of fine-grained data. For AI nudging to be ethical, companies need to be transparent about how user data is collected and stored.
3. Explain the Algorithm’s Logic
Möhlmann writes: "Individuals who are significantly affected by the outcomes of machine learning models are due an accounting of how a particular decision is made."
Investing in explainable AI solutions and employing techniques that make complex computational outcomes understandable to key stakeholders is integral to promoting transparency and reducing bias.
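As a toy illustration of the kind of accounting Möhlmann describes, here is a minimal sketch in which a nudge decision is broken down into per-feature contributions. The scoring model, features, and weights are all hypothetical.

```python
# Toy sketch: a transparent, linear "nudge score" whose per-feature
# contributions can be reported back to the affected worker.
# The weights and features are hypothetical, for illustration only.
FEATURE_WEIGHTS = {
    "hours_this_week": -0.5,  # fewer nudges for workers near a full week
    "rides_completed": 0.3,
    "peak_demand_now": 2.0,
}

def nudge_score(features: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return the overall score and each feature's contribution to it."""
    contributions = {name: FEATURE_WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

score, why = nudge_score(
    {"hours_this_week": 38, "rides_completed": 52, "peak_demand_now": 1}
)
print(f"Score: {score:.1f}")
for name, contribution in why.items():
    print(f"  {name}: {contribution:+.1f}")
```

Because the model is linear, each contribution is exact rather than an approximation, which keeps the explanation faithful to what the algorithm actually did.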
Written by Sophie Lamb, Molly Pribble, and Lex Verb