Welcome to the 41st edition of Hold the Code! Today's edition dives into AI in the kitchen and hiring done with algorithms. The weekly feature alludes to the start of something big within AI, so be sure to read it.
Happy reading!
The Next Big Kitchen Gadget
A new robot developed by researchers at Cambridge University can “taste” food and tell if it is good or bad. The robot is trained to detect the optimal levels of saltiness in a dish and taste different ingredients at different stages of the chewing process.
How does a robot “taste”?
Researchers first trained the robot to make omelets. Then, it tasted nine different versions of the omelet at three stages in the chewing process. The team simulated chewing by blending the food (kinda ew) before the robot tried it again.
Researchers are hopeful that this new method can provide a better measure of how food actually tastes -- and therefore result in tastier food products. Current methods involve electronically testing the salinity of the food.
Grzegorz Sochacki, a member of the engineering team at Cambridge, told the BBC, “In the end it’s just a single sensor which wouldn’t be able to do two different ingredients normally. But thanks to chewing, we see all the different changes through mechanical processing.”
Robot Home Cooks
Currently, the robot consists of an arm that makes the food and is designed to be used in kitchens or chain restaurants. In the next few years, this technology could be further developed for home use. Additionally, researchers hope to be able to adapt the robot to individual users’ tastes and preferences.
“This result is a leap forward in robotic cooking,” says Muhammad Chughtai, a senior scientist at Beko, a domestic appliance company. “By using machine- and deep-learning algorithms, mastication will help robot chefs adjust taste for different dishes and users.”
Hiring with AI
What if the outcome of your next job interview was determined by AI? Figuring out which candidates to hire has always been a difficult task for companies, and with the widespread shift to remote workspaces that came with the COVID-19 pandemic, employee behavior patterns have become even harder to predict.
Applications
Applying AI to this problem may seem like a natural solution to some. After all, AI has been shown time and time again to excel at predicting what people want. For instance, we often use streaming service algorithms to help us pick which shows to watch next. However, others worry about the extent to which we can entrust AI with decisions about employment, especially when many don’t fully understand how it works.
On the other hand, what if machines are able to make decisions in a way that is less prone to bias than we are? Frida Polli, the CEO and founder of Pymetrics, a leading AI-driven recruitment platform, notes:
“I completely understand the concerns around AI. It’s incumbent on tech providers like ourselves to prove we’re equitable … but once we can do that, it's critical for societies to start leveraging some of these platforms.”
For example, Pymetrics has put effort into using AI to assess soft skills – such as a person’s empathy or decision-making ability. These skills are hard for humans to fairly quantify, but AI may be able to assess them more accurately, which could help companies make better decisions about who to hire for which roles. “You have to recruit the right talent [and] it's important therefore to bring in people not based on their degrees but on their skills,” says Tan Moorthy of Infosys, a client of Pymetrics.
This new method of using AI to aid in hiring decisions could prove critical in a world where machines and automation are rapidly making certain jobs redundant, while opening up doors for people to take on different, possibly more rewarding roles. In this way, AI-assisted skills assessment has the potential to completely reshape the workforce of tomorrow.
Weekly Feature: Toeslagenaffaire: AI scandal in Netherlands ruins lives
Chermaine Leysner was one of the tens of thousands of Dutch citizens whose lives were ruined by the “toeslagenaffaire,” the child care benefits scandal. One day in 2012, she received a letter from the Dutch tax authority demanding she repay over €100,000 (~$105,000 USD) in child care allowance dating back to 2008. A student and a mother of three children at the time, Leysner spiraled into depression and burnout caused by the stress of the tax bill.
“I thought, ‘Don’t worry, this is a big mistake.’ But it wasn’t a mistake. It was the start of something big,” Leysner said.
What was the issue?
Dutch tax authorities had implemented a self-learning algorithm to generate risk profiles to identify child care benefits fraud. Authorities relied too heavily on the system’s risk indicators and punished families over a mere suspicion of fraud.
As a result:
Tens of thousands of families, particularly ethnic minorities and those with lower incomes, were pushed into poverty
Some victims died by suicide
More than a thousand children were taken into foster care
Taking a closer look at the algorithm
Dutch tax authorities developed the criteria for the risk profile factors. These indicators include:
Dual nationality
Low income
A "non-Western" appearance
In addition to these criteria, authorities also kept a secret blacklist that tracked both credible and unsubstantiated signs of fraud. Citizens had no way to learn why they were on the list or to defend themselves.
Furthermore, a report from the Dutch Parliament found institutional bias within the tax agency and concluded that tax authorities had withheld information.
“There was a total lack of checks and balances within every organization of making sure people realize what was going on,” said Pieter Omtzigt, an independent member of the Dutch parliament who helped uncover the child care benefits scandal.
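To see why profiling on criteria like those above is so dangerous, consider a toy sketch. This is not the actual Dutch system, whose internals were secret; the weights and threshold are invented purely for illustration. The point is structural: when identity attributes are model inputs, two families with identical financial behavior can receive very different risk scores simply because of who they are.

```python
def risk_score(dual_nationality: bool, low_income: bool, non_western: bool) -> int:
    """Toy linear risk score over the reported profile criteria.

    Weights are hypothetical; they exist only to show how identity-based
    features mechanically inflate scores for already-marginalized groups.
    """
    score = 0
    score += 4 if dual_nationality else 0
    score += 3 if low_income else 0
    score += 3 if non_western else 0
    return score  # 0 (lowest risk) to 10 (highest risk)


# Two families with identical financial behavior, different identities:
family_a = risk_score(dual_nationality=True, low_income=True, non_western=True)
family_b = risk_score(dual_nationality=False, low_income=False, non_western=False)

FRAUD_INVESTIGATION_THRESHOLD = 5  # hypothetical cutoff
print(family_a, family_a >= FRAUD_INVESTIGATION_THRESHOLD)  # 10 True
print(family_b, family_b >= FRAUD_INVESTIGATION_THRESHOLD)  # 0 False
```

Once a score like this triggers an investigation, and investigators treat the score itself as evidence, the loop closes: the people the model flags are the people who get punished, regardless of what they actually did.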
What does this mean for the future?
Omtzigt said he worries that the Dutch government hasn’t “taken even vaguely enough preventive measures” to prevent a future scandal.
As governments around the world increasingly turn to algorithms and AI-automated systems, the Dutch scandal, Toeslagenaffaire, demonstrates how, without the proper measures and safeguards, automated systems can have devastating consequences.
The European Union plans to introduce the AI Act, an ambitious and sweeping law that seeks to restrict the use of “high-risk” AI systems and ban certain “unacceptable” uses. To learn more about the AI Act, here’s a guide from MIT Tech Review.
Love HTC? ❤️
Follow RAISO (our parent org) on social media for more updates, discussions, and events!
Instagram: @Raisogram
Twitter: @raisotweets
RAISO Website: https://www.raiso.org
Written by Molly Pribble, Michelle Zhang, and Ian Lei.
Edited by Dwayne Morgan.