This piece is a summary of a recent RAISO education event held at Northwestern University. During this event, RAISO explored the topic of AI personalization and its potential impacts on consent, influence, and free will.
During our discussion this past Wednesday, RAISO delved into the personalization of AI tools, considering the point at which machine assistance shades into machine administration. The world of AI is growing, and news headlines about data privacy concerns and the societal impacts of technology are following suit. As worries rise, we are led to ask ourselves: to what extent should technology hold power over our lives?
Our discussion began with a variation of the question in the poll below:
Transparency & Consent
A large majority of participants willingly relinquished their “free will” when given the opportunity to be manipulated by an AI tool for a productive task, yet skepticism about the transparency of the program’s intentions was a major concern.
Knowing what information is being collected and how it is being used preserves a higher sense of free will: you can only willingly allow yourself to be influenced by something you fully understand. A major problem, however, is that AI relies heavily on data to become more effective. The more data a program has to learn from, the better it performs. Consequently, as we limit the amount of data it can use, we sacrifice what the software is able to do for us.
Another issue with this idea of manipulation is how the software would distinguish shifts in consent from its users. Would it give up control the moment someone finds it intrusive, or would it influence us so that our consent never changes? This uncertainty about how the technology could be designed makes the idea of consent much more complex. Ideally, a system could be created that changes behavior for the better while the user remains in control of specific system operations, allowing for more transparency.
Potential Impact
Many students in the discussion were wary of both the AI tool itself and its human developers. After all, how can we trust software designed on the premise of our own minds? With AI algorithms aiming to replicate human thought, manipulation of users becomes a central concern. There are already countless examples of less sophisticated programs capable of implicit manipulation, such as YouTube, which evaluates the content you enjoy and continually suggests more of the same. The effects would be significantly greater in the hands of an AI tool, shifting from “personalized content” to “personalized reality” and leading users “down the gradient.”
“The way AI interacts is not introducing any new phenomena, it is simply taking advantage of an existing one,” said Sachin Srivastava, RAISO’s education program chair.
As the discussion unfolded, we began to question even human nature itself, as some students acknowledged that people are intrinsically drawn to seek out content and ways of thinking that confirm their own opinions. Adding AI into the equation allows this “personalized reality” to be reached with ease. The monetization of faults in human nature certainly isn’t a new concept, but the impact that AI would have on it is something important to consider.
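The feedback loop behind this drift can be made concrete with a toy simulation. The sketch below is purely illustrative, not how YouTube or any real platform works: content is reduced to points on a one-dimensional “topic” axis, the hypothetical `recommend` function suggests items closest to the user’s last engagement, and a mild human bias (always clicking the most extreme suggestion offered) is enough to pull the user steadily in one direction.

```python
def recommend(catalog, last_watched, k=3):
    """Suggest the k items closest in topic to the user's last engagement."""
    return sorted(catalog, key=lambda item: abs(item - last_watched))[:k]

catalog = list(range(100))   # stand-in content, topics 0..99 on one axis
position = 50                # the user starts at a neutral topic

trail = [position]
for _ in range(10):
    suggestions = recommend(catalog, position)
    # A small, consistent bias: the user clicks the most extreme
    # of the few suggestions the system happened to surface.
    position = max(suggestions)
    trail.append(position)

print(trail)  # the user's position drifts one topic per round
```

Neither party takes a large step: the recommender only ever offers nearby content, and the user only ever picks among what is offered. Yet the composition of the two, repeated, is exactly the “down the gradient” dynamic raised in the discussion.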
Written by Dwayne Morgan
Edited by Molly Pribble