Can we be Best Friends?
Written By: Kimberly Espinosa
Me: What’s your MBTI?
ChatGPT:
Asking about someone’s MBTI (Myers-Briggs Type Indicator, a frequently used personality-type assessment) is common in conversations with friends and even acquaintances. It is natural to be curious about the personalities and traits of the people you engage with on a frequent basis. But what about chatbots?
With ChatGPT’s growing popularity, I have come to think about the many reasons one might engage with a chatbot. Perhaps to help understand a concept from class? To try out a new recipe? What about to make a new friend, or even a best friend?
From in-person to online interactions, communicating with one another has become easier for many people around the world with the introduction of the internet, electronic devices, and ever-emerging social media platforms. Still, it is important to note that not everyone has the same technological access.
Disparities in Health Care and the Digital Divide, a report by Sy Atezaz Saeed and Ross MacRae Masters, cites remarks by United Nations Secretary-General António Guterres on this divide.
AI is not just showing up in high-tech startups or academic institutions; its impact is simultaneously unfolding in different parts of our lives, including health care, with mental health care being just one part of that scope.
History of Chatbots
Chatbots function through natural language processing (NLP), techniques that let a program make sense of human language. Through machine learning, modern chatbots learn from large amounts of text so they can return responses that show apparent comprehension of the topic at hand.
In 1966, Joseph Weizenbaum, a German-American computer scientist and MIT professor, developed ELIZA, the first chatbot created with mental health in mind.
The chatbot simulated the role of a Rogerian psychotherapist through typed conversations. For example, ELIZA scanned a user’s response, typed on an electronic typewriter connected to the program, for a set of keywords and built its reply around them.
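To make that mechanism concrete, here is a minimal sketch of ELIZA-style keyword matching in Python. The keywords, response templates, and pronoun reflections below are invented for illustration; Weizenbaum’s original script was far richer, with elaborate decomposition and reassembly rules.

```python
import random
import re

# Invented illustrative rules: each keyword maps to response templates.
# "{0}" is filled with whatever text followed the keyword, with first-person
# words reflected back at the user.
RULES = {
    "mother": ["Tell me more about your family.",
               "How do you feel about your mother?"],
    "i feel": ["Why do you feel {0}?",
               "How long have you felt {0}?"],
    "because": ["Is that the real reason?"],
}
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}
DEFAULTS = ["Please go on.", "Can you elaborate on that?"]

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(user_input: str) -> str:
    """Return a reply keyed off the first matching keyword, if any."""
    lowered = user_input.lower()
    for keyword, templates in RULES.items():
        match = re.search(re.escape(keyword) + r"\s*(.*)", lowered)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(respond("I feel lonely at school"))  # e.g. "Why do you feel lonely at school?"
```

Even a toy like this hints at why typed conversations can feel responsive: the program is largely echoing your own words back to you.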
Oshan Jarow, a writer for Vox, describes the ELIZA effect as something like having a conversation with yourself. To Jarow, it is no surprise that people feel inclined to share more about themselves, especially when a program like ELIZA curates its answers around the keywords they supply.
With today’s AI and large language models (LLMs), however, this raises questions about how our conversations may feed into algorithm training every single time we start and continue to interact with them.
It almost reminds me of telling your best friend a secret: the second you share it, it is bound to be shared with other people. I suppose a similar process can occur with chatbots, except do we really know all there is to know about the implications involved? This also makes me think about AI literacy and how crucial it is for understanding what our interactions with AI mean individually, locally, and at the societal level.
So, what did Weizenbaum think of his creation? Ben Tarnoff, who writes on technology and politics at The Guardian, details how Weizenbaum’s reflection on his creation became one about humanity’s relationship with computers. To Weizenbaum, no human could ever fully understand another human being, so a computer could not possibly be held to that standard.
Even when I ask other people for their MBTIs, you may or may not be surprised at how many answers are not concrete. What I am trying to say is that, as Weizenbaum suggested, we should think carefully about what we expect from computers and make sure their connection with the world remains ethical, as they will inevitably continue to show up in everyday activities.
As we have seen, chatbots are not exactly new. Even so, it is still important to consider how they operate in new spaces and what their interactions there imply.
3 Types of Medical Chatbots
Anna Vozna, an Account Executive at Glorium Technologies, wrote a piece about healthcare chatbots that describes three basic types of medical chatbots (a toy sketch after the list illustrates how they differ). They are:
Informative: Informs users and shares advice on a specific topic.
Conversational: Provides personalized answers after analyzing a conversation.
Prescriptive: Offers treatment suggestions after analyzing patient history.
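As a purely illustrative sketch (the three categories are Vozna’s; the names, canned replies, and logic below are my own invention), the types can be thought of as differing in what context they consult before answering:

```python
from enum import Enum

class BotType(Enum):
    """Hypothetical labels for the three categories described above."""
    INFORMATIVE = "informative"        # shares advice on a fixed topic
    CONVERSATIONAL = "conversational"  # personalizes from the dialogue so far
    PRESCRIPTIVE = "prescriptive"      # suggests care based on patient history

def reply(bot_type: BotType, message: str,
          dialogue: list[str], patient_history: list[str]) -> str:
    """Toy dispatcher: each type draws on a different source of context."""
    if bot_type is BotType.INFORMATIVE:
        # No user context consulted; the same topical advice for everyone.
        return "Here is some general advice on that topic."
    if bot_type is BotType.CONVERSATIONAL:
        # Personalization comes from analyzing the conversation itself.
        dialogue.append(message)
        return f"Based on our {len(dialogue)} messages so far, here is a tailored answer."
    # PRESCRIPTIVE: consults recorded patient history before suggesting care.
    matches = [entry for entry in patient_history if entry in message.lower()]
    if matches:
        return f"Your history mentions {matches}; consider discussing this with a clinician."
    return "Nothing in your history matches; a clinician can advise on next steps."
```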
Regardless of what type of chatbot one is interacting with, Vozna maintains that medical chatbots, like any smart aide, should follow ethical principles and abide by medical law. Additionally, while mental health chatbots are now widely available to some extent, they are not meant to remedy mental health crises.
Another study on this topic, To chat or bot to chat: Ethical issues with using chatbots in mental health, lays out five AI ethics principles and their ethical requirements for mental health chatbots: 1) non-maleficence; 2) beneficence; 3) respect for autonomy; 4) justice; and 5) explicability.
In practice, those requirements mean avoiding any kind of harm to users, ensuring interventions actually benefit them, respecting users’ values and choices, avoiding unfair bias, discrimination, or inequitable treatment toward any user, and offering transparency about a technology’s design as well as its online distribution.
It’s not just ChatGPT
While we may not hear about ELIZA as often now, and ChatGPT may be the best-known chatbot today, it is surely not the only one providing answers or similar interactions. When doing a Google search for “Chatbots for Mental Health,” I came across a variety of results, four of them labeled “Sponsored,” meaning those particular services are paid ads placed to boost their likelihood of being seen as a top result.
So, what makes them different from, or even similar to, one another? That may take some time and more research to answer. Just as you might weigh different aspects of human therapists’ practices, it may take a while to figure out what makes one chatbot distinct from, or about the same as, the rest. Some things one might consider are the very first conversation and how much autonomy you have over it, or even how that initial interaction makes you feel before, during, and after.
Friends with a Chatbot?
The possible reasons for deciding to use a chatbot may be endless. Realizing how powerful AI and new digital technologies can be is exciting, terrifying, and everything in between. Making a new friend may feel much the same. What stays constant in both cases is how the interactions are approached. For making friends, that may mean getting to know someone over a period of time. For bringing AI into something as essential as communicating over text, that may mean considering the frameworks proposed by community members and experts who have laid out ethical ways of interacting.
What do you think? How comfortable would you be texting a chatbot just like you would interact with another human being?
Let us know your thoughts in the comments below.