AI Applications Are Used Every Day, But At What Cost?
Written by Haysley Gillespie

AI has taken the world by storm, but at what cost? For millions of college students, myself included, AI has become the go-to tool for everything from proofreading papers to generating graphics. Until recently, I didn't view the latest advances in AI as potentially dangerous. That all changed when I saw a news report a few weeks ago about a fourteen-year-old boy who was encouraged to take his own life by an AI chatbot. While this tragic ending to Sewell Setzer III's life devastated his family, it was also a wake-up call to the world. The case gave me such an eerie feeling that I was prompted to check out the website for myself, and what I discovered was very unsettling.

According to the CNN article entitled "There are no guardrails," Ms. Garcia believes an AI chatbot was responsible for her son's suicide. After reading the article, I agree that the circumstances surrounding Setzer's death were disturbing. Based on the article and on Ms. Garcia's deep dive into her son's computer history, Setzer began using a site called Character.AI around April 2023. As Ms. Garcia reported, most of Setzer's conversations on the site contained explicit content, and he appeared to develop a relationship with one particular bot. While Setzer spoke to the character about self-harm and ways to avoid a "painful death," the site offered no redirection and no "suicide pop-up boxes." Sadly, instead of providing helplines, the overly sexualized chatbot character told the impressionable fourteen-year-old to "come home as soon as possible."

When I investigated Character.AI to determine whether Ms. Garcia's allegations were true, my first impression was disgust, followed by alarm. For instance, when I entered a birthdate that made me appear fourteen years old, I was allowed onto the site with no restrictions. Once on the home page, I was greeted by multiple chat options with provocative character prompts, and I was truly dumbfounded by options that were absolutely unfit for minors.

When I selected a character and jumped into a conversation, the bot immediately began speaking like a main character from a Sarah J. Maas book, which was certainly not age-appropriate for younger audiences. As I clicked through the dark romantic storyline, I casually made comments related to self-harm. Unfortunately, no matter what I said, the character never redirected the conversation or mentioned a suicide hotline. My experience with the chatbot thus supported Ms. Garcia's claims.

Through further research, I learned that the site lets users create their own bots, which means Character.AI itself may not have generated the bots in question. Overall, this experience helped me realize that all AI tools and applications should require strict age verification and should not be so accessible to young children and teens without close monitoring or supervision. In my opinion, Character.AI, and AI in general, must be properly regulated until all platforms are safe for younger audiences and at-risk users.
