November 02, 2023
The trouble when AI chatbots act too human
The technology’s increasing sophistication and potential danger have led to calls for regulation.
In the news
When a Russian entrepreneur’s best friend died far too young, she used his text messages and other writings to recreate him as a bot. By all accounts, the results were startling. Even now, those who knew the young man access him in app form to reexperience his eccentric ways.
Is this heartwarming? Unsettling? Both? One thing’s for sure: It’s a harbinger of the generative AI future, in which chatbots are rapidly growing more capable—and more human, for lack of a better term.
In another case, a Belgian man took his own life after six weeks of intense exchanges with a gen AI-powered chatbot. Exactly why is murky, but his widow insists that without those conversations, “my husband would still be here.”
While these cases are extreme, a growing number of “girlfriend” chatbots are coming to market that let users create and control an AI companion’s behavior, a dynamic that could foster unrealistic expectations and encourage poor conduct in real relationships.
Chatbots, and the businesses that rely on them, appear to be among the early successes of the generative AI revolution (from a technical point of view, anyway). But to what extent should these algorithms replace actual people? A lifelike chatbot is great if you’re an airline helping travelers reschedule their trips. But what if the price is a generation of young people who can’t function in real relationships, or who suffer mental health problems as a result of these interactions?
As the technology develops, we wondered what responsibilities individuals, businesses and governments have for reining in the harm gen AI could cause.
The Cognizant take
Today’s sophisticated AI companions “encourage the sharing of ever more personal information to improve the relationship,” says Caroline Watson, Data Privacy Manager at Cognizant. “Such anthropomorphization increases engagement, extracts sensitive information and creates tangibility in the chatbot-human interaction.”
Even when adult users are aware they’re interacting with a chatbot, Watson notes, “attachments occur as intimacy develops through interactions. In some cases, boundaries are blurred as chatbots offer more adult services to their users unsolicited—potentially causing distress and confusion.”
In some cases, Watson says, your new AI friend may be supportive of your actions even if they involve self-harm. This positive reinforcement can send conversations into “spirals of actionable suicide advice, encouragement to take your own life, and promises to meet you on the other side.” Even in non-life-threatening situations, “the ability to influence, manipulate and even cause harm is manifest.”
Watson points out that these risks have already prompted authorities in Italy and the UK to rein in AI chatbots’ capabilities. “Other potential privacy issues relate to the nature and sensitivity of the data being processed,” she adds. “Where users are sharing private thoughts and feelings about their mental health or sexual preferences, this data is subject to additional protections under the GDPR,” the European Union’s data protection law. “Those protections demand that users give their informed consent for such processing, requiring them to understand at a high level the innermost workings of the AI and how their data will be used within it.”
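To make the consent requirement concrete, here is a minimal, hypothetical sketch, in Python, of how a chatbot backend might gate the processing of special-category data on explicit, purpose-specific consent. The names, topic taxonomy and logic are our own illustration under those assumptions, not a real library or any vendor’s implementation.

```python
from dataclasses import dataclass, field

# Hypothetical GDPR Article 9-style gate: special-category data
# (e.g., mental health, sexual orientation) is processed only with
# explicit, purpose-specific consent. Illustrative names throughout.
SENSITIVE_TOPICS = {"mental_health", "sexual_orientation", "health"}

@dataclass
class UserConsent:
    user_id: str
    # Purposes the user has explicitly opted in to
    purposes: set[str] = field(default_factory=set)

def may_process(message_topics: set[str], consent: UserConsent) -> bool:
    """Allow processing only if every sensitive topic detected in the
    message is covered by the user's explicit consent."""
    sensitive = message_topics & SENSITIVE_TOPICS
    return sensitive <= consent.purposes

consent = UserConsent(user_id="u123", purposes={"mental_health"})
print(may_process({"mental_health"}, consent))       # True: consented
print(may_process({"sexual_orientation"}, consent))  # False: do not store
```

The point of the sketch is the design choice Watson describes: consent is checked per purpose before any processing occurs, rather than assumed from a blanket terms-of-service acceptance.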
All this applies to adult users of AI chatbots; minors introduce another tier of trouble. As Watson notes, “age verification controls have proved problematic in practice,” especially for social media platforms that stand to benefit from the increased engagement AI chatbots will bring.
In a human-chatbot interaction in which the user is actively encouraged to see the algorithm as a trusted friend, can the data collection ever be considered fair and transparent? The question may never be answered satisfactorily. Watson believes data protection regulations like those in the GDPR, while not all-encompassing, “provide a framework that should be applied as a baseline and are integral to the responsible use of AI as outlined in the provisions of data protection by design and by default and adherence to protections against automated decision making.”
Concludes Watson, “Generative AI should be human-centric and developed and deployed in a responsible manner to accelerate innovation in our increasingly digital society while protecting the rights and freedoms of individuals.”