Would you let a chatbot mess with your mind?


Today, BBC News ran a piece on chatbots being used to help people with mental health problems after dealing with cancer. 

I’m glad the patient felt better. Alexa Jett had recovered from cancer a few years before a friend died of the disease, which triggered depression. She found the chatbot helpful:

“She pulled me out of that really dark place and I started functioning again.” 

However, my alarm bells began to ring when I saw that the patient called the chatbot a ‘she’. 

Viewing machines as human is a mistake, albeit an understandable one, especially when the bot in question is deliberately designed to have sympathetic characteristics. Scholar Joanna Bryson has written about this. There are feminist concerns, too, about the gendering of subservient or caring technologies, as Sarah Kember, among others, has argued.

Expert Peter Diamandis, quoted in the BBC News story, says virtual assistants will soon be everywhere, just like Jarvis, the fictional virtual assistant to superhero Tony Stark (Iron Man):

“…who will do our administrative tasks, such as read our emails or answer our phone calls; a Jarvis that will sense a depressed mood in our house and reverse it by playing our favourite movie, or a song it knows uplifts our spirits; a Jarvis that will study us 24/7 and learn us in many ways we don’t even know ourselves.”

How could a chatbot be dangerous? Surely, in the BBC News story, the bot is wholly benign?

Maybe it is. But there’s a good chance that the army of emotionally affective chatbots marching behind this one won’t be coming from such an innocent place. 

I would have liked to see this story ask what could happen if the same powerful technology were in the hands of an organisation that isn’t so altruistic. What if it wants to manipulate you in other ways? To sell you things you don’t need? To trigger addictions? To influence how you vote? Or to make you feel worse instead of better?

Regulation of emotional AI is, at best, in early draft stages.

Meanwhile, people are already activating – and being subjected to – this powerful technology in real life. 

Image by Pete Linforth from Pixabay
