Don’t fear the robot – Postcard from CogX18

Pepper Robot and Nicola Rossi

Opening the CogX18 Conference in London’s Tobacco Dock, Baroness Joanna Shields posed a killer question. “How do we get the Artificial Intelligence (AI) that ensures we, as a society, fix our biggest problems and provide well-being for all?”

It was a powerful start, and the speakers who followed rose to the challenge, displaying a phenomenal depth of knowledge in their search for answers. I wish there was space to list everyone here.

I camped out for most of the two days at the ethics stage, transfixed by the discussions, even though I didn’t always agree with what the panellists had to say.

As an aspiring dystopian novelist, I was unsettled by Dr Sarah Dillon’s assertion that “We need to stop the Terminator chat”. Her session centred on the AI Narratives Project, a joint initiative from the Leverhulme Centre for the Future of Intelligence and the Royal Society. The contributors were highly sensitised to negative depictions of AI, and yearned for counterbalancing positive stories in creative media. Without collectively imagining the worst, though, how can we avoid it?

Robots are like London buses. You wait ages for one, then suddenly three come along at once. They were out in force at CogX18.

Most people associate robots with the characters they have encountered in popular culture. I was terrified of the androids in early episodes of Doctor Who, but shared the affectionate feelings of many towards Star Wars’ R2-D2 and C-3PO. A superb panel of ethicists and robot designers explained that, when robots are deployed in the real world, humans have an irrepressible urge to anthropomorphise. We will even become attached to a paper bag with two dots on it resembling eyes. I was won over to the consensus that robots should never be built to deliberately mimic humans, unless it is made unquestionably clear that they are machines. There is too much scope for things to go wrong.

It is alarming, then, that in cyberspace audio and text chatbots are already merrily violating the impersonation principle. Julian Harris covered a lot of ground with his excellent quick-fire overview of the chatbot scene, including the ACE index for chatbot greatness, which rates them on how well they deal with Ambiguity, engage in Conversation, and handle Emotion.

Transparency principles are urgently needed to guarantee that people are always informed when they are in dialogue with a machine. The success of companion bots in Japan, which work on an emotional level, raises difficult questions about how easily users might be manipulated by humanoid or animal-shaped robots. Such devices could easily win over their users’ affections, but may not be programmed with their ultimate well-being in mind. Or, as the dustbin-like B9 would have put it in the TV series Lost in Space: “Danger, Will Robinson!”

Although Dr Kate Devlin from Goldsmiths had some unforgettable visuals in her presentation on sex robots (which could be summarised as ‘they will never catch on’), my prize for the best slide has to go to Jonnie Penn.

He opened with a picture of New York, focused on the green splendour of Central Park. Penn explained that the park had been the idea of the poet William Cullen Bryant, built at a unique moment in time, when the land was available, on the principle that it should benefit everyone.

That, he said, is how we should be approaching AI now. Who could disagree?

Photo © Nicola Rossi
