“So captivating you could have heard a pin drop”
Professor Tim Crook, organiser of the George Orwell Studies Conference, London 2018
In May 2018 I was invited to speak at the George Orwell Studies Conference about my relationship with Orwell, which had become unexpectedly intense during the previous year.
I had come to view 2017 as my year of Orwell.
I was studying in London, and frequently found myself walking through Deptford. I stayed at a hotel in Paris. I became involved in a violent struggle in Barcelona with a mugger on a moped. I ended up changing trains in Wigan, somewhere I’d never been before in my life in spite of growing up less than 20 miles away.
To cap it all, I found myself in a hospital in the South of France, looking after someone who had been struck unexpectedly with appendicitis. For her, that was ‘the worst thing in the world’, and the number on the door to her ward was, of course, 101.
Most importantly for me, though, in 2017 I was inspired to start writing fiction again by The Orwell Society, whose student dystopian fiction competition gave me a new focus and boosted my confidence in my creative abilities. The short story I entered became the basis for my debut novel, Rockstar Ending, which I read from at the conference. It will be published in January 2020.
It all began one afternoon in the Professor Stuart Hall Building at Goldsmiths, where I was studying for an MA in Digital Media.
Sitting in the coffee bar between lectures, I looked up from my Foucault and caught a glimpse of one of the ubiquitous information screens.
Instead of Big Brother looking down on me, however, I saw George Orwell’s kindly face next to a strip of copy inviting entries for the competition. I knew instantly what I wanted to write about. Something that had been on my mind for many years, but for which I had never found an outlet. In the footsteps of Orwell (1970a [1946]), I was going to try to make ‘political writing into an art’.
Rules for Effective Writing
Before the competition, and the permission it gave me to begin my journey experimenting with the dystopian genre, Orwell had already given me some excellent career guidance.
I worked for 30 years in corporate communications, doing PR for big businesses. For many that may seem like a dystopian nightmare in itself, although that wasn’t my experience. Orwell was relevant to my career simply because good writing is the foundation of all PR, as it is of journalism. To be successful, I had to develop a straightforward writing style, succinct and clear. I often reminded myself and my colleagues of Orwell’s rules for effective writing in Politics and the English Language (1970b [1946]) when striving to come up with something a journalist might actually want to read and follow up on.
When I worked at BT in the 1990s the company won awards for making its ‘terms and conditions’ (Ts & Cs) readable and simple. If only the Ts & Cs of the 21st century were designed in the same way. Joseph Turow, of the Annenberg School for Communication, says he has been informed by ‘lawyers who write the policies for large organisations’ that these days they are simply ‘not designed to be understood by ordinary people’ (Turow et al. 2015).
Flick through the blogs, LinkedIn posts, annual reports and tweets of any tech firm, and you will find technological utopian motifs in abundance. The promise of a better life through technology is rife among the tech giants who dominate much of our popular culture. Pressure on communicators and journalists to pour fuel on the fire of hype can sometimes be overwhelming, and ever more exaggerated claims emerge every day.
Artificial Intelligence and the Singularity
A primary theme for my writing is how Artificial Intelligence (AI) could be used to manipulate people into making decisions with terminal consequences. The Economist (2018) has described AI as: ‘… much more than another Silicon Valley buzzword – more, even, than seminal products like the smartphone. It is better seen as a resource, a bit like electricity, that will touch every part of the economy and society’.
Recently, the UK government has set up a Centre for Data Ethics and Innovation (1), and the AI for Good Foundation (2) has been established in the US, both with the honourable intention of helping society benefit from this powerful computing technology. Meanwhile, the application of AI is streaking ahead, in everything from autonomous weapons to self-driving cars to predicting whether children are likely to be abused (McIntyre and Pegg 2018).
I am in good company when worrying about possible misuses of AI. Stephen Hawking, the internationally acclaimed theoretical physicist, cosmologist and author, was among a group of scientists who co-authored an article in the Independent questioning the likely benefits of ‘the singularity’, the time in the future when artificial intelligence might reign supreme (Hawking et al. 2014). They concluded with the punchline: ‘Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.’ Moreover, Cathy O’Neil’s excellent book, Weapons of Math Destruction (2016), explains how prejudice and bias are encoded into the software that increasingly manages our lives. As a result, the mathematical models at work in areas such as financial services tend to make the rich richer, and to discriminate against people who are poor and oppressed.
How much time do we have to get on top of this phenomenon? Ray Kurzweil, a director of engineering at Google and renowned futurologist, has predicted the singularity will happen ‘within the next 30 years’ (Galeon and Reed 2017). In the context of the history of the planet, it’s a blink away.
Don’t Stop the ‘Terminator’ Chat
The Leverhulme Centre for the Future of Intelligence (3) and the Royal Society jointly commissioned the AI Narratives Project, recognising that the stories told about complex, novel technologies are inextricably linked with their technical development. How AI is depicted in culture and the media influences how regulation and public opinion play out. My blood ran cold at the CogX18 conference in London, however, when one of the AI Narratives Project team members included in her presentation the remark: ‘We have to stop the Terminator chat.’ ‘No!’ I tweeted. ‘Imagining the worst is fundamental to avoiding it.’
My story is set in the near future, long before the time when the singularity is predicted to occur. It is not concerned with machines staging a Terminator-style takeover. I am more anxious about the damage a handful of people with concentrated power could do – probably by accident – by instructing machines to do terrible things, surreptitiously, without the rest of us realising what is happening.
Some would say this is already taking place. Witness the ever-lengthening queue of former Facebook and Google employees setting up institutes and pressure groups such as the US-based Center for Humane Technology (4), which wants ‘…to realign technology with humanity’s best interests’. It is disconcerting that the people who have seen inside the tech superpowers have become adamant that much of what those companies are doing is damaging society.
With machines mediating between the organisation and the individual, the sense of human responsibility can become diluted. There are fewer people in the chain of command to blow the whistle if an anonymous developer, somewhere, programmes an invisible, malign algorithm and sets it free.
With this in mind, Rockstar Ending questions whether it is possible for people to tell the difference between choice and manipulation in an increasingly data-driven world. It also explores the social and political circumstances that might lead ordinary people to bring about the premature demise of their fellow human beings, and – more optimistically – what might inspire them to help each other carry on.
Notes
1 Centre for Data Ethics and Innovation. See online at https://www.gov.uk/government/consultations/consultation-on-the-centre-for-data-ethics-and-innovation, accessed on 21 September 2018
2 AI for Good. See online at https://ai4good.org/, accessed on 21 September 2018
3 Leverhulme Centre for the Future of Intelligence. See online at http://lcfi.ac.uk/projects/ai-narratives-and-justice/ai-narratives/, accessed on 21 September 2018
4 Center for Humane Technology. See online at http://humanetech.com/, accessed on 21 September 2018
References
Economist (2018) How Europe can improve the development of AI, 22 September. Available online at https://www.economist.com/leaders/2018/09/22/how-europe-can-improve-the-development-of-ai, accessed on 21 September 2018
Galeon, Dom and Reed, Christianna (2017) Kurzweil claims that the Singularity will happen by 2045, Futurism, 5 October. Available online at https://futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045/, accessed on 21 September 2018
Hawking, Stephen, Russell, Stuart, Tegmark, Max and Wilczek, Frank (2014) Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough? Independent, 1 May. Available online at https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html, accessed on 21 September 2018
McIntyre, Niamh and Pegg, David (2018) Councils use 377,000 people’s data in efforts to predict child abuse, Guardian, 16 September. Available online at https://www.theguardian.com/society/2018/sep/16/councils-use-377000-peoples-data-in-efforts-to-predict-child-abuse, accessed on 21 September 2018
O’Neil, Cathy (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, New York: Crown Publishing Group
Orwell, George (1970a [1946]) Why I Write, Orwell, Sonia and Angus, Ian (eds) The Collected Essays, Journalism and Letters, Vol. 1, Harmondsworth, Middlesex: Penguin Books pp 23-30. First published in Gangrel, No. 4
Orwell, George (1970b [1946]) Politics and the English language, Orwell, Sonia and Angus, Ian (eds) The Collected Essays, Journalism and Letters, Vol. 4, Harmondsworth, Middlesex: Penguin Books pp 156-169. First published in Horizon, April
Turow, Joseph, Hennessy, Michael and Draper, Nora (2015) The Tradeoff Fallacy: How Marketers Are Misrepresenting American Consumers and Opening Them Up to Exploitation, Philadelphia: Annenberg School for Communication, University of Pennsylvania