As part of our ‘Summer Learning Series’, we’ve chosen to feature this guest post from Richard Dent, PhD Candidate at Cambridge University, focusing on AI Assistants and their place in our homes and society more generally. RE•WORK encourages emerging talent in the field to join the global summits, and this September we’re offering a complimentary pass to promising students working in the field. Submit a blog post discussing AI, Deep Learning, Machine Learning or a related topic to enter. Email Yaz at email@example.com for more information.
In relation to the title of this blog post: if you are a computer nerd like me, the answer is likely yes. For most people, however, it is likely no. My research at Cambridge aims to change how we view virtual assistants by understanding the social norms and culture of the digital world. My PhD examines how people interact socially online and how they view others they will never meet. Online interaction and communication are not mechanical processes, despite depending on machines. Yet our current virtual assistants feel exactly that: mechanical.
They can order pizza now
Siri, Alexa, Google Assistant and Microsoft’s Cortana were created to retrieve information and perform some basic functions. But judging by the general feedback, they are not considered essential and certainly won’t be the life of the party any time soon. Cortana recently scored an embarrassingly low ‘IQ’, whilst Google’s latest assistant can now order you pizza or book a restaurant table without the human on the other end of the line knowing they are talking to a computer. This begins a new chapter in robot and human relations, opening a large can of worms for researchers and regulators to argue over. Whilst we debate the ethical, sociological and legal ramifications of the oncoming robot revolution, we can look to science fiction for ideas about what the personality of an advanced virtual assistant might be like.
Christopher Nolan’s ‘humour setting’ robots
In the film Interstellar, two robots help a human crew of space travellers with a number of technical tasks. One of these humans is Dr Brand, played by Anne Hathaway. She explains to another character, Cooper, how their robot TARS has specific personality settings: “They gave him a humour setting so he’d fit in better. He thinks it relaxes us.” However, TARS and his brother robot CASE don’t always relax the crew with their humour. Throughout the film, each time one of them attempts a joke, they end up insulting, even threatening, the human companions they are supposed to serve.
For example, TARS says to Cooper, played by Matthew McConaughey: “All here, Mr Cooper. Plenty of slaves for my robot colony.” Hilarious. Later TARS attempts to bond with Cooper: “I have a cue light I can turn on when I’m joking, if you like. You can use it to find your way back to the ship after I blow you out the airlock.” No one laughs. Whilst CASE does raise one laugh, clearly neither of the robots was programmed with the humour of Eddie Izzard or Monty Python. In this case the Nolan brothers use the robots’ failed attempts at humour to provide comic relief in an otherwise very serious film. In the fictional world of Interstellar, the faulty humour programming would be the responsibility of the coder whose taste in humour went into TARS and CASE. Is the reason Siri and Alexa wouldn’t make great party companions to do with the technology, or with the programmers themselves? Are the programmers at Apple and Amazon lacking a sense of humour?
The philosophy of bots
Writing in A Networked Self and Platforms, Stories, Connections (edited by Zizi Papacharissi), Woolley, Shorey and Howard (2018) report on research from over 40 interviews with bot programmers, highlighting: “our interviewees made it clear that social bots function well beyond the purview of simple automated messaging, that they have a unique capacity for complex interaction… They are not human, but they are invested with powers that can be directed diversely and unpredictably” (2018, p. 63). Woolley et al.’s research suggests that when making bots, programmers add a ‘philosophy’ to the code, which in part determines how the bots will interact with humans. We see a depiction of a ‘programmed’ philosophy in the science fiction film Her by Spike Jonze and, more recently, in Blade Runner 2049. The virtual assistants in Her and Blade Runner are companions with sophisticated personalities, rather than servants. They form real bonds and loyalties with their human (or replicant) counterparts through deeper, more meaningful social interactions. But is this just science fiction? Can this really be coded now? The answer is yes.
Relationship building in social bots has begun, with projects like Ally Chatbot and Olly, which its makers claim is the world’s first robot with personality. Amazon’s Alexa responds more politely to children who speak to it politely. Google has already demonstrated convincing mimicry of human speech. Now programmers have to recognise humans’ social needs, but what philosophy guides their idea of those needs?
Virtual companions are coming to a digital device near you, and they’ll be funny and caring as well as informative.
My research hopes to help create a virtual companion that we want to interact with, rather than one we need to. For that to work, our digital companions need to reflect the social interactions that feel natural to us. This will depend on the user’s individual personality, culture, personal taste in music or art, sense of humour (or lack of it) and a number of other factors. In short, we cannot program social bots guaranteed to make people laugh, but we can teach them to adapt to our personalities.
But where do we start? This is where the writers of Interstellar might have been right about something: humour. When Siri first came out there was much Internet chatter about its response to the statement ‘I need to hide a body’, to which Siri would reply: ‘What kind of place are you looking for? Swamps, dumps, metal foundries, reservoirs or mines?’ This went viral, whilst Siri’s other, more mundane functions weren’t shared so widely. Apple missed a trick by ignoring this moment; Siri was never this funny again. If Siri were consistently funny, I would invite her, or him, to a dinner party. I think most people would.
Replacing the need for human interactions
There will be a negative side to these developments. Consider the rising numbers of hikikomori, the reclusive adolescents and adults in Japan who rarely leave their homes. The last estimates put the number at over 500,000, many of them using the Internet as a substitute for face-to-face human interaction. We might see a situation like that in the film Her, where the protagonist falls in love with his robot assistant at the expense of his human relationships. We are on the verge of creating virtual companions so entertaining that, like the hikikomori, we need never leave the house.
Ultimately, it will come back to the philosophy of the programmer, or of the marketing department at Amazon. Hopefully actual philosophers and sociologists will get the chance to contribute to how this technology is developed. My PhD research aims to understand the boundaries and social norms of online interaction. By understanding these better, and how they are formed, I hope we can program virtual companions that are more social, more useful and funnier.