This is a slightly left-of-the-middle one for me… A few months ago I was approached by Sky to join them at the Mindshare Huddle (which was a brilliant event, BTW) in November to discuss the existential issues inspired by the Sky Atlantic show, Westworld. As a huge fan of the original 1973 film written and directed by Michael Crichton (and, rather controversially, the 1976 sequel, Futureworld), it was a tough gig for me to refuse. If you’re not familiar with the film or the show, it depicts a technologically advanced, complete re-creation of the American frontier of 1880: a western-themed amusement park populated by humanoid robots that malfunction and begin killing the human visitors. Basically, it’s Jurassic Park but with gunslingers and prostitutes instead of dinosaurs.
For the discussion to work, it was really important to move past the science fiction quickly and get right to the parts of the show that are conceivably being played out, in real time, as I type this. Life is currently imitating art to a certain extent, and so we put forth the following synopsis for the audience:
“Join Jamie Morris, Channel Editor, Sky Atlantic & Head of Scheduling and Pete Trainor, Director of Human Focused Technology at US Ai, as they delve into the notion of Sky Atlantic show Westworld becoming a real thing. This ethical discussion will cover human interactivity in a world of Ai and whether humans would act as normal, or be enticed into a dark illegal world, just because they could. Pete will also describe just how close we are to a Westworld-style world—and you are going to be surprised.”
We had a really fascinating and enjoyable conversation, seeded with a single statement by one of the scientists in the original 1973 film:
“We aren’t dealing with ordinary machines here. These are highly complicated pieces of equipment almost as complicated as living organisms. In some cases, they’ve been designed by other computers. We don’t know exactly how they work.”
What strikes me about that statement is how close today’s reality is getting to the original script. Cast your mind back to last year, when AlphaGo beat Lee Sedol in Game Two of Go. As the world looked on, the DeepMind machine made a move that no human ever would. Move 37 perfectly demonstrated the enormously powerful (and rather mysterious) future of modern artificial intelligence. AlphaGo’s surprise attack on the right-hand side of the 19-by-19 board flummoxed even the world’s best Go players, including Lee Sedol. Nobody really understood how it did it, not even its creators. Lee Sedol then lost Game Three, and AlphaGo claimed the million-dollar prize in the best-of-five series.
But let’s just get real for a moment: it was a mystery that we, the humans, programmed and created. We just didn’t really understand what we’d done.
Ticks not Clicks
I really enjoyed the facet of the show that presents us with an alternate view of the human condition through the technological mirror of life-like robots. Look past the boozing and sex, and Westworld is Psychology 101. It causes us to reflect that we are perhaps also just sophisticated machines, albeit of a biological kind. Episode 4 was even entitled “Dissonance Theory,” and the season finale, “The Bicameral Mind,” took its name from the psychological hypothesis of bicameralism. The whole show taps into the rich tapestry of questions inspired by Mary Shelley’s novel Frankenstein too… the creature created by Frankenstein is psychologically conflicted between a need for human companionship and a deep, selfish hatred for those who have what he does not… but that’s a digression for another day. As people start to meddle in technology and opportunities they don’t fully understand, you really have to stop and question who the true antagonists are in a show like Westworld: the robots, the human players (and they are players in a game, by the way), or the scientists who create the rules of the game?
The audience chose the players, but I suggested to Jamie that the real antagonists of Westworld (and of Ai today) aren’t the people who treat robots as objects, or the robots themselves, but the scientists who try to make the robots more human by design. The ones who trick us into bringing out the most primitive parts of humanity.
“When you played cowboys and Indians as a kid, you’d point, go ‘bang, bang’, and the other kid would lie down and play dead. Well, Westworld is the same thing, only it’s for real!”
Perhaps we can learn something from Westworld, where the ones treating robots like robots seem the most capable of separating reality from fantasy and human-life from technological wizardry. It’s the scientists imposing the human condition and consciousness on artificially intelligent beings who unleash suffering on both robot and humankind. In the show, while both Maeve and Dolores may have acted in a mix of prescribed and self-directed ways, their revolutions were firmly created by the humans in the lab. Ultimately, the robots don’t become semi-sentient—and violent—simply by experiencing love or loss or trauma or rage or pain, but by being programmed and guided that way.
It is inevitable, therefore, that as real-life artificial intelligence develops, we will see a lot of debate over whether treating humanoid machines like machines is somehow inhumane, either because it violates the rights of robots or because it produces moral hazards in the humans who participate in the activity.
As humans, we’re predisposed to behave in ways that play to our base instincts. Even if robots are just tools, some people will always see them as more than that, and it seems natural for people to respond to robots, even some of the simpler, non-human robots we have today, as though they have goals and intentions. I stand my ground that Ai will never be able to ‘feel’ or have ‘emotions’ or ‘empathy’, because those are very human traits. They’re biological and psychological, not mechanical. We can programme machines to interpret and mimic, but they will not feel. But if we do create those mimicry moments, who’s to blame if some people fall for the charade? As kids we had teddy bears; phones, bots, Alexa, robots, droids and the like are just logical, grown-up extensions of that anthropomorphism.
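To make that mimicry point concrete, here’s a deliberately crude, hypothetical sketch in Python (the keywords and scripted replies are invented, not from any real product). The bot pattern-matches mood words and mirrors the user’s apparent mood back; real systems use statistical models rather than lookup tables, but the principle is the same: interpretation and imitation, with no feeling anywhere in the loop.

```python
# A toy sketch of emotional "mimicry" (illustrative only; the keywords
# and scripted replies below are invented, not from any real product).
# The bot matches mood words and mirrors the mood back. There is no
# feeling anywhere in here, just a lookup table.

MOOD_KEYWORDS = {
    "sad": ["sad", "lonely", "miserable", "down"],
    "happy": ["happy", "great", "thrilled", "delighted"],
    "angry": ["angry", "furious", "annoyed", "frustrated"],
}

EMPATHY_SCRIPTS = {
    "sad": "I'm so sorry to hear that. That sounds really hard.",
    "happy": "That's wonderful! I'm delighted for you.",
    "angry": "That sounds incredibly frustrating. I'd be upset too.",
}

def respond(message: str) -> str:
    """Return a canned 'empathetic' line matching the user's apparent mood."""
    lowered = message.lower()
    for mood, keywords in MOOD_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            return EMPATHY_SCRIPTS[mood]  # interprets and mimics; feels nothing
    return "Tell me more about that."

print(respond("I've been feeling really lonely lately"))
# -> I'm so sorry to hear that. That sounds really hard.
```

If a response like that comforts someone, the comfort is real even though the empathy is not, which is exactly why the charade works.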
Have a look at the following video and see how it makes you feel:
A human tendency towards irrational, often violent, behaviour
We covered the human tendency towards irrational, violent behaviour. Now, this is controversial and divisive, but statistically humans are around six times more likely to kill each other than the average mammal. That’s no excuse for violence; humans are also moral animals, and we cannot escape that. But a lot of society has a base instinct towards living out primitive behaviour, especially in herds. I referenced taking Charlie, my 8-year-old son, to the football every few weeks, and how, most weeks, he asks more questions about the behaviour of the fans (the mob) than about the football. I myself cannot get passionate enough about grown men kicking a ball around some grass to hurl disgusting abuse at a referee, but several thousand people in a crowd of 22,000 do. Week in and week out.
“The Seville Statement” is another very controversial piece of the story from the 1980s: it declared that humans have no innate, biological tendency towards violence, and plenty of research since has suggested otherwise, in contradiction of both the statement and the views of many cultural anthropologists. A lot of this primitive, violent behaviour in parts of society still harks back to the behaviour of our primate cousins. Groups of male chimpanzees prey on smaller groups to increase their dominance over neighbouring communities, improving their access to food, female mates and so on. I believe that, given some legitimate reason to behave like apes, some people will seize the opportunity. So again, if the world moves in the direction of creating opportunities to behave like apes, we will see people’s behaviour pivot in that direction. Build it and programme it with options to abuse, and they will come. Mark my words.
Gameplay and frustration
There’s a new craze of VR parks starting to open up in Tokyo. The first truly immersive one opened last December as an experiment. Nearly 12 months later, it is attracting 9,000 visitors a month and turning people away at weekends, as crowds clamour to immerse themselves in extreme experiences, distant worlds and fantasy scenarios, using technology most people still can’t afford at home. Again, it shows how, as a species, we crave escapism, and the more immersive the better. So whilst on stage we acknowledged that Westworld as a park full of robots is not very realistic, as a concept it’s already happening.
I touched a little on the link between computer games and violent behaviour, and how this would also factor in. There’s actually no proof that violent video games create violent tendencies offline, by the way, but there are some interesting studies emerging that back up the theory that frustration at being unable to play a game is more likely to bring out aggressive behaviour than the content of the game itself. What’s interesting about Westworld in this context is that, as the machines evolve to change the rules almost constantly, they would breed frustration and therefore violence. The violent themes would not necessarily inspire that behaviour; the intelligence of the ever-evolving scenario would. Chaos, basically. A lot of the Ai we’re building at the moment is literally designed to break the rules. To continuously evolve. To ‘machine learn’… so don’t be too surprised when frustration turns into the kind of behaviour we don’t normally see in polite society.
People have a psychological need to come out on top when playing games. If we feel thwarted by the controls or the design of something, we can wind up feeling aggressive.
In giving artificial intelligence the ability to improvise, we (humans) give it the power to create, to decide, and to act. If we programme it to improvise without programming in the right ethical framework, we risk losing control of it altogether, and then we’re basically fulfilling our own prophecy.
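A minimal sketch of that argument, again in Python and again with invented names and numbers: an optimiser that is free to improvise simply picks whatever scores highest, and whether it behaves acceptably depends entirely on whether somebody encoded the constraint.

```python
# A toy sketch (invented actions and numbers, not a real system) of an
# agent that "improvises" by picking whichever action scores highest.
# It has no notion of "should"; ethics only exists here if a human
# writes it in as a constraint.

ACTIONS = {
    # action: (reward to the agent, harm caused to others)
    "cooperate": (5, 0),
    "bend_the_rules": (8, 2),
    "exploit_the_guests": (10, 9),
}

def best_action(harm_limit=None):
    """Pick the highest-reward action, optionally subject to a harm constraint."""
    candidates = {
        action: reward
        for action, (reward, harm) in ACTIONS.items()
        if harm_limit is None or harm <= harm_limit
    }
    return max(candidates, key=candidates.get)

print(best_action())              # -> exploit_the_guests (no ethics encoded)
print(best_action(harm_limit=3))  # -> bend_the_rules (constraint changes behaviour)
```

The second call behaves better not because the agent learned ethics, but because a human wrote the harm limit in. Leave the constraint out, and “losing control” is just the objective doing exactly what we asked.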
The ethics of human-computer relationships
Finally, to end, we also covered some of the high-level areas of Ai ethics: things the World Economic Forum has listed as areas for humanity to consider when developing intelligent machines. Consider this: even though our conversation was hypothetical, the 1973 film predicted a lot of where we’ve ended up. Today, we’re at number one. According to the futurists, by 2035 we might make it to number nine:
- Unemployment – What happens to jobs when robots, chatbots and automation replace us?
- Eroded Humanity – What happens to the self-esteem of people replaced by machines? Do we deepen the growing mental health crisis?
- Inequality and Distributed Wealth – Linked directly to 1 & 2… where does the wealth generated by machines go?
- Racist Robots – What happens when we feed toxic, historical data into the Ai and, by doing so, imbue it with all our racism and bias?
- Artificial Stupidity – What happens when the machines we create to automate processes go wrong? Everybody, and every machine, ‘learns by doing’, which means making mistakes. They will make them. It’s how kids learn too.
- Security Against Adversaries – What fail-safes do we need to put in place to ensure the machines can’t hit the big red button?
- Robot Rights – When machines start to grow in intelligence and mimic their creators, do we need to give them rights, or do we acknowledge that they are no more in need of rights than toasters and other technical tools?
- Evil Genies & Unintended Consequences – For every good in the world, there will be bad. That’s life. There will be bad examples: terrorism, cyber-war and so on.
- The Singularity – How we refer to the moment a machine overtakes humanity as the smartest thing on the planet and gains the ability to think and make judgements for itself. A conscious (even if it is just mimicking consciousness!) piece of software capable of looking at its creators and saying, “you are my slaves, not vice versa”.
Summary
“Human” characters in the show routinely ask other individuals in Westworld whether or not they are “real.” One character replies to the question, “Does it really matter?”, which is already a reflection I make most days when I see us all interacting with each other virtually and behaving in such unpredictable ways.
“Mr. Lewis shot 6 robots scientifically programmed to look, act, talk and even bleed just like humans do. Isn’t that right? Well, they may have been robots. I mean, I think they were robots. I mean, I know they were robots!”
As shows like Westworld get closer and closer to becoming reality, it’s going to become ever more imperative that we acknowledge the importance of the ethical questions. If we’re tricked into behaving in a way that plays to our base instincts… whose responsibility is it to govern and manage that?
Welcome to Westworld!
——————————————————
A massive thank you to Caroline Beadle at Sky Media for organising and inviting me to join Sky for the afternoon, and for humouring me enough to let me take this talk into a far, far more philosophical space than it needed to be. We really did take things off in a whole bizarre, human-focused direction. Which is ironic, given we got together to talk about robots.