Westworld is a fictional place where you can do anything with human-looking machines (hosts). But should you? Is it morally wrong only if the machines are conscious? And what is consciousness anyway?
Let’s do a thought experiment. You are texting with someone you've never met. There is a chance it's just a really advanced chatbot. Can you tell the difference? And what if you're wrong?
British WWII code-breaking genius and computer pioneer Alan Turing grappled with the same problem decades ago (yes, they had computers back then) and came up with what is now known as the Turing Test.
The test is meant to figure out whether your texting buddy acts, reacts and thinks like a human being, regardless of whether he or she is really an 'it' (think Blade Runner).
The problem is that some AIs can now pass the Turing Test while some real people fail it, so it's no longer a reliable measure of human consciousness (if it ever was).
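The test Turing proposed is essentially a game: a judge chats over text and must guess whether the replies come from a person or a program. Here is a minimal sketch of that setup in Python; the canned-reply bot and every name in it are invented purely for illustration, not a real chatbot:

```python
import random

# Toy sketch of Turing's imitation game: the judge sees only text
# and must guess whether the responder is a human or a machine.
# The canned-reply "bot" below is a deliberately crude stand-in.

CANNED_REPLIES = {
    "how are you?": "Fine, thanks. And you?",
    "what's your favourite colour?": "Blue, probably.",
}


def bot_reply(message: str) -> str:
    """Return a canned reply, or deflect the way a person might."""
    return CANNED_REPLIES.get(message.lower(), "Hmm, good question.")


def run_round(judge_question: str) -> str:
    """One round: secretly pick human or bot, return only the text reply.

    The judge never sees which one answered -- that's the whole point.
    """
    responder = random.choice(["human", "bot"])
    if responder == "bot":
        return bot_reply(judge_question)
    # Stand-in for whatever a real human would type back.
    return "Fine, thanks. And you?"
```

If the judge's guesses are no better than a coin flip across many rounds, the machine "passes". Notice that nothing here says anything about what's going on inside the responder, which is exactly why passing the test tells us so little about consciousness.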
So if we can't tell the difference between a robot and a person because they both exhibit signs of consciousness, how do we even define consciousness?
A standard dictionary defines consciousness as "the state of being aware of and responsive to one's surroundings." Not only does that definition not help us, it also puts self-driving cars, and possibly futuristic toasters, into the 'conscious beings' category as well.
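To see how low that bar really is, here's a toy toaster in Python that, by the letter of the dictionary definition, is "aware of and responsive to its surroundings" (the class and its behaviour are invented for this sketch):

```python
class Toaster:
    """By the dictionary definition, this qualifies as 'conscious':
    it is aware of (senses) and responsive to its surroundings."""

    def __init__(self) -> None:
        # Its entire "awareness" of the world: one number.
        self.ambient_temp_c = 20.0

    def sense(self, temp_c: float) -> None:
        """'Awareness': record the surrounding temperature."""
        self.ambient_temp_c = temp_c

    def respond(self) -> str:
        """'Responsiveness': behaviour changes with what it sensed."""
        if self.ambient_temp_c < 5.0:
            return "toast longer"
        return "toast normally"
```

Ten lines of code meet the definition, which suggests the definition is doing very little work.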
HBO's Westworld is tackling this issue as well. Based on the 1973 film of the same name, the setting of Westworld is an amusement park where people go to entertain themselves with the 'hosts' in any way they want to. And I mean any.
The hosts are human-looking machines programmed by the park's creator, Robert Ford. Each has been encoded with a narrative, and their reactions to events are always the same.
The argument in Westworld is that these hosts can only get onto the track of consciousness through memories ('reveries' as they're called on the show) and the improvisations they start making.
Some of these hosts, like Dolores, start to act strangely and unpredictably, as if they now have personal agendas of their own, which freaks the owners of Westworld out. It's not enough for them to have brains and make decisions; now they need to find themselves and endure suffering in order to gain consciousness.
I know it sounds creepy but it's even creepier on the show. Westworld raises questions like whether it's acceptable to kill hosts (the machines) since they don’t have sentience despite looking exactly like humans. Technically speaking, they are simply machines wearing skin — aren't they?
You don’t have to go as far as a futuristic world where robots are perfect replicas of real people — the question of consciousness is super complex even in the world we live in.
Take us, people. Let’s assume for the sake of argument that we, and only we, are sentient in the universe. When do we start being this way? As a foetus in the womb? Does that mean abortion is murder? Do we become conscious at birth? What about a C-section?
And what happens to our consciousness while we sleep? What about dreams or drugs? Is consciousness gradient or absolute?
As Elon Musk tweeted on August 12, 2017: "If you're not concerned about AI safety, you should be. Vastly more risk than North Korea."
As technology continues to develop faster than we can philosophically and artistically make sense of it, major worries and fears follow the advancements. An evergreen hot topic these days is whether robots will surpass us and AI will outsmart us, ultimately leading to our destruction at their hands.
But this assumes non-human intelligence to be not just intelligent, but sentient. Is that really where we're headed? Or, if the Turing Test truly is a test of consciousness, are we not almost, if not already there? However, if we're already there, does it logically follow that the machines must then destroy us? While you're thinking about this, maybe just chill out with some Westworld.