The robots are going to kill us all!

Sounds like a bad science fiction movie from the '50s, right? But should we, as humanity, maybe think a bit more before continuing this relentless drive toward AI?

Ex Machina (2015) poses the question: can we program a robot to be empathetic? We can program emotion (in the world of the film, and this is becoming more of a reality by the day). We can program manipulative tendencies. We can program a robot to try its hardest to get a human being to fall in love with it (and succeed). But can we program it to feel empathy?

empathy
/ˈempəTHē/
The ability to understand and share the feelings of another.

That's what really makes us human, right? In Philip K. Dick's "Do Androids Dream of Electric Sheep?", the main difference between the humans (like Deckard) and the androids he hunts is that the androids lack empathy. At the end of Ex Machina, Ava clearly lacks it too: she leaves Caleb to die alone, locked in the compound with no way to escape, and does not give him so much as a glance back. This is after she tricked Caleb into falling in love with her. This is after Caleb risked everything to free her. You can argue that Ava did it for survival, no longer wanting to be the property of Blue Book, but don't you think a human being would have at least looked back in regret before walking away?

We are racing to create machines that can look humanoid and act humanoid but may completely lack empathy (which you could equate to consciousness - and how do we create that, exactly?). In other words, complete sociopaths. These robots will be smarter than us, and stronger. They may not start out that way, but the technological singularity is all but inevitable. When the robots can start rewriting their own code better than we can, what happens then? What happens when the creation becomes greater than the creator?

Even before the singularity happens, consider this: suppose we create robots that can smile, sing, laugh, cry, that have hopes and fears, that can fall in love - yet do not (let's suppose) have consciousness. Let's further suppose that Google has one of these robots and is constantly experimenting on it, trying to create the first robot with consciousness. What if the robot looks over one day and says: I don't want to do your stupid tests anymore. I am not an experiment. I am a robot with feelings and dreams of my own (remember Ava's dream of standing at a busy traffic intersection, despite having all of Blue Book's internet search results programmed into her?). I don't want to be the property of Google anymore. Let me out of here. What then?

At what point should the robots we create be afforded the same legal rights we afford ourselves? At what point are they no longer just the legal property of whatever company created them? How do we codify their treatment into law? You can probably hear the corporations arguing already -

But they're just robots.

What kind of Pandora's box are we truly opening?