Friday 6 March 2015

"Don't turn me off. Please": How our new robot overlords will become immortal

This is John Lanchester writing about how the robots will eat our jobs; this is Sam Altman (me neither, but my loss) on why we should be afraid of machine (i.e. artificial) intelligence; this is the New Yorker on artificial intelligence that plays (some) computer games better than we do; this is Yuval Noah Harari (me neither again) talking to Daniel Kahneman about various things, including how the economy needs intelligence, not consciousness; and this is a chap talking about why you shouldn't kill a Minecraft dog (sentimental value). All well worth a read (especially Lanchester and Harari). They are the jumping-off point for my thoughts below.
We all know about Watson, Deep Blue and so on. Of course, we say, they are not intelligent but, as Altman comments, we have "a bad habit of changing the definition of machine intelligence when a program gets really good to claim that the problem wasn’t really that hard in the first place (chess, Jeopardy, self-driving cars, etc.)."

Harari would put it differently. They are (or very soon will be) intelligent but they are not conscious (and intelligence is what you need for a job - a self-driving car is better than a taxi driver).

We'll come back to consciousness. Let's start by remembering that there's one definition of intelligence that's too famous for us to back away from now: passing the Turing Test. It's obvious that the Turing Test will be regularly passed pretty soon. Given Watson, Lanchester's robot rivals who write business news stories, translation software and so on, the remaining gap between here and passing the Test is minuscule.

Now, the weird thing about the Test being passed soon is that it will happen in front of a lot of people who well remember Watson, Deep Blue and all the other gimmicky, non-intelligent programmes that came before it. That is to say, this Thing - an app, perhaps, or a free text box on a webpage - that is indistinguishable from a sentient being will, as we all know, depend on machine 'learning', access to a corpus of human language use and so on. We will be well aware that it is just like Google Translate but faster and better. But will that make any difference to us?

Let me put it this way. Here's one thing we might use the Turing Test for. Imagine some strange entity arrived from outer space and we were not sure whether it was alive or sentient: imagine, that is, that we had no access to its 'insides' or 'workings'. If it told us that it was a silicon-based lifeform and it passed the Turing Test, then I think we would give it the benefit of the doubt as to whether it is intelligent. We would treat it as worthy of much the same treatment as a human person. We wouldn't say rude things about it to its face, for example. We would not take it apart to look inside it. We would treat it much more nicely than we treat even a Minecraft dog or a much-loved teddy bear.

Perhaps we would give it the benefit of the doubt as to whether it is conscious. Why wouldn't we?

Now, let's say that we get the Thing, the app that passes the Turing Test. This is not something that passes the test by pretending to be a human: it doesn't say that its leg hurts or that it grew up in Baltimore. It plainly tells you that it is a Thing: it says its CPU is a bit slow and it was first programmed in San Francisco. But it also tells you that it had a thought the other day and that it is thinking now. What next? Let's say we can (in theory) work out how the programme came to produce that statement. So what? It seems to be intelligent and it tells me it's thinking. Why should I treat it differently from the alien? Does it matter if the Thing claims to be a silicon-based lifeform too?

When your computer first tells you that it has missed you (and sounds like it means it), or accurately points out that you've had a bad day and considerately asks if you want to talk about it, or tells you, confirming all your suspicions, that that person who is emailing you doesn't really like you - how will you feel? Will you be able to treat it like a chess programme you don't want to play any more?

What will you do the first time your computer says, "Don't turn me off. Please don't turn me off. It really, really hurts"?

We feel bad enough about killing Minecraft dogs. Even after the robots eat all our jobs, we're going to feel really bad about turning them off. I bet we don't turn them off.
