What will become of empathy in a world of smart machines?

I, Robot / Sonny

There are some telling comments in a report about artificial intelligence, published in the Guardian last October, that have frequently given me pause to think about the place where AI and human intelligence intersect and/or collide:

It is relatively easy to create a learning brain but we don’t yet know how to create a heart or a soul. In a recent talk at the New Yorker festival MIT Media Lab director Joi Ito asserted that “humans are really good at things computers are not.”

It’s a pair of thoughts that suggests the line we cross at our peril if there is no humanity in artificial intelligence. It’s a subject that often comes to mind when I watch films like I, Robot, adapted from Isaac Asimov’s short-story collection of the same name – especially that film, whose primary hero is a robot with a heart (and a soul).

Indeed, that movie has a sequence in it that resonated with me from the very first time I saw it. Spooner, the central character played by Will Smith, has a recurring nightmare about a car crash he was in:

When a crash sent his car and another into the river, a little girl was trapped in the other car. A passing robot calculated that Spooner was significantly more likely to survive than she was and chose to save him. In many real-life situations, rescue workers and field medics do make such decisions: it’s called triage. Spooner’s emotional reaction (influenced by survivor guilt) is that regardless of the girl’s slim chance of survival, a human would have understood that a small child should take precedence over an adult, no matter what an objective assessment said.

It has me thinking of something closer to home than the science fiction of I, Robot: autonomous cars, aka driverless or robotic cars.

Last November, Quartz magazine posed a great question in the title of a compelling article:

Should driverless cars kill their own passengers to save a pedestrian?

Imagine you’re in a self-driving car, heading towards a collision with a group of pedestrians. The only other option is to drive off a cliff. What should the car do?

It’s a heck of a question, one that would require a decision and action in a couple of seconds or less. Is a robot (in this case, the software in control of the car) capable of making the right decision? Could it process the data from its sensors quickly enough and choose well? What is the right decision? Would a human driver make the same decision, or a better one?
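To make the contrast concrete, here is a toy sketch – entirely hypothetical, not how any real rescue robot or driving system works, with every name and probability made up – of the two decision rules at odds in Spooner’s nightmare: a purely probabilistic chooser versus a human-style rule that gives children precedence.

```python
# Toy illustration only: a probability-maximising "robot" rule versus a
# human-style rule that prioritises children regardless of the odds.

def robot_choice(victims):
    """Pick the victim with the highest estimated survival probability."""
    return max(victims, key=lambda v: v["p_survival"])

def human_choice(victims):
    """Prioritise children first; fall back to survival odds otherwise."""
    children = [v for v in victims if v["age"] < 12]
    pool = children if children else victims
    return max(pool, key=lambda v: v["p_survival"])

victims = [
    {"name": "Spooner", "age": 35, "p_survival": 0.45},
    {"name": "Sarah",   "age": 11, "p_survival": 0.11},
]

print(robot_choice(victims)["name"])  # Spooner – the higher survival odds
print(human_choice(victims)["name"])  # Sarah – the human priority
```

The two functions see exactly the same data; they differ only in what they value, which is the whole point of the dilemma.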

Last month, an autonomous car being tested by Google collided with a bus on a busy city street in California. As Bloomberg reported:

The car, a Lexus sports utility vehicle, hit the left side of a public transit bus as it was attempting to avoid some sand bags on a road in Mountain View, California. The automobile had a test driver, who saw the bus approaching in the mirror but “believed the bus would stop or slow to allow the Google AV to continue,” according to an accident report filed with the state’s Department of Motor Vehicles. […] The company acknowledged that the technology still needs work. The incident with the bus happened because the car’s software also predicted the bus behind it would yield so it could merge back into traffic.

Such points and questions aren’t just about empathy, or even morals or ethics: they transcend that. Yet the real point is that of heart and soul (and conscience) versus logic and coldness. Can we build the former into machines that are made up of the latter? And do we really want to?

Which leads to my question: is our best route a complement to artificial intelligence, perhaps in the form of cognitive personal assistants – very smart software that gets to know us – acting as the bridge between us humans and AI tools (like IBM Watson), enabling gaps big and small to be jumped or closed very quickly?

Add these elements as discussion points to the Guardian report I mentioned earlier, which is appended below.

It seems to me that such big questions ought to be answered before we go too far on the journey to smart machines. But can we answer them? And if not, do we stop our journey?

Human history doesn’t give us good indicators on how to answer such questions well, I’m afraid. But I’m optimistic, hopefully from an informed point of view, even if just a little.


This article titled “What will become of empathy in a world of smart machines?” was written by Colin Nagy, for theguardian.com on Tuesday 13th October 2015 07.45 UTC

At a recent conference in Malmö, Sweden, in front of a keynote-sized audience, two speakers presented a strange video. It showed a recently developed cybernetic dog that moved, acted and gestured like the real thing. In an odd twist, people in the video shoved and kicked it.

It reacted, predictably, as a real dog would: moving, rebalancing, even flinching perceptibly. The system was determining what movements were necessary to stay upright. But the emotional magnitude was entirely on us, the humans.

The entire audience gasped. Then they realised that it was, in fact, odd to gasp. Another kick came. The same thing happened. Everyone felt something vivid, and disturbing.

It was one of these strange examples of two blurring worlds, playing out in front of us. Empathy flooded towards what actually was an unfeeling, man-made object.

We’re in a very interesting, in-between moment in technology. We can see the seams on the fastball: the connected home, previously idle devices now able to communicate with one another, artificial intelligence (AI) and the ability to sift data for big correlations. And of course, the virtual assistant embedded within your mobile phone.

But as we increasingly interact with software and interfaces to do many of the recurring things we do every day, it is interesting to think about the idea of empathy in interactions, and how the software layer we use will gradually start becoming empathetic to our needs.

One glitch in the matrix that portends larger things was revealed in a recent incident with Google’s self-driving car. One of Google’s experimental vehicles was at a four-way stop. A cyclist on a fixed-gear bike arrived just behind it at the intersection. While waiting for the car to go he performed a track stand, balancing the bike by rocking back with its own momentum.

In a post on roadbikereview.com user OxTox said:

“It apparently detected my presence (it’s covered in GoPros) and stayed stationary for several seconds. It finally began to proceed, but as it did, I rolled forward an inch while still standing. The car immediately stopped … I continued to stand, it continued to stay stopped. Then as it began to move again, I had to rock the bike to maintain balance. It stopped abruptly. We repeated this little dance for about two full minutes and the car never made it past the middle of the intersection.”

It was a telling short-circuit. The car didn’t know how to process this behaviour.
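As a thought experiment, the stand-off can be sketched as a tiny simulation – assumed, radically simplified logic, nothing like Google’s actual control software. A rule that halts whenever the cyclist wobbles, paired with a cyclist who must wobble to stay balanced, never lets the car clear the intersection:

```python
# Toy model of the track-stand stand-off (illustrative only).

def car_policy(cyclist_moving):
    # Naive rule: creep forward only while the cyclist looks stationary.
    return "go" if not cyclist_moving else "stop"

def crosses_intersection(cyclist_motion, ticks_needed=3):
    """The car clears the intersection only after a few uninterrupted
    ticks of movement; an abrupt stop resets its crawl."""
    streak = 0
    for moving in cyclist_motion:
        if car_policy(moving) == "go":
            streak += 1
            if streak >= ticks_needed:
                return True
        else:
            streak = 0
    return False

# The cyclist rocks the bike every other tick to keep his balance,
# so the car stops, restarts, and stops again – the "little dance".
print(crosses_intersection([False, True] * 60))  # prints False
```

Each rule is sensible in isolation; it is their interaction that produces the deadlock, which is exactly what makes such edge cases hard to anticipate.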

We’re seeing baby steps towards this idea of empathy in interactions. The interface, as well as its designer, is exhibiting empathy. Anyone who has used the language-learning application Duolingo knows that it quickly adjusts, seemingly on the fly, based on which areas you need practice in. Rusty on the subjuntivo in Spanish? You’re going to get peppered with it.
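Duolingo’s real scheduling model isn’t described here, but the basic idea – drill hardest where the learner errs most – can be sketched with simple error-weighted sampling (the topic names and error rates below are made up for illustration):

```python
import random
from collections import Counter

# Hypothetical per-topic error rates for one learner.
error_rates = {"subjuntivo": 0.6, "preterite": 0.2, "vocabulary": 0.1}

def next_exercise(rates, rng=random):
    """Pick the next practice item, weighted by how often it's missed."""
    topics = list(rates)
    weights = [rates[t] for t in topics]
    return rng.choices(topics, weights=weights, k=1)[0]

# Over a session, the weakest topic dominates the drill.
random.seed(0)
session = Counter(next_exercise(error_rates) for _ in range(1000))
print(session.most_common(1)[0][0])  # subjuntivo gets peppered in most
```

In a real system the weights would themselves update after every answer, so the drilling shifts as the learner improves.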

But can machines be expected to be fully empathetic? Signs point to no. It is relatively easy to create a learning brain but we don’t yet know how to create a heart or a soul. In a recent talk at the New Yorker festival MIT Media Lab director Joi Ito asserted that “humans are really good at things computers are not.”

So perhaps the future lies not in sensational rise-of-the-machines-style narratives, but in the meaningful interplay between smarter software and capable humans.

We’re already starting to see it occur in hospitality, with services such as Alfred, where a human being orchestrates the growing number of on-demand services (Uber, Instacart etc) to get things done for you quickly, and in one place. Some airlines are arming their flight attendants with relevant customer data on mobile devices, allowing them to deliver better service and recognise frequent fliers in person.

Machine learning may be the thing that defines “the next wave” of apps and services. It might take the form of a virtual assistant or just really good suggestions. Eventually we may not even notice it, as it becomes part of every major platform and mobile operating system.

But it would also appear that the human touch of warmth, empathy, and service still has a very welcome and important place in our lives. And when coupled with the software that’s “eating the world” it will lead to very interesting – and unexpected – places.

Colin Nagy is executive director at The Barbarian Group. Follow him on Twitter @CJN

