There are some telling comments in a report about artificial intelligence, published in the Guardian last October, that have frequently given me pause to think about the place where AI and human intelligence intersect and/or collide:
It is relatively easy to create a learning brain, but we don’t yet know how to create a heart or a soul. In a recent talk at the New Yorker festival, MIT Media Lab director Joi Ito asserted that “humans are really good at things computers are not.”
It’s a pair of thoughts that suggests the line we cross at our peril if there is no humanity in artificial intelligence. The subject often comes to mind when I watch films like I, Robot, adapted from Isaac Asimov’s short-story collection of the same name – especially that film, whose primary hero is a robot with a heart (and a soul).
Indeed, that movie has a sequence in it that resonated with me from the very first time I saw it. Spooner, the central character played by Will Smith, has a recurring nightmare about a car crash he was in:
When a crash sent his car and another into the river, a little girl was trapped in the other car. A passing robot calculated that Spooner was significantly more likely to survive than she was and chose to save him. Rescue workers and field medics make such decisions in many real-life situations: it’s called triage. Spooner’s emotional reaction, coloured by survivor guilt, is that regardless of the girl’s slim chances of surviving, a human would have understood that a small child should take precedence over an adult, whatever the objective assessment said.
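To see how bloodless that calculation is, it helps to write it down. Here is a minimal toy sketch of a purely utilitarian rescue policy like the one the robot applies; the labels and probabilities are invented for illustration and come from no real triage system:

```python
# Toy illustration only: save whoever has the highest estimated
# probability of survival, and nothing else matters.
# All names and numbers below are made-up assumptions.

def robotic_triage(candidates):
    """Return the candidate with the highest estimated survival probability."""
    return max(candidates, key=lambda c: c["p_survival"])

crash = [
    {"who": "adult driver", "p_survival": 0.45},
    {"who": "child in the other car", "p_survival": 0.11},
]

print(robotic_triage(crash)["who"])  # prints "adult driver"
```

The one-liner is the whole policy: there is no term in it for age, for what a bystander would feel, or for what Spooner calls being human. That absence is exactly the point of the scene.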
It has me thinking of something closer to home than the science fiction of I, Robot: autonomous cars, also known as driverless or robotic cars.
Last November, Quartz magazine posed a great question in the title of a compelling article:
Imagine you’re in a self-driving car, heading towards a collision with a group of pedestrians. The only other option is to drive off a cliff. What should the car do?
It’s a heck of a question, one that would require a decision and action in a couple of seconds or less. Is a robot (in this case, the software in control of the car) capable of making the right decision? Would it know how to rapidly process the sensor data from what it sees and make the right decision? What is the right decision? Would a human driver be able to make the same decision, or a better one?
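One way to see why the question is so hard is to write down the “easy” part. A minimal sketch, assuming (and it is only an assumption for illustration) that the car’s software frames the choice as minimising expected casualties; the option names and numbers are invented, not any real autonomous-driving API:

```python
# Hedged toy sketch of the Quartz dilemma as an expected-harm
# minimisation. Every figure here is a placeholder assumption.

options = {
    "brake and hit the pedestrians": {"expected_deaths": 4},
    "swerve off the cliff": {"expected_deaths": 1},  # the occupant
}

def least_harm(options):
    """Pick the option with the fewest expected deaths."""
    return min(options, key=lambda name: options[name]["expected_deaths"])

print(least_harm(options))  # prints "swerve off the cliff"
```

The arithmetic is trivial; everything contested lives in what the sketch assumes away: who supplies those probabilities in the two seconds available, whether the car may sacrifice its own passenger, and whether counting deaths is even the right objective in the first place.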