2015: The first year AI nearly killed me

When I talk about AI ethics I usually emphasise that the AI itself is not the danger, but rather what people do with the AI.  The dangers are the changes AI affords in society.  Instead of looking for the magic-pixie-dust feature that will turn our already super-human (in the "better and faster than human" sense) AI into Übermensch AI (in the "maintains ape-like goals and dominates the planet" sense), we should be examining how we have already altered society in subtle ways, for example how AI is helping us slither into a more surveilled, less private society.

But a conversation on Twitter made me decide to do a year-end thing: talk about the biggest moment in AI ethics for me personally that (nearly) everyone else missed.  An AI car that wasn't even driverless almost killed me!

But let's go over the conversation first.  It started a few days ago, when Miles Brundage drew attention to a Final Report on Robotics and Autonomous Systems by a military panel organised through The Dwight D. Eisenhower School for National Security and Resource Strategy.  The document is interesting for a number of reasons, not least its flagging of Google's dominance and lack of obvious "patriotic" tendency to unconditionally support US military agendas (see appendix essay 5).  But I focussed briefly on appendix essay 6, because it contributes to what I think is false confidence in formal methods to verify AI safety.  I've written about this at least twice: probably the better paper is with Bruce Edmonds, The Insufficiency of Formal Design Methods - the necessity of an experimental approach for the understanding and control of complex MAS, which appeared in The Third International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS 2004), pp. 936–943.

But Twitter happens quickly, and for some reason I remembered an earlier article I'd written with my partner & my PhD supervisor.  That led to a short, funny conversation, though of course I was serious as well.

But today when Ed brought this line of reasoning back up, I was suddenly reminded of the first time (at least, that I know of) that AI nearly killed me.

What is the difference between AI and a mechanical button?  Not much.  Intelligence connects sensing to action; a mechanical button transforms physical motion into action.  So the key difference is sensing, which implies some sort of processing, perhaps with memory and based on expectation.
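For concreteness, here is a minimal sketch of that distinction, in Python, with names I have invented purely for illustration (it has nothing to do with any real car's software): a button maps input straight to action, while a sensing controller puts processing, memory, and expectation in between.

```python
class MechanicalButton:
    """Physical motion in, action out -- no sensing, no state."""
    def press(self):
        return "wipers_on"


class RainSensingController:
    """Chooses an action from a sensor reading plus recent history."""
    def __init__(self, threshold=0.3):
        self.threshold = threshold
        self.recent = []  # memory of recent moisture readings

    def update(self, moisture_reading):
        # Keep a short history and form an expectation of how wet it is.
        self.recent = (self.recent + [moisture_reading])[-10:]
        expected_rain = sum(self.recent) / len(self.recent)
        # The action now depends on that processed expectation,
        # not directly on anything the driver does.
        return "wipers_on" if expected_rain > self.threshold else "wipers_off"
```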

These sorts of systems with embedded intelligent sensing are pervasive, at least in American society.  It's much harder now than when I was a child to scorch clothes with an iron, or to lock your brakes when you are hydroplaning in your car.  Sensors have been built into all kinds of devices, so now the human just pushes hard with their foot and the car "pumps the brakes" the way we were taught to, or the human just sets the type of clothing and the iron optimises how much steam it releases.

But even this AI can go wrong.  And it did go wrong for me at 5am on 16 October, on a pitch-black highway between Deauville and Paris Charles de Gaulle Airport.  In a torrential downpour, our windshield wipers determined that they were not needed, and nothing the driver could do would restart them.  I was riding in the front and tried to calmly explain that I'd rather miss my plane than die, but it emerged that the driver couldn't see well enough to pull off the road in the dark (especially at highway speeds), so he felt obliged to keep up with the tail lights of the truck ahead.  He was so terrified that he forgot his English, so it took quite a while for the rest of us (Vivek Wadhwa was also in the car) to figure out why we weren't stopping.

Eventually we came to a lay-by that was actually well lit, and pulled off.  The driver was going to get out and try to fix the wipers, but Vivek said "just reboot the car."  Sure enough, after powering the car off and on again, the sensors began working again.  I neither died nor missed my plane.

I did tweet, though.  I thought I was followed by enough AI ethics people that it would be one of those things where you get off the plane and your tweet has become a meme.  But no one cared!  In retrospect, I see that I erred by being too analytical about exactly what went wrong: "Bad sensors or faulty logic in windshield wipers of our car endangered us on a dark highway! Boarding; blog later."  Folks, that means: "A misapplication of AI nearly killed me!  And Vivek Wadhwa!  And a really nice French guy who was driving!"  I just get really analytical when I'm in danger.  Also, I was a bit embarrassed, because the car was donated by one of the funders of the event that had paid for my flight from Princeton to London, as well as a very nice meeting.

But this is exactly the kind of error that is a real, present danger of AI.  Formal verification of how the circuit should have performed was not enough.  This kind of transient bug might easily have been caused by a bit on a chip getting flipped by solar radiation (even though a lot of planet Earth was between us and the Sun). Perhaps there needed to be more hours logged on empirical tests in extreme weather conditions to discover the likelihood of the bug.  Either way, there certainly should have been a manual override or a dashboard reset button for the sensor system.
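To make the design point concrete, here is a minimal sketch of what "a manual override or a dashboard reset button" could mean in code.  Everything here is invented for illustration (the function names, the health flag, the sensor object are all mine, not from any real vehicle): the point is simply that an explicit driver command should always win over the sensor logic, and that resetting the sensing subsystem should not require rebooting the whole car.

```python
def wiper_command(sensor_says_rain, driver_override=None, sensor_healthy=True):
    """Return the wiper state, letting manual input take precedence."""
    if driver_override is not None:   # the driver pressed on/off explicitly
        return "on" if driver_override else "off"
    if not sensor_healthy:            # fail safe, not silent, when sensing is suspect
        return "on"
    return "on" if sensor_says_rain else "off"


def dashboard_reset(sensor):
    """Power-cycle just the sensor subsystem instead of the whole car."""
    sensor.power_off()
    sensor.power_on()
```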

Companies need to make the AI they are inserting into their products visible, so that those of us who use those products can compensate for their intelligence.  I can understand that an ideal of human-computer interaction might be to fool users into feeling as if they are the only ones in control, but in fact our control of machines and devices is increasingly delegated, not direct.  That fact needs to be at least sufficiently transparent that we can handle the cases when components of systems our lives depend on go wrong.
