The Real World is a Messy Place

The Real World is a messy place, and we need to fit it into the decision processes of our autonomous vehicles. In Acceptance Testing, we would certify that the machine was instructed to perform a specific task and that it did so; therefore, we are happy. We could make a list of these tasks and check them off one by one:
a) The vehicle travelled from point A to point B without hitting anything, for example.
b) The vehicle swerved correctly to miss a person standing in the road.
c) The vehicle changed roads because there was an obstruction ahead.
And so on! There are a million scenarios to test if we want to be certain that the machine will operate correctly, regardless of the situation.
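To make the checklist concrete, here is a minimal sketch of what items a) through c) might look like as automated tests. The SimWorld harness, the scenario names, and the Result fields are all hypothetical, invented for illustration; this is not a real autonomous-driving API.

    # A minimal sketch of the acceptance-test checklist as automated tests.
    # SimWorld is a hypothetical simulator harness invented for illustration.

    import unittest
    from collections import namedtuple

    Result = namedtuple("Result", ["reached_goal", "collisions", "avoided_pedestrian"])

    class SimWorld:
        """Stand-in for a scenario simulator; a real one would drive the
        vehicle's actual control software through the named scenario."""
        def __init__(self, scenario):
            self.scenario = scenario

        def run(self):
            # Placeholder result; a real harness would return measured outcomes.
            return Result(reached_goal=True, collisions=0, avoided_pedestrian=True)

    class AcceptanceTests(unittest.TestCase):
        def test_a_travels_a_to_b_without_hitting_anything(self):
            result = SimWorld("drive_a_to_b").run()
            self.assertTrue(result.reached_goal)
            self.assertEqual(result.collisions, 0)

        def test_b_swerves_to_miss_pedestrian(self):
            result = SimWorld("pedestrian_in_road").run()
            self.assertTrue(result.avoided_pedestrian)

        def test_c_reroutes_around_obstruction(self):
            result = SimWorld("obstructed_road").run()
            self.assertTrue(result.reached_goal)

    if __name__ == "__main__":
        unittest.main()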
But what if we allowed other types of testing to get involved? White Box testing would let us test the individual sensors, or deliberately feed them bad data to confuse the vehicle. Exploratory Testing might ask “what-if” questions like “What if it’s snowing?” or “What if visibility is low?” or “What if the road is gravel?”
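A white-box sensor test might look something like the sketch below: wrap a sensor, corrupt some of its output, and check that the perception layer rejects the garbage. The LidarSensor stand-in and the 0–200 m sanity check are assumptions invented for illustration, not a real driving stack.

    # A sketch of white-box fault injection at the sensor level.
    # LidarSensor and the sanity check are hypothetical stand-ins.

    import random

    class LidarSensor:
        def read(self):
            # Normally returns 360 distance readings in meters.
            return [random.uniform(1.0, 50.0) for _ in range(360)]

    class FaultyLidar(LidarSensor):
        """Wraps a sensor and deliberately corrupts some of its readings."""
        def read(self):
            readings = super().read()
            # Inject impossible values: negative distances, zeros, dropouts.
            for i in random.sample(range(len(readings)), k=20):
                readings[i] = random.choice([-1.0, 0.0, float("inf")])
            return readings

    def perception_accepts(readings):
        # A sane perception layer should flag physically impossible data.
        return all(0.0 < r < 200.0 for r in readings)

    assert perception_accepts(LidarSensor().read())
    assert not perception_accepts(FaultyLidar().read())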
The Real World rarely has only one variable at play, however; just ask anyone who has used a flight simulator. Eventually, the testing is going to have to try multiple problems at once.
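One way to get at multi-variable testing is to generate scenarios from the cross product of conditions. A minimal sketch, assuming a hypothetical run_scenario hook into whatever simulator is in use (the condition lists are invented for illustration):

    # Generate multi-variable "what-if" scenarios by crossing conditions.
    # run_scenario is a hypothetical hook into a simulator.

    from itertools import product

    WEATHER    = ["clear", "rain", "snow"]
    VISIBILITY = ["good", "low"]
    SURFACE    = ["asphalt", "gravel", "ice"]

    def run_scenario(weather, visibility, surface):
        # Placeholder: a real harness would configure the simulator with
        # these conditions and return pass/fail plus telemetry.
        print(f"testing: weather={weather}, visibility={visibility}, surface={surface}")

    # 3 x 2 x 3 = 18 combinations from just three variables; the space
    # explodes quickly as more real-world variables are added.
    for combo in product(WEATHER, VISIBILITY, SURFACE):
        run_scenario(*combo)

Three variables already yield eighteen combinations; add a dozen more real-world variables and exhaustive testing stops being practical. With that explosion in mind, let’s explore Hypothetical Scenario 17: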

An autonomous vehicle is driving down an icy city side-street. There is a woman bundled against the cold, carrying a baby, attempting to cross the street. There are parked cars on both sides of the road. The vehicle in the oncoming lane abruptly turns left in front of our robot, perhaps into its own driveway. Our vehicle is going to hit something. What does the robot hit? How does it decide?
So far, all of our tests have been based on the idea that there is a “correct” answer for the robot to pick. What if that isn’t true? What will the robot do when faced with having to select a “bad” choice?

If a human were driving, they might reasonably select the target least likely to be damaged by their vehicle. They might select whatever is directly in front of them. They might swerve or perform some other action specifically to avoid the pedestrian. The human would make this choice, potentially based on experience, their own value judgements, or some other consideration. What does the robot do?
If we are concerned about this scenario, perhaps we need to design an algorithm for the robot. An auto insurance company might suggest hitting the thing with the least value, relatively speaking. A Yugo parked on the side of the road is much cheaper to repair than a Bentley. A moving vehicle with one occupant is potentially cheaper, medical-claims-wise, than a vehicle with four passengers. We’d have to decide what our algorithm is and build in a way for the robot to gather the information that feeds the decision.
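Written as code, that insurance-flavored algorithm might look like the sketch below. The dollar figures, the Obstacle fields, and the per-person claim estimate are all assumptions invented for illustration.

    # A sketch of the insurance-style "hit the cheapest thing" algorithm.
    # All cost figures are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        kind: str           # "parked_car", "moving_car", "pedestrian", ...
        repair_cost: float  # estimated property damage in dollars
        occupants: int      # people who could file medical claims

    MEDICAL_CLAIM_ESTIMATE = 250_000  # assumed per-person cost

    def expected_cost(obstacle: Obstacle) -> float:
        return obstacle.repair_cost + obstacle.occupants * MEDICAL_CLAIM_ESTIMATE

    def choose_target(obstacles: list[Obstacle]) -> Obstacle:
        # Pick the collision with the lowest expected total cost.
        return min(obstacles, key=expected_cost)

    scenario_17 = [
        Obstacle("parked_yugo",    repair_cost=2_000,  occupants=0),
        Obstacle("parked_bentley", repair_cost=80_000, occupants=0),
        Obstacle("turning_car",    repair_cost=15_000, occupants=1),
        Obstacle("pedestrian",     repair_cost=0,      occupants=2),  # woman + baby
    ]
    print(choose_target(scenario_17).kind)  # -> parked_yugo

Seeing it written down makes the problem obvious: someone has to pick those weights, and every choice encodes a value judgement about property versus people.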
One potential solution is to allow/require the vehicles to talk to each other. Car A reports that it has three passengers but good airbags. Car B reports that it is empty and a rental car. The pedestrian reports nothing. What does the robot hit now? Worse, if there is communication between the vehicles, how long until an after-market device appears that claims its vehicle is “expensive” by whatever scale we’ve chosen? If everyone uses this device, we’re back to the original problem of having to make a bad decision with no trustworthy data. Even worse, how long until there are websites (or whatever) showing backyard mechanics how to hack the decision-making process to protect their own vehicles and families? And even more exciting: if the cars are reporting on themselves, why can’t the police poll all the cars going by to see who’s speeding, who isn’t wearing a seatbelt, and who hasn’t done timely “proper maintenance”?
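The self-reporting scheme might amount to something like the sketch below. The message fields are invented for illustration (real V2V message sets, such as the SAE J2735 Basic Safety Message, are far richer); the point is how little stops a device from lying.

    # A sketch of a self-reported V2V message and why unauthenticated
    # self-reporting invites spoofing. Fields are invented for illustration.

    import json

    def make_report(vehicle_id, occupants, est_value, airbags_ok):
        return json.dumps({
            "id": vehicle_id,
            "occupants": occupants,
            "estimated_value": est_value,
            "airbags_ok": airbags_ok,
        })

    car_a = make_report("car_a", occupants=3, est_value=30_000, airbags_ok=True)
    car_b = make_report("car_b", occupants=0, est_value=20_000, airbags_ok=True)

    # Nothing stops an after-market box from lying: without a signature
    # chain back to a trusted authority, the receiver cannot tell this
    # report from an honest one.
    spoofed = make_report("car_b", occupants=5, est_value=500_000, airbags_ok=False)
    print(spoofed)  # indistinguishable from an honest report

Cryptographic signing by a trusted authority could raise the bar, but that brings its own infrastructure and privacy costs.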
An even more troubling question: who’s in trouble if the robot chooses a “bad” solution? The “driver”, or the owner of the vehicle? Perhaps the builder, the developer, or the tester? How will we know why the robot made the choice it did? Does it need to log the information in some way? If a human would have made a different decision, does that trump whatever the robot actually did? This is akin to telling Developers to make it work like the existing system/prototype without defining the actual requirements.
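One plausible shape for that logging is a simple append-only audit trail, sketched below under the assumption of JSON-lines records (the field names are invented, and the scores reuse the numbers from the earlier cost sketch).

    # A sketch of a decision log that could answer "why did the robot
    # choose what it chose?" after the fact. Field names are assumptions.

    import json, time

    def log_decision(options, scores, chosen, logfile="decisions.jsonl"):
        record = {
            "timestamp": time.time(),
            "options": options,  # what the planner considered
            "scores": scores,    # how each option was weighted
            "chosen": chosen,    # what it actually did
        }
        with open(logfile, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_decision(
        options=["hit_parked_yugo", "hit_turning_car", "hit_pedestrian"],
        scores={"hit_parked_yugo": 2_000, "hit_turning_car": 265_000,
                "hit_pedestrian": 500_000},
        chosen="hit_parked_yugo",
    )

A record like this is probably the minimum anyone, courts included, would need to reconstruct why the robot did what it did.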
There’s going to be a transition as we bring these vehicles into reality: a period, perhaps decades long, in which autonomous, semi-autonomous, and human-driven vehicles will all interact on the same roads. We’re working hard to bring these vehicles to our streets, both in the US and elsewhere. So far, it seems we’re excited that the robots can do simple tasks in controlled environments. But the Real World is not a controlled environment, and maybe we need to think more about integrating messy, fragile, and litigious humans into the mix.