{"id":103,"date":"2015-05-27T04:25:25","date_gmt":"2015-05-27T04:25:25","guid":{"rendered":"http:\/\/futureoftesting.net\/?p=103"},"modified":"2015-05-27T04:26:30","modified_gmt":"2015-05-27T04:26:30","slug":"the-real-world-is-a-messy-place","status":"publish","type":"post","link":"https:\/\/futureoftesting.net\/?p=103","title":{"rendered":"The Real World is a Messy Place"},"content":{"rendered":"<p>The Real World is a messy place which we need to fit into the decision processes of our autonomous vehicles.  In Acceptance Testing, we would certify that the machine was instructed to perform a specific task and that it did; therefore, we are happy.  We could make a list of these tasks and check them off one by one:<br \/>\na) The vehicle travelled from point A to point B without hitting anything.<br \/>\nb) The vehicle swerved correctly to miss a person standing in the road.<br \/>\nc) The vehicle changed roads because there was an obstruction ahead.<br \/>\nAnd so on!  There are a million scenarios to test if we want to be certain that the machine will operate correctly, regardless of the situation.<br \/>\nBut what if we allowed other types of testing to become involved?  White Box testing would allow us to test the individual sensors or try to feed them bad data specifically to confuse the vehicle.  Exploratory Testing might ask \u201cwhat-if\u201d questions like \u201cWhat if it\u2019s snowing?\u201d or \u201cWhat if visibility is low?\u201d or \u201cWhat if the road was gravel?\u201d<br \/>\nThe Real World rarely has only one variable at play, however; just ask anyone who has used a flight simulator.  Eventually, the testing is going to have to try multiple problems at once.  Let\u2019s explore Hypothetical Scenario 17:<\/p>\n<blockquote><p>An autonomous vehicle is driving down an icy city side-street.  There is a woman bundled against the cold, carrying a baby, attempting to cross the street.  There are parked cars on both sides of the road.  
The vehicle in the on-coming lane abruptly turns left in front of our robot, perhaps into its own driveway.  Our vehicle is going to hit something.  What does the robot hit?  How does it decide?<br \/>\nSo far, all of our tests have been based on the idea that there is a \u201ccorrect\u201d answer for the robot to pick.  What if that isn\u2019t true?  What will the robot do when faced with having to select a \u201cbad\u201d choice?  <\/p><\/blockquote>\n<p>If a human were driving, s\/he might reasonably select the target least likely to be damaged by their vehicle.  They might select whatever is directly in front of them.  They might swerve or perform another action specifically to avoid the pedestrian.  The human would make this choice, potentially based on experience, their own value judgements, or some other consideration.  What does the robot do?<br \/>\nIf we are concerned about this scenario, perhaps we need to design an algorithm for the robot.  An auto insurance company might suggest hitting the thing with the least value, relatively speaking.  A Yugo parked on the side of the road is much cheaper to repair than a Bentley.  A moving vehicle with one occupant is potentially cheaper, medical claims-wise, than a vehicle with four passengers.  We\u2019d have to decide what our algorithm is and build in a way for the robot to gather the information that leads to the decision.<br \/>\nOne potential solution is to allow\/require the vehicles to talk to each other.  Car A reports that it has three passengers but good airbags.  Car B reports that it is empty and a rental car.  The pedestrian reports nothing.  What does the robot hit now?  Worse, if there is communication between the vehicles, how long until an after-market device is introduced claiming its vehicle is \u201cexpensive\u201d by whatever scale?  If everyone uses this device, we\u2019re back to the original problem of having to make a bad decision with no data.  
Even worse, how long until there are websites (or whatever) which show backyard mechanics how to hack the decision-making process to protect their own vehicles\/families?  And even more exciting, if the cars are reporting on themselves, why can\u2019t the police poll all the cars going by to see who\u2019s speeding, who\u2019s not wearing a seatbelt, and who hasn\u2019t done timely \u201cproper maintenance\u201d?<br \/>\nAnd an even more troubling question is: who\u2019s in trouble if the robot chooses a \u201cbad\u201d solution?  The \u201cdriver\u201d or the owner of the vehicle?  Perhaps the builder, or the developer, or the tester?  How will we know why the robot made the choice it did?  Does it need to log the information in some way?  If a human would have made a different decision, does that trump whatever the robot actually did?  This is akin to telling Developers to make it work like the existing system\/prototype without defining the actual requirements.<br \/>\nThere\u2019s going to be a period of transition as we bring these vehicles into reality: for some time, perhaps decades, autonomous, semi-autonomous, and human-driven vehicles will all interact on the same roads.  We\u2019re working hard to bring these vehicles to our streets, both in the US and elsewhere.  So far, it seems that we\u2019re excited that the robots can do simple tasks in controlled environments.  But the Real World is not a controlled environment, and maybe we need to think more about integrating messy, fragile, and litigious humans into the mix.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Real World is a messy place which we need to fit into the decision processes of our autonomous vehicles. In Acceptance Testing, we would certify that the machine was instructed to perform a specific task and that it did; therefore, we are happy. 
We could make a list of these tasks and check them off [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[5,6],"tags":[],"_links":{"self":[{"href":"https:\/\/futureoftesting.net\/index.php?rest_route=\/wp\/v2\/posts\/103"}],"collection":[{"href":"https:\/\/futureoftesting.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/futureoftesting.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/futureoftesting.net\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/futureoftesting.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=103"}],"version-history":[{"count":3,"href":"https:\/\/futureoftesting.net\/index.php?rest_route=\/wp\/v2\/posts\/103\/revisions"}],"predecessor-version":[{"id":106,"href":"https:\/\/futureoftesting.net\/index.php?rest_route=\/wp\/v2\/posts\/103\/revisions\/106"}],"wp:attachment":[{"href":"https:\/\/futureoftesting.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=103"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/futureoftesting.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=103"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/futureoftesting.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=103"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}