What’s In a Name?

As I was walking back to my car after parking the float I had been driving at the Labor Day parade, I happened across an R2 unit that had been stationed along the parade route, and I got to talk to its owners/builders. It turns out they were visiting from Mountain States Droid Builders, a fan club that builds robot replicas from the Star Wars universe. They were happy to demonstrate how they built their droid and invited me to join them at their work parties. We agreed that this R2 unit was technically more of an RC vehicle than a proper robot, but they had plans for adding robotic capabilities to the device as time and money became available.
This led me to ponder, yet again, how to define the differences between the machines we've been talking about. I see five or six kinds of machines at the moment: RC vehicles and drones, autonomous vehicles, human-assisted vehicles, industrial and other kinds of robots, and droids.

RC vehicles and drones are machines piloted by humans who are remote from the machine itself. The humans rely on information from sensors or on visual observation of the craft. The craft may have components that aid the human in various tasks, much as cruise control and autopilot do today.

Autonomous vehicles would be able to control their own actions, using sensors and other inputs to navigate safely from point to point. These vehicles may also coordinate with external sources such as roadway feedback or feedback from other vehicles.

Hybrid vehicles (I suppose we need a new word for this, since "hybrid" already means multiple fuel sources) would be semi-autonomous, able to make some decisions on their own but requiring a human in the loop to control the vehicle in certain situations, perhaps mandated by law, or when the computer can't make a valid decision. Above I referred to these as "human-assisted."

Industrial robots can repeat a set of tasks they have been "trained" to do, and adding sensors allows the machine to receive feedback on its actions and make corrections. A robot that knows the part it is working on is misaligned can re-align the part, or ask for help, instead of completing a task that will not be correct at the end.

Robots are considered semi-autonomous at this point, depending on their decision-making skills.

Then there are droids. All science fiction robots fit into this category, at least initially. Short for "android," these machines seem to have personalities and are able to perform their tasks with no supervision.
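If it helps to see the vocabulary laid out, here's a toy sketch of the categories as I currently think of them; the names and one-line descriptions are just my own shorthand, not any official taxonomy.

```python
# A toy sketch (my own labels, not an industry standard) of the categories
# described above, ordered roughly by how much decision-making the machine does.
from enum import Enum

class MachineCategory(Enum):
    RC_OR_DRONE = "remote-piloted; the human makes every decision"
    HUMAN_ASSISTED = "semi-autonomous; a human is required in the loop"
    AUTONOMOUS_VEHICLE = "navigates point-to-point on its own"
    INDUSTRIAL_ROBOT = "repeats trained tasks; sensors allow corrections"
    DROID = "the science-fiction ideal; personality, no supervision"

for category in MachineCategory:
    print(f"{category.name}: {category.value}")
```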

So here are my questions. Are these enough categories? Where does a typical Roomba fit into all of this? How about the current set of vehicles that can park themselves or brake to avoid accidents? What about a machine that can assist rescuers in an unsafe environment? How solid should the lines be between the categories? If the machine can be overridden by a human, does that mean it's human-assisted? I don't know the answers to these questions either, but I'm starting to research them because we need a better vocabulary to talk about these machines.

My Hometown

Today I participated in a Labor Day parade in a small community south of where I live. The community is a typical small-town kind of place with mostly older "Americana" kinds of streets: a big fire station on one end of town, a largish city hall where the road turns, and lots of little mom-and-pop stores in between. The parade included the kinds of things you'd expect: people running for office, marching bands, old cars, 4-H and Scouting, and a variety of local businesses showing their civic pride. Our local Shriners community is pretty active and fielded a number of bands, a float, and a variety of car clubs.
I had been offered a ride in one of the Model T's that the Shrine was going to parade in, but I was asked to fill in driving for the Shriner who normally drives the float with the band on it. And this contrast is what I want to talk about today.
It started when the Model T met me at my house. It was decided that I would follow them to the parade route in my personal car so that I could leave again after the parade since the driver of the Model T wanted to stay in town and enjoy the after-parade festivities. So I’m following the Model T which has spent its entire life in Colorado. It’s a gorgeous machine. And there are parts of both towns that still look the same as they did when the vehicle was new. The pavement may be a different color and the traffic lights are new, but the downtown buildings have been here for as long as the car has.
I followed the Model T in my 2015 rental car with all digital readouts, listening to satellite radio. Their driver wasn’t sure how to get back to the main road from my house so his passenger called me on the cell phone and I used the GPS to route him where he wanted to go.
It was then that it struck me how many anachronisms we were dealing with here, and how many more we’re going to have to handle in the next “few” years as the autonomous cars begin actively driving on our roads and in our communities.
There's going to be "some period" of time while both kinds of vehicles are on the streets. This period may be mandated by law or may just occur because some people will cling to their "classic" vehicles. Right now, "classic" means 25 years or older, but will there be a new term to distinguish human-controlled vehicles from autonomous ones? Will collectors help keep the "Golden Age" of automobiles alive?
The robot vehicles are going to have to compensate for the humans driving around them in some fashion. Right now, vehicles have lights and other devices to let the other human drivers know what the vehicle is likely to do. The cars of the future will probably talk to each other directly, allowing cars to safely operate closer to each other for instance. Right now we have spacing on the streets designed for human reaction time and comfort. If the vehicles can coordinate with each other, there’s no real reason they couldn’t be operating only a few inches from each other which might give us extra “lanes” at some point. Will we even need lanes in the future?
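Just to put rough numbers on that spacing idea, here's a back-of-the-envelope sketch; the speed, reaction time, and network latency are assumptions I picked for illustration, not measured values.

```python
# Back-of-the-envelope sketch: how much following gap is consumed by reaction time.
# The numbers below are illustrative assumptions, not measured values.
speed_mps = 30.0          # ~67 mph, expressed in meters per second
human_reaction_s = 1.5    # a commonly cited human perception/reaction time
v2v_latency_s = 0.05      # assumed latency for coordinated, networked vehicles

human_gap_m = speed_mps * human_reaction_s
v2v_gap_m = speed_mps * v2v_latency_s

print(f"Gap consumed by human reaction: {human_gap_m:.1f} m")
print(f"Gap consumed by V2V coordination delay: {v2v_gap_m:.1f} m")
# Under these assumptions the coordinated cars could, in principle, follow ~30x
# closer, which is where the "extra lanes" speculation above comes from.
```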
Conversely, if a vehicle breaks down in the middle of the road, will the autonomous vehicles be better able to maneuver around the obstacle than humans are today? Or will they do what so many tests show: if the lead car stops, the cars following it stop, too?
I don’t know the answer to these questions yet, but that’s what I’m hoping to help define. I know I’m not the only one thinking about these things and that makes me feel better already.

The Best Laid Plans of Mice…

It’s been a strange, stressful week. I’ll beg your indulgence for a moment here. I still want to consider how robots will be developed and tested but, today I want to go in a slightly different direction. I want to quote from the Hitchhiker’s Guide to the Galaxy.
In Douglas Adams' story, hyper-dimensional beings, who appear to humans as lab mice, attempted to sort out the "wretched question" of Life, the Universe, and Everything! They developed a computer called Deep Thought which would calculate the answer to the question once and for all. After generations, the computer came up with an answer: "42!" It then suggested that no one actually knew what the question was and that a bigger computer would have to be built, a computer so large, in fact, that it would require organic organisms to be part of the computational matrix. Deep Thought named the new computer "The Earth" (Adams, 1979, p. 171).
The Earth was built and, after two million years of processing, just before it could complete its calculations, an alien species destroyed it to make way for a hyperspace bypass.
The reason I bring this up, besides the fact that I'm a geek and it's my nature, is that companies do this kind of thing all the time. A company may develop on a shorter time-scale, for instance only worrying about this quarter's results. If it plans longer term, the further out the goal is, the more planning has to be done, and the more monitoring the company has to do to make sure the goal is still being reached. In the story, for example, had the mice been diligent about making sure the computer program was working, they would have noticed the plan to put in the hyperspace bypass and fought it, at least until their calculations had concluded.
The other thing to notice here is that laws may change while a product is being developed, or another team may develop something that is at cross-purposes to the project we are working on. In this case, government planning functionally rezoned an area that the mice were using but didn't bother to inform the mice. This implies that risk review must be a constant process in a project, especially if it runs for any length of time.

References
Adams, D. (1979). The Hitchhiker's Guide to the Galaxy. New York: Random House.

It’s a Metaphor!

Imagination-Celebration
A couple of years ago, I participated in the Imagination Celebration in downtown Colorado Springs. My team from Colorado Tech had fielded a group of SumoBots to show off STEM activities. A SumoBot is a small cubic robot, about 4 inches on a side, outfitted with a couple of IR sensors; its programming drives it to find its opponent and push it out of a ring drawn on the table. Interesting side note: the robots don't do too well out in direct sunlight, for some reason 🙂 A little bit of shade made them work much better!
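For anyone who hasn't seen one run, here's a minimal sketch of the kind of loop a SumoBot's "attack" program follows. The sensor functions and values below are stand-ins I made up for illustration, not the firmware our 'bots actually ran.

```python
# Minimal, made-up sketch of a SumoBot "attack" loop: the sensor functions and
# probabilities are illustrative placeholders, not our robots' actual firmware.
import random

def read_edge_sensor():
    """Pretend downward IR sensor: True if the ring edge (white line) is detected."""
    return random.random() < 0.1

def read_opponent_sensor():
    """Pretend forward IR sensor: True if an opponent is detected ahead."""
    return random.random() < 0.3

def drive(left, right):
    print(f"motors: left={left:+d} right={right:+d}")

for _ in range(10):                 # a real 'bot loops until the match ends
    if read_edge_sensor():
        drive(-100, -100)           # back away from the ring edge first
    elif read_opponent_sensor():
        drive(100, 100)             # opponent ahead: push at full power
    else:
        drive(100, -100)            # nothing seen: spin in place to search
```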
Anyway, the kids came up to watch the robots doing their thing. Since there was nothing to do but watch, the kids didn’t really get involved. I rummaged in my backpack and found a yellow and an orange sticker which I placed on top of the ‘bots. I got the kids to start cheering on the “orange” ‘bot and it won. Encouraged, we continued to cheer the orange bot which won again and again. To the kids, their cheering was helping even though the engineers in the group knew there were no sound sensors on the ‘bot. For the first hour with the new colors, the orange ‘bot won about 95% of its matches, a statistical improbability. The kids were happy, the robots were doing their thing, but the tester in me was suspicious…
This all reminds me of a fairly apocryphal story from the automotive industry (Gupta, 2007). A customer called GM soon after purchasing a new vehicle to complain that his vehicle "was allergic" to vanilla ice cream. Puzzled help personnel established that the vehicle was not ingesting the ice cream, but rather that when the customer purchased vanilla ice cream, and no other flavor, the car wouldn't start to take him (and the ice cream) home to his family. The engineers, understandably, wrote the guy off as a kook, knowing there was no way for the vehicle to know, much less care, about the ice cream purchase.
The customer continued to call in and complain. Intrigued, the manufacturer sent an engineer out to investigate and, hopefully, calm the customer. Interestingly, the engineer was able to confirm the problem: when the customer bought vanilla ice cream, and no other flavor, the vehicle did not start. Knowing the make-up of the vehicle, the engineer conducted some tests and found that the real problem was vapor lock, which resolved itself when the customer bought non-vanilla flavors because the store kept the vanilla up front, since it sold so much of it. If you bought a different flavor, you had to walk further into the store, and the additional time allowed the vapor lock to clear.
Sherry Turkle (2012) at MIT found that senior citizens given a "companion robot" that looked similar to a stuffed seal would talk to it and interact with it as though it were alive. Researchers found that the residents often placed the robot in the bathtub, thinking its needs were similar to a seal's. Though the ice cream story has been debunked by Snopes, the car owner had determined that the vehicle "didn't like" vanilla ice cream. We found similar behavior with the kids and the SumoBots: cheering the orange one led it to win. Investigation showed the orange robot had the attack software installed, while the yellow 'bot had line-following software installed instead. In all these instances, the humans interacted with the machines using a metaphor they understood: other living beings.
The lesson? The snarky answer is that the customer doesn't know what's going on and is trying to describe what they see. They often lack the vocabulary to explain what the machine/software is doing, but they are describing behavior that they believe they see. The engineer needs to pay attention to the clues, however. Sometimes the customer does something that seems really reasonable to the customer but that the product designer didn't think of. And sometimes the metaphor just doesn't stretch to what is being done.

References:
Gupta, N. (October 17, 2007). Vanilla ice cream that puzzled General Motors. Retrieved from http://journal.naveeng.com/2007/10/17/vanilla-ice-cream-that-puzzled-general-motors/
Snopes.com (April 11, 2011). Cone of silence. Retrieved from http://www.snopes.com/autos/techno/icecream.asp
Turkle, S. (2012). Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books.

Testing, Testing, Planning, Planning…

This week at OR, we've been discussing "scenario planning," a method for visualizing alternatives which allows planners to perform risk-management-type activities. Wade (2014) says scenario planning asks two fundamental questions: "What could the landscapes look like?" and "What trends will have an impact (on us) and how will they develop?" In Wade's model, two or more trends are considered, using an "either … or …" framework for each, then pairing the various endpoints in a matrix. For example, a researcher might decide that oil price and electrical supply are the two factors that will impact our future plans. The researcher would set two extremes for the oil price, say 'higher than now' and 'lower than now,' and two extremes for the electrical supply, say 'stable and plentiful' and 'unstable and scarce.' By combining those four extremes, the researcher could define a set of four scenarios which would allow for planning.
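Here's a minimal sketch of that 2x2 construction, using the oil-and-electricity example from the paragraph above (the labels are just the ones I used there):

```python
# Sketch of Wade-style scenario construction: pair the extremes of two trends
# to get four named scenarios. Labels follow the example in the paragraph above.
from itertools import product

oil_price = ["oil higher than now", "oil lower than now"]
electrical_supply = ["electricity stable and plentiful", "electricity unstable and scarce"]

scenarios = list(product(oil_price, electrical_supply))
for i, (oil, power) in enumerate(scenarios, start=1):
    print(f"Scenario {i}: {oil} / {power}")
# Each of the four combinations becomes a "landscape" to plan (or test) against.
```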
It turns out that much of what's written about scenario planning is based on financial forecasts. One notable failure of the model involved the Monitor Group, which, ironically, performed scenario planning for other companies. The problem they faced was akin to a mechanic getting into an accident because his own vehicle had faulty brakes (Hutchinson, 2012). Monitor Group got into trouble when they began to experience negative cash flow. They trimmed their staff by 20% and assumed they would weather the coming storm until the market picked back up (Cheng, n.d.).
It didn’t. They didn’t.
Victor Cheng (ibid.) opined that Monitor fell because they spent too much time in denial that their models were not working. Desperate for cash, they contracted with Libyan dictator Moammar Gadhafi in an attempt to improve his image, which ironically hurt theirs. He suggested that Monitor should have "protected its reputation" to be able to borrow during their downturn. They should have monitored (no pun intended) their pride and ego, which told them they couldn't be having these problems since they solve them for others. Hutchinson (ibid.) also suggested that this is a common problem within organizations, where collectively held beliefs can be spectacularly wrong. And finally, Cheng warns, a "strategy" is only as good as the paper it's written on. Execution of the strategy is hard and needs to be monitored closely to prevent ending up alone and in the weeds.
So why am I writing about this you ask? To me, this scenario planning is at the core of software testing. How do we know which scenarios to test? How do we find interesting combinations of actions? How do we validate that complex behaviors are working in a way that makes sense?
Many of the same tools can be used to define the scenarios for testing autonomous vehicles. Delphi, affinity grouping, and brainstorming are all well-known ways of collecting requirements, and each can be used to help define scenarios we would be interested in. Once we have the many thousands of ideas for what a vehicle must do in a specific situation, we can start grouping them together, by process or by mechanical subsystem, to find overlaps.
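As a small sketch of what that grouping step can look like, here's a toy affinity pass over a handful of made-up scenario ideas; the ideas and theme tags are invented for illustration only.

```python
# Toy sketch of affinity grouping: bucket raw scenario ideas by a shared theme tag.
# The ideas and tags are invented for illustration only.
from collections import defaultdict

raw_ideas = [
    ("stop for a pedestrian stepping off the curb", "pedestrians"),
    ("yield to a jaywalker mid-block", "pedestrians"),
    ("merge onto an icy highway ramp", "weather"),
    ("detect a stalled car around a blind curve", "obstructions"),
    ("navigate a lane closed for construction", "obstructions"),
    ("drive into low sun glare at dusk", "weather"),
]

groups = defaultdict(list)
for idea, theme in raw_ideas:
    groups[theme].append(idea)

for theme, ideas in groups.items():
    print(f"{theme} ({len(ideas)} ideas)")
    for idea in ideas:
        print(f"  - {idea}")
```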
I recently began a project to define the tests that were necessary for a small cash-register program I was brought in to test. After playing with the application for a week, I started writing test cases. I came up with nearly 1400 of them before I got to talk to the development team. I showed them my list and told them that I would need their help to prioritize it, since we obviously would not be able to test them all. Their eyes widened. Then they asked me a question that shocked both them and me: "How much of this have you already tested while working this week?"
I set down my sheet and said, “Before I defined these tests, I felt I had a fairly good grasp of your product. I would have said I was working at about 80% coverage. Now, looking at all these paths, I might have been up to about 10% coverage.”
I believe the automotive testers such as Google have only begun to scratch the surface (no pun intended) of their required testing, even with all the miles they have under their belts.
Next post, we’ll start looking at some of those possible scenarios and we’ll start trying to define a priority for them as well…

References

Cheng, V. (n.d.). Monitor Group bankruptcy – the downfall. Retrieved from http://www.caseinterview.com/monitor-group-bankruptcy

Hutchinson, A. (November 13, 2012). Monitor Group: a failure of scenario planning. Retrieved from http://spendmatters.com/2012/11/13/monitor-group-a-failure-of-scenario-planning/

Roxburgh, C. (November 2009). The use and abuse of scenarios. Retrieved from http://www.mckinsey.com/insights/strategy/the_use_and_abuse_of_scenarios

Wade, W. (May 21, 2014). "Scenario planning" – thinking differently about future innovation (and real-world applications). Retrieved from http://e.globis.jp/article/000363.html

RIP HitchBOT

I’m not sure what to make of this. Researchers in Port Credit, Ontario created a “robot” based on the Flat Stanley principle and turned it loose about a year ago. Like Flat Stanley, HitchBOT required strangers to pick it up and transport it to a new destination. Like Flat Stanley, people took their pictures with it at interesting events and people wrote about their experiences travelling with it.
HitchBOT travelled more than 6000 miles across Canada, visited Germany and the Netherlands, then began its journey across the United States with the goal of one day reaching San Francisco. The robot was built to help researchers answer the question, "Can robots trust humans?" Brigitte Deger-Smylie (Moynihan, 8/4/2015), a project manager for the HitchBOT experiment at Toronto's Ryerson University, said they knew the robot could become damaged and had plans for "family members" to repair it if needed.
The three-foot-tall, 25-pound robot was a robot in name only, as it was literally built out of buckets with pool noodles for arms and legs. Because of privacy concerns, the machine did not have any real-time surveillance abilities. It could, however, respond to verbal input and take pictures, which it could post to its social media site along with GPS coordinates. Researchers could not operate the camera remotely.
Starting July 16, HitchBOT travelled through Massachusetts, Connecticut, Rhode Island, New York, and New Jersey. When it reached Philadelphia, though, it was decapitated and left on the side of the road on August 1st. Knowing how attached people get to objects which have faces, I wonder how the person who dropped it off last feels, knowing they were the last to interact with it. As a tester, I wonder how HitchBOT responded to being damaged, since it had at least rudimentary speech-processing abilities.
The researchers say there are several ways of looking at the "decapitation." One way is to think, "Of course this happened, people are awful." Another way is to think, "Americans are obnoxious, so of course it happened here." Or worse, "Of course this happened in Philly, where fans once lashed out at Santa Claus." The project team suggests that the problem was an isolated incident involving "one jerk" and that we should concentrate on the distance the machine got and the number of "bucket list" items it was able to complete before it was destroyed. Deger-Smylie (ibid.) says the team learned a lot about how humans interact with robots in non-restricted, non-observed ways, interactions which were "overwhelmingly positive."
This makes me wonder. Will it be a "hate crime" to destroy robots one day? Will protesters picket offices where chips are transplanted, effectively changing the robot from one "being" to another? If a robot hurts a human, will they all be recalled for retraining? Where do we draw the line between what is "alive" and what is not? Does that question even mean anything? Sherry Turkle at MIT is researching how "alive" robots have to be to serve as companions. I read a fascinating sci-fi novel back in the day called "Flight of the Dragonfly." In it, a starship had a variety of AI personalities to help the crew maintain their sanity. The ship was damaged at one point and the crew had to abandon it, but they were afraid to leave the injured ship on its own to die. The ship reminded the crew that the devices they used were all extensions of itself and that the different voices it used were just fictions to help the humans interact with it. How are these "fictions" going to play out with people who already name their cars?
In the meantime, the Hacktory, a Philly-based art collective, is taking donations to rebuild HitchBOT and send it back on the road.

References:
Moynihan, T. (August 4, 2015). Parents of the decapitated HitchBOT say he will live on. Retrieved from http://www.wired.com/2015/08/parents-decapitated-hitchbot-say-will-live/

The Future Is So Bright, I Gotta Wear Shades…Or Blinders

I see autonomous vehicles being introduced in a serious way in the next 15-20 years. I don't think they will be widespread per se, but I think they will be on the roads. I think this will spin off a fair number of competing/collaborative technologies.
I don't think the vehicles will be like KITT in Knight Rider, but I don't think they'll be far off, after a while. Initially, your car will be waiting for you in the driveway. When you come out, it will start itself, position itself to pick you up if necessary, and ask your destination. After you tell it, the vehicle will check road conditions and plan the optimal route for you to get to work, taking considerations such as your love of Starbucks drive-thrus or cell phone reception into account. While you take a nap, or talk on the phone, or conduct a meeting, the vehicle will navigate the streets, avoiding pedestrians, other vehicles, and various obstacles. The vehicle will drop you off at work and then park itself, waiting for you to call for it again.
This idea has been around for decades.
And yet, I see some potential problems with the rollout. Most people like the idea of the vehicle driving itself so that they "don't need to." But people seem to be in two camps about these vehicles as they currently exist: they either love the idea and can't imagine that anything might go wrong, or they imagine the vehicles will lead to Terminator-type robots in the near future. On the one hand, people are already suggesting that insurance companies will go under once autonomous vehicles are common because accidents will be so rare. And on the other, people are afraid we may end up with a fleet of machines like the movie vehicle Christine, bent on killing us all. The truth is somewhere in between, I imagine. On a slightly more serious note, sooner or later the vehicles will be forced to make a bad decision when confronted with a no-win scenario, such as hitting one of several objects. If an accident is unavoidable in heavy traffic on slick roads, will the vehicle hit the vehicle in front of it, let the one behind it hit it, or swerve and hit an oncoming car or pedestrian? How will the vehicle determine which is the "best thing" to hit? One potential solution would be to allow the vehicles to talk to each other so that they know the relative worth of each player. A vehicle with four people in it might be considered more valuable than a vehicle with only one occupant, according to the insurance people who would pay out the medical claims. Of course, if the vehicles are communicating like that, you know there will be an aftermarket device/mod that reports the vehicle has 10 people on board and therefore should not be hit!
And, as the machines become more "responsible," they may report information to waiting police sentries concerning your health/sobriety, the vehicle's health/maintenance, or the number of "registered people" in the vehicle and whether they are wearing their seatbelts. Ack!
Some interesting things may happen if the vehicles are successful. Bus transportation (and other delivery-type activities) should be easier to set up and maintain. Fewer vehicles may end up on the road as individual vehicles get "loaned out" during their idle times, meaning that an individual wouldn't need exclusive rights to a machine that is acting as a taxi/chauffeur. And if the vehicles do become successful, the marketing will be about how relaxed you can be in the vehicle, how comfortable the seats are, how good the internet connections are, or how nice the stereo (or equivalent) is.
Or we'll end up with machines working like a conveyor belt, following predetermined routes like busses do. And maybe swerving to run us down, since pedestrians won't have the aftermarket "no hit" devices. 🙂

Keep Your Requirements Close But Your Crazy Ideas Closer

One of the courses I teach, which I’m quite proud of, is the capstone course for the CS and IT majors. In the two semester course, we design, build, and release something to the world. The projects are based on interests in the various groups, usually 2-4 people per group. We’ve built robots, network sniffing tools, video games (some better than others ;)), and a variety of web portal kinds of tools. This cycle, one student team built an existential video game called Unbearable which was impossible to win and whose arcade-style high score tracker published random numbers. It was a silly game on the surface but had lots of ironic and fun easter egg kinds of behavior.
The other teams collaborated to build what was functionally a telepresence device. The intent of the device when they started was to be able to "call home" to see your dogs and make sure they were OK. They had promised "bark recognition" software that would send you a text if the dog was barking too much, and pan-tilt-zoom functionality for the camera so you could watch the dog. Early designs had a ball thrower so you could play with the dog remotely, but that was simply too complex for the time we had. There had been some discussion about blowing bubbles for the dog to chase instead, or perhaps giving the dog "some kibble" to get it to come to the device. The "kibble launcher" became the in-joke for the two semesters but was moved to "nice to have" instead of a requirement.
Last night, the team demo’d their product with the intent of showing the customer that the product should be funded to actually be built. One of the team members brought their dog in to show the reaction to the device. Generally, the dog was less than impressed 🙂 We joked that they should have included the “kibble launcher” after all to get the dog’s attention. The student responsible for the case that the telepresence device lived in reached under the desk and pulled out a small box-like object which he hooked onto the side of the device. You could see there was a place for wires and a motor though they had not actually been installed. He then reached into his backpack and pulled out a baggie of dry dog food which he loaded into the new add-on. It was manually activated but the dog was excited to get some “kibble” so at least hung around the device during the demo. Network issues in the classroom prevented the demo from working well but we had seen the process work in the past.
The reason I bring this all up is that the "kibble launcher" started as a joke with the team while we were brainstorming the functionality of the product. In brainstorming, you write down all the ideas that come up, no matter how odd they may seem. Using affinity diagrams and other tools, we pared down the ideas to a manageable number of requirements to build. The launcher was deemed an "if we have time" kind of feature and was shelved. But we made so much fun of the idea that the cabinet builder kept thinking about how to actually do it. He mocked up a prototype, which he proudly displayed last night. It helped their product and would have differentiated it on the market if we had really built it.
My point is that we often come up with crazy ideas in brainstorming sessions and we filter them out right away because they’re silly/stupid/too expensive. But sometimes, those ideas live on and we find a way to incorporate them into the product. And we should. Those “crazy ideas” are how we sometimes get new, cool products.

Testers and Developers

I started a new job today. It doesn’t involve robots (yet!) but it’s about setting up a testing lab to make sure the various devices we interact with work correctly with our products. The lead developer and I went to lunch today to discuss his plans for the team.
He's very excited to have a professional tester on the team finally, because he believes, as I do, that testers have a different mindset than developers do. I agree with him completely. Testers want to "break" the code; they are curious to see what will happen when ___. Developers tend to want the code "to work." Testers try entering invalid data or clicking the buttons out of order; developers typically test the "happy path."
There's nothing wrong with these two viewpoints, but we need to acknowledge them as different. And we need to realize that we are both working on the same team; it's not Testers vs. Developers. It should be that we're both fighting the defects that will hurt/annoy/anger our customers.
We agreed on that. But what he said next really floored me, at first. He said he was thinking about bringing in some computer science majors as college interns. He wants them to run some basic/UAT kinds of tests and see if they want to continue with the company as developers. WHAT?! This seemed diametrically opposed to what he claimed earlier about the mindsets being different. He asked me about my thoughts on this and I told him it seemed counter to his goal of having testers.
He defended his position by saying that Test Driven Development works better if the developers understand the idea of testing to begin with. He said he's found the best way to teach testing is to have the "students" do it. I agreed with that. When I teach testing courses, one of the things we look at is the airport parking cost tool at the Grand Rapids airport (http://www.grr.org/ParkCalc.php). In the class we brainstorm what the requirements likely were, then test the finished product based on those requirements. We look at the site to find out what the rates are and manually calculate some of the prices, just to make sure the Calculator works correctly.
The Calculator is a simple form with a pulldown to select the parking type and a couple boxes to enter the start and stop dates and times. When you hit Submit, the Calculator should tell you how long you are staying at the airport and how much it will cost to park your car there. If you’re testing the Happy Path, all is good.
But if you're not… and you try February 30 as one of the dates, the Calculator accepts it and still performs the calculation. It turns out the Calculator accepts a lot of crazy dates like '9999999' and '-9999999' and 6/8/2015 and 2015/06/08 and "date". Ack! It also turns out I'm not the only one using this website; there are testing competitions to find out who can find the largest or the smallest payment required. Can anyone get a negative price? And it turns out the calculation is not always correct in any case, a regular functional defect that may require a fair number of test cases to pin down.
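To show the kind of checking the Calculator is skipping, here's a small sketch. The parking_duration_hours function is my own toy stand-in for whatever runs behind the ParkCalc form, not its actual code; the interesting part is which inputs a strict parser rejects.

```python
# Sketch of the date checking the Calculator is missing. `parking_duration_hours`
# is a toy stand-in for whatever runs behind the ParkCalc form; the point is that
# strict parsing rejects the "crazy dates" the real form happily accepts.
from datetime import datetime

def parking_duration_hours(start: str, end: str) -> float:
    """Parse MM/DD/YYYY HH:MM strings strictly and return the stay length in hours."""
    fmt = "%m/%d/%Y %H:%M"
    start_dt = datetime.strptime(start, fmt)   # raises ValueError on 02/30, "date", etc.
    end_dt = datetime.strptime(end, fmt)
    if end_dt <= start_dt:
        raise ValueError("end must be after start")
    return (end_dt - start_dt).total_seconds() / 3600

# The inputs below are the kinds of values the live Calculator accepted without complaint.
for start, end in [
    ("02/30/2016 10:00", "03/01/2016 10:00"),  # February 30 does not exist
    ("9999999", "03/01/2016 10:00"),           # nonsense numeric "date"
    ("date", "03/01/2016 10:00"),              # plain text instead of a date
    ("03/02/2016 10:00", "03/01/2016 10:00"),  # end before start
    ("03/01/2016 10:00", "03/02/2016 10:00"),  # the happy path, 24 hours
]:
    try:
        print(start, "->", end, "=", parking_duration_hours(start, end), "hours")
    except ValueError as err:
        print(start, "->", end, "rejected:", err)
```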
In this particular example, the fact that the Calculator is wrong is more amusing than problematic. But what if the Calculator fed its number into a charge card or accounting process? Would we care then? You betcha!
So, I'm all for training folks to be testers. I'm all about showing Developers why testing matters and how they can be "more robust" in their efforts. And let's face it, if the Developers are doing the testing and they get the idea that the same test with differing values and boundary conditions might be useful, they might start to think of testing as cyclic, and that might make them think of loops, and that might lead them to build some interesting tools that let us try a larger variety of tests than we typically do now. And I'm definitely all for that!
Especially once the robots start getting built!

The Real World is a Messy Place

The Real World is a messy place, and we need to fit that messiness into the decision processes of our autonomous vehicles. In Acceptance Testing, we would certify that the machine was instructed to perform a specific task and it did; therefore, we are happy. We could make a list of these tasks and check them off one by one, for example:
a) The vehicle travelled from point A to point B without hitting anything.
b) The vehicle swerved correctly to miss a person standing in the road.
c) The vehicle changed roads because there was an obstruction ahead.
And so on! There are a million scenarios to test if we want to be certain that the machine will operate correctly, regardless of the situation.
But what if we allowed other types of testing to become involved? White Box testing would allow us to test the individual sensors, or to feed them bad data specifically to confuse the vehicle. Exploratory Testing might ask "what-if" questions like "What if it's snowing?" or "What if visibility is low?" or "What if the road is gravel?"
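Here's a tiny sketch of how fast those what-if questions multiply once you start combining them; the factors and values are examples I made up, not a real test plan.

```python
# Toy sketch: combining a few environmental "what-ifs" into test scenarios.
# The factors and values are examples, not a real test plan.
from itertools import product

weather = ["clear", "rain", "snow"]
visibility = ["daylight", "dusk", "night", "fog"]
road_surface = ["dry asphalt", "wet asphalt", "ice", "gravel"]
traffic = ["empty road", "light traffic", "stop-and-go"]

scenarios = list(product(weather, visibility, road_surface, traffic))
print(f"{len(scenarios)} combinations from just four factors")  # 3 * 4 * 4 * 3 = 144

# A sample of what the test charters start to look like:
for combo in scenarios[:3]:
    print("Test the vehicle in:", ", ".join(combo))
```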
The Real World rarely has only one variable at play, however; just ask anyone who has used a flight simulator. Eventually, the testing is going to have to try multiple problems at once. Let's explore Hypothetical Scenario 17:

An autonomous vehicle is driving down an icy city side-street. There is a woman bundled against the cold, carrying a baby, attempting to cross the street. There are parked cars on both sides of the road. The vehicle in the on-coming lane abruptly turns left in front of our robot, perhaps into its own driveway. Our vehicle is going to hit something. What does the robot hit? How does it decide?
So far, all of our tests have been based on the idea that there is a "correct" answer for the robot to pick. What if that isn't true? What will the robot do when faced with having to select a "bad" choice?

If a human were driving, s/he might reasonably select the target least likely to be damaged by the vehicle. They might select whatever is directly in front of them. They might swerve or perform another action specifically to avoid the pedestrian. The human would make this choice based, potentially, on experience, their own value judgements, or some other consideration. What does the robot do?
If we are concerned about this scenario, perhaps we need to design an algorithm for the robot. An auto insurance company might suggest hitting the thing with the least value, relatively speaking. A Yugo parked on the side of the road is much cheaper to repair than a Bentley. A moving vehicle with one occupant is potentially cheaper than a vehicle with four passengers, medical claims-wise. We’d have to come up with whatever our algorithm is and build in a way for the robot to learn the information which leads to the decision.
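Purely as a thought experiment, such an algorithm might look like the sketch below. The candidate objects, repair costs, and injury weighting are numbers I invented for illustration; they are not a real insurer's figures or anyone's actual policy.

```python
# Thought-experiment sketch of a "least total cost" target chooser. The candidate
# objects, repair costs, and injury weighting are invented for illustration; they
# are not a real insurer's numbers or anyone's actual policy.
def expected_cost(obstacle: dict) -> float:
    injury_cost_per_person = 250_000   # assumed claim value per occupant at risk
    return obstacle["repair_cost"] + obstacle["occupants"] * injury_cost_per_person

candidates = [
    {"name": "parked economy car", "repair_cost": 8_000,  "occupants": 0},
    {"name": "parked luxury car",  "repair_cost": 90_000, "occupants": 0},
    {"name": "oncoming car",       "repair_cost": 25_000, "occupants": 4},
    {"name": "pedestrian",         "repair_cost": 0,      "occupants": 1},
]

choice = min(candidates, key=expected_cost)
print("Algorithm picks:", choice["name"], "at estimated cost", expected_cost(choice))
# Which immediately raises the question of where those numbers come from,
# and whether they can be trusted.
```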
One potential solution is to allow/require the vehicles to talk to each other. Car A reports that it has three passengers but good airbags. Car B reports that it is empty and a rental car. The pedestrian reports nothing. What does the robot hit now? Worse, if there is communication between the vehicles, how long until an after-market device is introduced claiming its vehicle is “expensive” by whatever scale? If everyone uses this device, we’re back to the original problem of having to make a bad decision with no data. Even worse, how long until there are websites (or whatever) which show backyard mechanics how to hack the decision-making process to protect their own vehicles/families? And even more exciting, if the cars are reporting on themselves, why can’t the police poll all the cars going by to see who’s speeding and not wearing their seatbelts and who hasn’t done timely “proper maintenance”?
And an even more troubling question is, who’s in trouble if the robot chooses a “bad” solution? The “driver” or the owner of the vehicle? Perhaps the builder, or the developer, or the tester? How will we know why the robot made the choice it did? Does it need to log the information in some way? If a human would have made a different decision, does that trump whatever the robot actually did? This is akin to telling Developers to make it work like the existing system/prototype without defining the actual requirements.
There’s going to be a period of transition as we bring these vehicles into reality. There will be some period of time, perhaps decades, where there will be autonomous, semi-autonomous, and human-driven vehicles all interacting on the same roads. We’re working hard to bring these vehicles to our streets, both in the US and elsewhere. So far, it seems that we’re excited that the robots can do simple tasks in controlled environments. But the Real World is not a controlled environment and maybe we need to think more about integrating messy, fragile, and litigious humans into the mix.