My Hometown

Today I participated in a Labor Day parade in a small community south of where I live. The community is a typical small-town kind of place with mostly older “Americana” streets: a big fire station on one end of town, a largish city hall where the road turns, and lots of little mom-and-pop stores in between. The parade included the kinds of things you’d expect: people running for office, marching bands, old cars, 4-H and Scouting, and a variety of local businesses showing their civic pride. Our local Shriners community is pretty active and fielded a number of bands, a float, and a variety of car clubs.
I had been offered a ride in one of the Model Ts that the Shrine was entering in the parade, but I was asked instead to fill in for the Shriner who normally drives the float with the band on it. And this contrast is what I want to talk about today.
It started when the Model T met me at my house. It was decided that I would follow them to the parade route in my personal car so that I could leave again after the parade; the driver of the Model T wanted to stay in town and enjoy the after-parade festivities. So I’m following the Model T, which has spent its entire life in Colorado. It’s a gorgeous machine. And there are parts of both towns that still look the same as they did when the vehicle was new. The pavement may be a different color and the traffic lights are new, but the downtown buildings have been here for as long as the car has.
I followed the Model T in my 2015 rental car with its all-digital readouts, listening to satellite radio. Their driver wasn’t sure how to get back to the main road from my house, so his passenger called me on his cell phone and I used the GPS to route him where he wanted to go.
It was then that it struck me how many anachronisms we were dealing with here, and how many more we’re going to have to handle in the next “few” years as the autonomous cars begin actively driving on our roads and in our communities.
There’s going to be “some period” of time when both kinds of vehicles are on the streets. This period may be mandated by law or may simply happen because some people will cling to their “classic” vehicles. Right now, “classic” means 25 years or older, but will there be a new term to distinguish human-controlled vehicles from autonomous ones? Will collectors help keep the “Golden Age” of automobiles alive?
The robot vehicles are going to have to compensate in some fashion for the humans driving around them. Right now, vehicles have lights and other devices to let other human drivers know what the vehicle is likely to do. The cars of the future will probably talk to each other directly, allowing them to safely operate closer together, for instance. Right now we have spacing on the streets designed around human reaction time and comfort. If the vehicles can coordinate with each other, there’s no real reason they couldn’t operate only a few inches apart, which might give us extra “lanes” at some point. Will we even need lanes in the future?
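To put a rough number on that spacing argument, here’s a back-of-the-envelope sketch in Python. The speed, reaction time, and communication latency are my illustrative assumptions, not figures from any study:

```python
# Following gap needed to cover the follower's reaction delay, assuming both
# vehicles brake equally hard. All numbers here are illustrative guesses.

def following_gap_m(speed_mps: float, reaction_s: float) -> float:
    """Distance traveled during the follower's reaction latency."""
    return speed_mps * reaction_s

highway_speed = 30.0  # meters/second, roughly 67 mph

human_gap = following_gap_m(highway_speed, reaction_s=1.5)  # typical human
v2v_gap = following_gap_m(highway_speed, reaction_s=0.05)   # assumed radio latency

print(f"Human driver gap:    {human_gap:.1f} m")  # 45.0 m
print(f"Coordinated V2V gap: {v2v_gap:.1f} m")    # 1.5 m
```

Even this crude model shrinks the gap thirty-fold; add coordinated braking on top of it and those inches-apart platoons start to look plausible.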
Conversely, if a vehicle breaks down in the middle of the road, will the autonomous vehicles be better able to maneuver around the obstacle than humans are today? Or will they do what so many tests show: if the lead car stops, the cars following it stop, too?
I don’t know the answers to these questions yet, but that’s what I’m hoping to help define. I know I’m not the only one thinking about these things, and that makes me feel better already.

It’s a Metaphor!

A couple of years ago, I participated in the Imagination Celebration in downtown Colorado Springs. My team from Colorado Tech had fielded a group of SumoBots to show off STEM activities. A SumoBot is a small cubic robot, about four inches on a side, outfitted with a couple of IR sensors; its programming drives it to find its opponent and push it out of a ring drawn on the table. Interesting side note: the robots don’t do too well out in direct sunlight, for some reason 🙂 A little bit of shade made them work much better!
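For the curious, the seek-and-push behavior is only a few lines of logic. Here’s a minimal sketch; the sensor and motor interfaces are hypothetical stand-ins, not the actual firmware we ran:

```python
# Toy SumoBot logic: map two IR opponent sensors to (left, right) motor power.
# (Direct sunlight swamps IR sensors, which is likely why the shade helped.)

def sumo_step(ir_left: bool, ir_right: bool) -> tuple[float, float]:
    if ir_left and ir_right:
        return (1.0, 1.0)   # opponent dead ahead: push at full power
    if ir_left:
        return (0.3, 1.0)   # steer toward the left-side contact
    if ir_right:
        return (1.0, 0.3)   # steer toward the right-side contact
    return (0.5, -0.5)      # nothing seen: spin in place and search

# One decision per sensor state:
for reading in [(True, True), (True, False), (False, True), (False, False)]:
    print(reading, "->", sumo_step(*reading))
```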
Anyway, the kids came up to watch the robots doing their thing. Since there was nothing to do but watch, the kids didn’t really get involved. I rummaged in my backpack and found a yellow and an orange sticker, which I placed on top of the ’bots. I got the kids to start cheering on the “orange” ’bot and it won. Encouraged, we continued to cheer the orange ’bot, which won again and again. To the kids, their cheering was helping, even though the engineers in the group knew there were no sound sensors on the ’bot. For the first hour with the new colors, the orange ’bot won about 95% of its matches, a statistical improbability. The kids were happy, the robots were doing their thing, but the tester in me was suspicious…
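Just how improbable? A quick binomial check makes the tester’s suspicion concrete. The match count below is my rough guess; the point is the order of magnitude:

```python
# If the 'bots were evenly matched, each should win about half the time.
# Probability of 19+ wins out of 20 fair matches (match count assumed):

from math import comb

n, wins = 20, 19
p_tail = sum(comb(n, k) for k in range(wins, n + 1)) / 2**n
print(f"P(>= {wins}/{n} wins if fair) = {p_tail:.6f}")  # 0.000020
```

Two chances in a hundred thousand is not “cheering works.” Something else was going on.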
This all reminds me of an apocryphal story from the automotive industry (Gupta, 2007). A customer called GM soon after purchasing a new vehicle to complain that his vehicle “was allergic” to vanilla ice cream. Puzzled help personnel established that the vehicle was not ingesting the ice cream, but rather that when the customer purchased vanilla ice cream, and no other flavor, the car wouldn’t start to take him (and the ice cream) home to his family. The engineers, understandably, wrote the guy off as a kook, knowing there was no way for the vehicle to know, much less care, about the ice cream purchase.
The customer continued to call in and complain. Intrigued, the manufacturer sent an engineer out to investigate and, hopefully, calm the customer. Interestingly, the engineer was able to confirm the problem: when the customer bought vanilla ice cream, and no other flavor, the vehicle did not start. Knowing the make-up of the vehicle, the engineer conducted some tests and found that the real problem was vapor lock, which resolved itself when the customer bought non-vanilla flavors. The store kept the vanilla up front because it sold so much of it; if you bought a different flavor, you had to walk farther into the store, and the additional time allowed the vapor lock to clear.
Sherry Turkle (2012) at MIT found that senior citizens given a “companion robot” that looked similar to a stuffed seal would talk to it and interact with it as though it were alive. Researchers found that the residents often placed the robot in the bathtub, thinking its needs were similar to a seal’s. Though the story has been debunked by Snopes, the car owner determined that the vehicle “didn’t like” vanilla ice cream. We found similar behavior with the kids and the SumoBots: cheering the orange one led it to win. Investigation showed the orange robot had the attack software installed, while the yellow ’bot had line-following software installed instead. In all these instances, the humans interacted with the machines using a metaphor they understood: other living beings.
The lesson? The snarky answer is that the customer doesn’t know what’s going on and is trying to describe what they see. They often lack the vocabulary to explain what the machine or software is doing, but they are describing behavior that they believe they see. The engineer needs to pay attention to the clues, however. Sometimes the customer does something that seems perfectly reasonable to them that the product designer didn’t think of. And sometimes the metaphor just doesn’t stretch to what is being done.

References:
Gupta, N. (October 17, 2007). Vanilla ice cream that puzzled General Motors. Retrieved from http://journal.naveeng.com/2007/10/17/vanilla-ice-cream-that-puzzled-general-motors/
Snopes.com (April 11, 2011). Cone of silence. Retrieved from http://www.snopes.com/autos/techno/icecream.asp
Turkle, S. (2012). Alone together: Why we expect more from technology and less from each other. New York: Basic Books.

Testing, Testing, Planning, Planning…

This week at OR, we’ve been discussing “scenario planning,” a method for visualizing alternatives that allows planners to perform risk-management-type activities. Wade (2014) says scenario planning asks two fundamental questions: “What could the landscapes look like?” and “What trends will have an impact (on us) and how will they develop?” In Wade’s model, two or more trends are considered using an “either … or …” framework for each, and the various endpoints are then paired in a matrix. For example, a researcher might decide that oil price and electrical supply are the two factors that will impact our future plans. The researcher would set two extremes for the oil price, say ‘higher than now’ and ‘lower than now,’ and two extremes for the electrical supply, say ‘cheap and plentiful’ and ‘expensive and scarce.’ By combining those extremes, the researcher could define a set of four scenarios to plan against.
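The matrix mechanics are trivially automatable. Here’s a small sketch that crosses the endpoints of the two example trends into the four scenarios (the trend names come from the example above; the code itself is just my illustration):

```python
# Build Wade's 2x2 scenario matrix by crossing the endpoints of two trends.

from itertools import product

trends = {
    "oil price": ["higher than now", "lower than now"],
    "electrical supply": ["cheap and plentiful", "expensive and scarce"],
}

for i, combo in enumerate(product(*trends.values()), start=1):
    scenario = dict(zip(trends, combo))
    print(f"Scenario {i}: {scenario}")

# Adding a third trend with two endpoints doubles the count to eight:
# the planning workload grows exponentially with the trends considered.
```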
It turns out that much of what’s written about scenario planning is based on financial forecasts. One notable failure of the model involved the Monitor Group, which, ironically, performed scenario planning for other companies; the problem they faced was akin to a mechanic getting into an accident because his own vehicle had faulty brakes (Hutchinson, 2012). Monitor Group got into trouble when they began to experience negative cash flow. They trimmed their staff by 20% and assumed they would weather the coming storm until the market picked back up (Cheng, n.d.).
It didn’t. They didn’t.
Victor Cheng (ibid) opined that Monitor fell because they spent too much time in denial that their models were not working. Desperate for cash, they contracted with Libyan dictator Moammar Gadhafi in an attempt to improve his image, which, ironically, hurt theirs. He suggested that Monitor should have “protected its reputation” so it could borrow during the downturn. They should have monitored (no pun intended) their pride and ego, which told them they couldn’t be having these problems since they solved them for others. Hutchinson (ibid) also suggested that this is a common problem within organizations, where collectively held beliefs can be spectacularly wrong. And finally, Cheng warns, a “strategy” is only as good as the paper it’s written on. Execution of the strategy is hard and needs to be monitored closely to prevent ending up alone and in the weeds.
So why am I writing about this, you ask? To me, scenario planning is at the core of software testing. How do we know which scenarios to test? How do we find interesting combinations of actions? How do we validate that complex behaviors are working in a way that makes sense?
Many of the same tools can be used to define the scenarios for testing autonomous vehicles. Delphi, affinity grouping, and brainstorming are all well-known ways of collecting requirements, and each can be used to help define scenarios we would be interested in. Once we have the many thousands of ideas for what a vehicle must do in a specific situation, we can start grouping them by process or mechanical similarity to find overlaps.
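As a toy illustration of that grouping step (my sketch, not a tool we actually used), even a naive keyword pass starts to surface the overlaps:

```python
# Naive affinity grouping: bucket scenario ideas by shared keywords.

from collections import defaultdict

ideas = [
    "stop for a pedestrian in a crosswalk",
    "merge onto a highway in heavy rain",
    "yield to a pedestrian jaywalking at night",
    "route around a stalled car blocking the merge lane",
]
keywords = ["pedestrian", "merge", "rain", "stalled"]

groups = defaultdict(list)
for idea in ideas:
    for kw in keywords:
        if kw in idea:
            groups[kw].append(idea)

for kw, members in sorted(groups.items()):
    print(f"{kw}: {members}")
# Ideas landing in multiple buckets are the overlaps worth examining first.
```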
I recently began a project to define the tests necessary for a small cash register program I had been brought in to test. After playing with the application for a week, I started writing test cases. I came up with nearly 1400 of them before I got to talk to the development team. I showed them my list and told them I would need their help prioritizing it, since we obviously would not be able to test them all. Their eyes widened. Then they asked me a question whose answer shocked both them and me: “How much of this have you already tested while working this week?”
I set down my sheet and said, “Before I defined these tests, I felt I had a fairly good grasp of your product. I would have said I was working at about 80% coverage. Now, looking at all these paths, I might have been up to about 10% coverage.”
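The drop from 80% to 10% shouldn’t be surprising once you do the multiplication. Here’s the back-of-the-envelope version; the option counts are invented, but a real cash register has at least this many independent choices:

```python
# Independent options multiply: even modest feature counts explode the
# number of distinct paths worth testing.

from math import prod

factors = {
    "payment type": 4,       # cash, credit, debit, gift card (assumed)
    "discount": 3,           # none, percentage, fixed amount
    "tax status": 2,         # taxable, exempt
    "item count bucket": 5,  # 0, 1, a few, many, maximum
    "receipt": 2,            # printed, emailed
    "user role": 3,          # cashier, supervisor, admin
}

print(prod(factors.values()))  # 720 combinations from just six options
```

A week of exploratory testing samples a sliver of that space, which is exactly why my coverage estimate collapsed once the paths were written down.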
I believe the autonomous-vehicle testers, such as Google, have only begun to scratch the surface (no pun intended) of their required testing, even with all the miles they have under their belts.
Next post, we’ll start looking at some of those possible scenarios and we’ll start trying to define a priority for them as well…

References

Cheng, V. (n.d.). Monitor Group bankruptcy: The downfall. Retrieved from http://www.caseinterview.com/monitor-group-bankruptcy

Hutchinson, A. (November 13, 2012). Monitor Group: A failure of scenario planning. Retrieved from http://spendmatters.com/2012/11/13/monitor-group-a-failure-of-scenario-planning/

Roxburgh, C. (November 2009). The use and abuse of scenarios. Retrieved from http://www.mckinsey.com/insights/strategy/the_use_and_abuse_of_scenarios

Wade, W. (May 21, 2014). “Scenario planning”: Thinking differently about future innovation (and real-world applications). Retrieved from http://e.globis.jp/article/000363.html

RIP HitchBOT

I’m not sure what to make of this. Researchers in Port Credit, Ontario, created a “robot” based on the Flat Stanley principle and turned it loose about a year ago. Like Flat Stanley, HitchBOT relied on strangers to pick it up and transport it to a new destination. Like Flat Stanley, people took their pictures with it at interesting events and wrote about their experiences travelling with it.
HitchBOT travelled more than 6000 miles across Canada, visited Germany and the Netherlands, then began its journey across the United States with the goal of one day reaching San Francisco. The robot was built to help researchers answer the question, “Can robots trust humans?” Brigitte Deger-Smylie (Moynihan, 8/4/2015), a project manager for the HitchBOT experiment at Toronto’s Ryerson University, said they knew the robot could become damaged and had plans for “family members” to repair it if needed.
The three-foot-tall, 25-pound robot was a robot in name only, as it was literally built out of buckets with pool noodles for arms and legs. Because of privacy concerns, the machine did not have any real-time surveillance abilities. It could, however, respond to verbal input and take pictures, which it could post to its social media site along with GPS coordinates. Researchers could not operate the camera remotely.
Starting July 16, HitchBOT travelled through Massachusetts, Connecticut, Rhode Island, New York, and New Jersey. When it reached Philadelphia, though, it was decapitated and left on the side of the road on August 1. Knowing how attached people get to objects with faces, I wonder how the person who dropped it off last feels, knowing they were the last to interact with it. As a tester, I wonder how HitchBOT responded to being damaged, since it had at least rudimentary speech-processing abilities.
The researchers say there are several ways of looking at the “decapitation.” One way is to think, “Of course this happened; people are awful.” Another way is to think, “Americans are obnoxious, so of course it happened here.” Or worse, “Of course this happened in Philly, where fans once lashed out at Santa Claus.” The project team suggests that the problem was an isolated incident involving “one jerk” and that we should concentrate on the distance the machine covered and the number of “bucket list” items it completed before it was destroyed. Deger-Smylie (ibid) says the team learned a lot about how humans interact with robots in non-restricted, non-observed ways, and that those interactions were “overwhelmingly positive.”
This makes me wonder. Will it one day be a “hate crime” to destroy robots? Will protesters picket offices where chips are transplanted, effectively changing the robot from one “being” to another? If a robot hurts a human, will they all be recalled for retraining? Where do we draw the line between what is “alive” and what is not? Does that question even mean anything? Sherry Turkle at MIT is researching how “alive” robots have to be to serve as companions. I read a fascinating sci-fi novel back in the day called “Flight of the Dragonfly.” In it, a starship had a variety of AI personalities to help the crew maintain their sanity. The ship was damaged at one point and the crew had to abandon it, but they were afraid to leave the injured ship on its own to die. The ship reminded the crew that the devices they used were all extensions of itself and that the different voices it used were just fictions to help the humans interact with it. How are these “fictions” going to play out with people who already name their cars?
In the meantime, the Hacktory, a Philly-based art collective, is taking donations to rebuild HitchBOT and send it back on the road.

References:
Moynihan, T. (August 4, 2015). Parents of the decapitated HitchBOT say he will live on. Retrieved from http://www.wired.com/2015/08/parents-decapitated-hitchbot-say-will-live/

The cars, they are a coming!

It appears that driverless cars are becoming a hot commodity! Sadly, the US seems to be lagging dramatically behind China and Japan. Interestingly, many of the designs are following the drone methodology instead, keeping the vehicles piloted by a human “in the loop.” “Our idea is not that a car should totally replace the driver, but that it will give the driver freedom,” Yu Kai, Baidu’s top AI researcher, told the South China Morning Post (Wan and Nan, 4/20/2015). “So the car is intelligent enough to operate by itself, like a horse, and make decisions depending on different road situations.”

That equine analogy has been used before, notably by design guru Don Norman in a keynote speech last summer (Ross, 12/30/2014): “Loose reins: the horse is in control and takes you home. Tight reins are to force the horse to do things that are uncomfortable—but not dangerous.” After a hundred years of automotive progress, we’re still measuring the cars in relation to horses! 🙂

Baidu’s self-driving car project is on track, if you’ll pardon my pun, to hit Beijing highways by the end of the year, just as the company’s CEO had suggested.

References
Ross, P. (12/30/2014). Europe’s smart highway will shepherd cars from Rotterdam to Vienna. IEEE Spectrum. Retrieved from http://spectrum.ieee.org/transportation/advanced-cars/europes-smart-highway-will-shepherd-cars-from-rotterdam-to-vienna

Wan, A. and Nan, W. (4/20/2015). Baidu’s Yu Kai talks autonomous cars, artificial intelligence and the future of search. South China Morning Post. Retrieved from http://www.scmp.com/lifestyle/article/1767799/baidus-yu-kai-talks-autonomous-cars-artificial-intelligence-and-future

But FIRST, these messages

I went to a FIRST Robotics competition in Denver recently. This was the first time I’d been to one with high school students, so the robots were a little more complex. I usually get to judge at the FIRST Lego competitions, which are for elementary and middle school students.

My first thought was, “My goodness, these are much more complex tasks (and much more expensive robots)!” But after watching the competition for a while, I began to notice some issues with the approaches the different teams attempted. The teams get a six-week window to design and build their machines before sending them to the competition. Since design and building often take most of those six weeks, most robot handlers/drivers do not get to work with the machine much before arriving at the competition.

The strategy of the teams seemed to vary quite a bit. Some teams went after the one “difficult” and therefore point-laden task and did nothing the rest of the round. Other teams went after one or more of the simple tasks and really focused their robot design on that task.
For example, at this competition, the “difficult” task was to claim trash cans from a no-man’s-land in the center of the competition area. To be successful, a robot had to reach across the no-man’s-land and pull a trash can back to the working area for the other robots to stack. Only one team really tried this, and it was clear they hadn’t really tested their “hooking” process. Their robot had to be very carefully set up and aimed. It rolled forward about a foot, then dropped an arm down with the intent of grabbing the can. If it missed, and it always did, the team just parked there and waited for the round to end. Since their design clearly didn’t work, they basically sat out the rest of the team competitions. Building a mock-up of the arm and testing the drop by hand would surely have helped, and that bit of testing might have earned them points even without a full robot to test with.
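They didn’t even need hardware to learn the most important number. A ten-line simulation, with guessed-at dimensions like the ones below (both numbers are invented for illustration), would have told them how forgiving their hook needed to be:

```python
# Monte Carlo estimate of the hook's success rate given hand-aiming error.
# Measure your own robot's numbers; these are placeholders.

import random

HOOK_HALF_WIDTH_IN = 1.0  # hook catches within +/- 1 inch of the can handle
AIM_ERROR_SD_IN = 2.5     # standard deviation of the initial hand-aim

trials = 100_000
hits = sum(
    abs(random.gauss(0.0, AIM_ERROR_SD_IN)) <= HOOK_HALF_WIDTH_IN
    for _ in range(trials)
)
print(f"Estimated hook success rate: {hits / trials:.0%}")  # ~31%
```

A predicted one-in-three success rate, known before the robot ever ships, is exactly the kind of result that sends a team back to widen the hook.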

The other big task was to stack totes. Some of the designs for this were amazing! Several of the teams used a tall robot to pick up four individual totes and set them down already in a stacked pattern, instead of picking up individual totes and attempting to stack them on an existing stack. It was a clever idea, and I was surprised several of the teams arrived at this solution.

So why do I bring all this up, except, of course, that robots are cool? That’s certainly a part of it! But seriously, my early thought was that it takes a huge number of people to reset the robots each round, and that these are not technically robots by my definition. More importantly, testing of the robots should not be handled in such a cavalier fashion!
FIRST Lego League, which I have participated in and which is designed for younger students, has an army of judges who interact with each team, trying to figure out what the team’s strategy was or to encourage the teams to do better. At this competition, on the other hand, each round pitted one “Alliance” of three school teams, seemingly grouped arbitrarily, against another; each team appeared to have 4-10 students and an instructor/sponsor. The Alliance that won best two out of three moved on to the next round. Between rounds, a battalion of judges/resetters came out and reset the arena, and students came out and reset the robots themselves. As you can see in the video, it’s a fairly quick process, but it required a swarm of folks to pull off. Perhaps the death knell for American industry “peopled” by humans is a little further away than we have been led to believe?

Even more important to me, these machines were not robots, per se, as I would define them. The machines had a 20-second window at the beginning of the round to do something autonomous; the rest of the round, they were handled like RC vehicles. While drone makers would argue that RC vehicles are robots, I do not. In practice, many of the machines sat idle during the autonomous portion or simply positioned themselves for the RC portion of the round. This is understandable in that the build deadline is very short and the strategy for earning points is very important: complex or automated tasks are worth more, but there are a lot of simple tasks that can be done. But again, it implies that humans will remain “in the loop” for the foreseeable future.

A six-week build time is very ambitious, even for college students! I understand that the FIRST folks don’t want to interfere with the school year, per se, but I believe we are setting the kids up for failure in the long run. This process is much like our “code-like-hell” process in IT, where we sling code into the real world based on deadlines, not completeness, knowing we can patch it later. Surely we don’t want to teach people to ignore testing? Surely we don’t want to reinforce the idea that any idea is a good one? With something like 60% of all software projects failing to meet customer expectations, do we really want to showcase this behavior? Can you imagine having only six weeks to practice [insert collegiate sport here] before a competition amongst rival schools? A one-day event where all the athletes come together, for good or ill?

Actually, I want to compare robotics to athletics but that’s a post for another day! In the meantime, let me get off my soapbox, for now.

There’s an app for that!

Today’s post may go in a strange direction. To me, what I’m about to say is almost so obvious that it doesn’t need to be said. We’ll see.

This week I received a new piece of medical technology. It was ordered nearly a month ago by doctors whom I had to wait nearly two months to see. It was supposed to be delivered last week, but the company “forgot,” then labelled the equipment as patient pick-up even though it had to be installed in my home.

By the time I actually received the device, I was beginning to wonder just how important it was to actually have it, a somewhat negative view, I admit.

The delivery guy came out with the component and wheeled it in, but he did not know how to attach it to the other piece of equipment I had already received, as he hadn’t been trained on the new model I had. I asked him several questions about the machine’s purpose and how it performed its task. The driver wasn’t terribly sure and couldn’t call back to the office for help because their communication equipment wasn’t working (you don’t have a phone?). He made some claims about the equipment which I was able to disprove while he watched, and it was clear he was reading from a memorized script and didn’t know how the machine worked at all. I shooed him out and connected the machine myself after signing the waivers saying that if I died I wouldn’t sue the company or the delivery driver.

This whole fiasco reminded me of the washing machine debacle from around Thanksgiving. I had found a screaming deal on a new washer and dryer that could be stacked in my basement. I wasn’t going to be home for a few days, so I had them delivered a week later. The store forgot, even though I took time off to be available. They sent the machines out the next day with a driver who was not authorized to install them because they were gas. I argued with the delivery driver, who decided that company policy was to just bring the machines back to the shop if there was any problem. A week and several calls later, the same kid brought the machines back but still did not know how to install them, or even that they were stackable. The guys working in my basement helped the kids figure out how to install the machines, but in the meantime the kids had accidentally broken off a component of the washer’s drainage system.


So here are my observations, which are relevant to this blog.
1) If you don’t know what your job is, eventually a robot is going to replace you doing it. A robot cannot make decisions on its own, so it’s going to need failover decisions to fall back on when things go wrong (see the sketch after this list). The robot would have “reasonably” taken the washer and dryer back to the shop when confronted with a gas fixture or an irate homeowner. A human should be able to figure these things out.
2) Conversely, our society seems to be “dumbing things down” to the point where we can off-shore technical tasks by defining scripts that people can read to their customers, even if they don’t understand the content. If we have to do that, again, a robot can take over the task.
3) With the building backlash against off-shored tech support, banking, and who knows what all, the robots are going to have to make major strides forward to handle what we now seem to consider easily scriptable processes.
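To make point 1 concrete, here’s a toy sketch (entirely my illustration) of what a purely scripted failover policy looks like. Note how it defaults to the same “bring it back to the shop” behavior the human driver exhibited:

```python
# A scripted delivery "robot": every situation maps to a canned response,
# and anything off-script falls back to returning to the shop.

PLAYBOOK = {
    "clear install": "install and collect signature",
    "gas fixture": "return to shop",   # not certified for gas
    "irate customer": "return to shop",
    "unknown model": "return to shop",
}

def delivery_decision(situation: str) -> str:
    return PLAYBOOK.get(situation, "return to shop")  # default failover

for s in ["clear install", "gas fixture", "unknown model", "dog in yard"]:
    print(f"{s!r:16} -> {delivery_decision(s)}")
```

If a human worker’s decision table looks like this, that worker is already interchangeable with the robot.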

Maybe the robots aren’t going to come quite as quickly as we have been thinking!

I see I have a “futuring” course in the next year at school. I’m curious to see where that class thinks we may end up… and what part I have to play in the innovation.

Cybernetics in the Real World

I spent this weekend at COSine, a science fiction writers’ conference here in Colorado Springs. I participated in several panels about the state of the world, especially for cybernetics.

The conversations were quite lively but ended up being debates about “how cool the technology could be,” interspersed with discussion of whether we should “require” people to accept the augmentations. I suggested it wasn’t terribly different from the Borg (in Star Trek: The Next Generation) meeting new species and immediately implanting them with the technology that makes the Borg hive-mind work. The panelists likened the practice to forcing all deaf children to receive cochlear implants. A very spirited discussion ensued.

Afterwards, I apologized to the moderator for hijacking the discussion like that, and she said that while it was an interesting discussion, she was more intrigued by my “throwaway” question about how the augmented would be considered in our society:
Right now, there’s some stigma attached to people with artificial limbs, pacemakers, insulin pumps, and the like. People who augment themselves with drugs for performance are stricken from the record books because they aren’t “fair,” or, more accurately, not purely human.

And this leads back to the robot question. How do we determine what is “beneficial” and what is merely “useful”? How do we differentiate between things that help but pollute, for instance? These are tricky questions, and I am somewhat concerned about the outcome.

Testing, testing, 1, 2, 3…

For my first story, I’d like to share something that happened to a buddy of mine. He worked the help desk for a computer manufacturer and was quite used to getting “stupid”/”strange” calls from end users.
One particular day, he had a call escalated to him. The customer was complaining that his “screen shrunk.”
Being the professional he is, he started by asking clarifying questions: “Do you mean the icons on the screen?” (No, they’re correct) “Do you mean the monitor is too small?” (No, the monitor’s size is just fine) and so on.
They rebooted the monitor and the computer several times but couldn’t resolve the “shrinkage” issue. They eventually decided to send the customer a replacement monitor. When the original monitor returned to the factory, it was tested and no problem could be found with it. After this had happened several times, they decided it was important to send a tech to the man’s office to see the problem first-hand. They found that the customer was correct: periodically, the image on the screen “shrunk” amid a flurry of pixelation and static. They were puzzled for a moment.
I’ll tell you in a minute what the problem was, but I’ll give you this hint for now: moving the monitor and the desk it was sitting on to a different location 3 feet away solved the problem.
So what have we learned from this?
1) The customer is the customer. They might not be right but they are experiencing something with our product.
2) Ask questions to understand what the customer is saying. They may not have the vocabulary to explain what they see or may misunderstand what they are seeing. Doesn’t mean they aren’t seeing it.
3) Follow up. Ask the customer if the resolution is working. If not, escalate the problem so that the customer knows you aren’t just blowing him off.
4) Test the product, but know you’ll never account for everything that could happen to it. If the user finds a novel use for your product, either embrace it or don’t, but recognize that the customer thought of your product when trying to do what they needed done. That should mean something!

OK, the problem was that the monitor and computer were set against the wall of the office, and the other side of that wall was an elevator shaft, where the elevator’s huge iron counter-weights zoomed past periodically. The counter-weights had become magnetized over time, and their stray field distorted the monitor’s image every time they flew by. Moving the monitor away from the stray magnetic field fixed the problem.