What’s In a Name?

As I was walking back to my car after parking the float I had been driving in the Labor Day parade, I happened across an R2 unit on the side of the parade route, and I got to talk to its owners/builders. It turns out they were visiting from Mountain States Droid Builders, a fan club that builds robot replicas from the Star Wars universe. They were happy to demonstrate how they built their ’droid and invited me to join them at their work parties. We agreed that this R2 unit was technically more of an RC vehicle than a proper robot, but they had plans for adding robot skills to their device as time and money became available.
This led me to ponder, yet again, how to define the differences between the machines we’ve been talking about. I see five or six kinds of machines at the moment: RCs and drones, autonomous vehicles, human-assisted vehicles, industrial and other kinds of robots, and ’droids.

RC vehicles and drones are machines piloted by humans who are remote from the machine itself. The humans rely on information from sensors or on visual observation of the craft. The craft may have components that aid the human in various tasks, much like cruise control and autopilot do today.

Autonomous vehicles would be able to control their own actions, using sensors and other inputs to navigate safely from point to point. These vehicles may coordinate with external sources such as feedback from the road or from other vehicles.

Hybrid vehicles (I suppose we need a new word for this, since “hybrid” already means multiple fuel sources) would be semi-autonomous, able to make some decisions on their own but requiring a human in the loop to control the vehicle in certain situations, perhaps mandated by law, or when the computer can’t make a valid decision. Above I referred to these as “human-assisted.”

Industrial robots can repeat a set of tasks that they have been “trained” to do, and adding sensors allows the machine to receive feedback on its actions and make corrections. A robot that knows the part it should be working on is misaligned can re-align the part or ask for help instead of completing a task that is not going to be correct at the end.
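That correct-or-escalate loop can be sketched in a few lines. Everything here (the offset reading, the tolerance, the retry count, and how much each re-alignment pass helps) is invented for illustration:

```python
# Toy sketch of the feedback loop described above: check alignment,
# re-align a few times, and escalate rather than finish a bad part.
# The offset reading, tolerance, and retry count are all invented.

def process_part(offset_mm, tolerance_mm=0.5, max_retries=3):
    """Return what the robot should do with a part at the given offset."""
    for _ in range(max_retries):
        if abs(offset_mm) <= tolerance_mm:
            return "completed"
        offset_mm /= 10  # pretend each re-alignment pass shrinks the error
    return "ask for help"

print(process_part(0.2))     # within tolerance -> completed
print(process_part(5000.0))  # can't converge in time -> ask for help
```

The point is the third branch: a sensorless robot would happily finish the misaligned part, while this one knows when to stop and call a human.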

Robots are considered semi-autonomous at this point, depending on their decision-making skills.

Then there are Droids. All science fiction robots fit into this category, at least initially. Short for Android, these machines seem to have personalities and are able to perform their tasks with no supervision.

So here are my questions. Are these enough categories? Where does a typical Roomba fit into all of this? How about the current set of vehicles that can park themselves or brake to avoid accidents? What about a machine that can assist rescuers in an unsafe environment? How solid should the lines be between the categories? If the machine can be overridden by a human, does that mean it’s human-assisted? I don’t know the answers to these questions either, but I’m starting to research them because we need a better vocabulary to talk about these machines.

My Hometown

Today I participated in a Labor Day parade in a small community south of where I live. The community is a typical small-town kind of place with mostly older “Americana” kinds of streets: a big fire station on one end of town, a largish city hall where the road turns, and lots of little mom-and-pop stores in between. The parade included the kinds of things you’d expect: people running for office, marching bands, old cars, 4H and Scouting, and a variety of local businesses showing their civic pride. Our local Shriners community is pretty active and fielded a number of bands, a float, and a variety of car clubs.
I had been offered a ride in one of the Model Ts that the Shrine was going to parade, but I was asked instead to fill in driving for the Shriner who normally drives the float with the band on it. And this dichotomy is what I want to talk about today.
It started when the Model T met me at my house. It was decided that I would follow them to the parade route in my personal car so that I could leave again after the parade since the driver of the Model T wanted to stay in town and enjoy the after-parade festivities. So I’m following the Model T which has spent its entire life in Colorado. It’s a gorgeous machine. And there are parts of both towns that still look the same as they did when the vehicle was new. The pavement may be a different color and the traffic lights are new, but the downtown buildings have been here for as long as the car has.
I followed the Model T in my 2015 rental car with all digital readouts, listening to satellite radio. Their driver wasn’t sure how to get back to the main road from my house so his passenger called me on the cell phone and I used the GPS to route him where he wanted to go.
It was then that it struck me how many anachronisms we were dealing with here, and how many more we’re going to have to handle in the next “few” years as the autonomous cars begin actively driving on our roads and in our communities.
There’s going to be “some period” of time when both kinds of vehicles are on the streets. This period may be mandated by law or may just occur because some people will cling to their “classic” vehicles. Right now, “classic” means 25 years or older, but will there be a new term to distinguish human-controlled vehicles from autonomous ones? Will collectors help keep the “Golden Age” of automobiles alive?
The robot vehicles are going to have to compensate for the humans driving around them in some fashion. Right now, vehicles have lights and other devices to let the other human drivers know what the vehicle is likely to do. The cars of the future will probably talk to each other directly, allowing cars to safely operate closer to each other for instance. Right now we have spacing on the streets designed for human reaction time and comfort. If the vehicles can coordinate with each other, there’s no real reason they couldn’t be operating only a few inches from each other which might give us extra “lanes” at some point. Will we even need lanes in the future?
Conversely, if a vehicle breaks down in the middle of the road, will the autonomous vehicles be better able to maneuver around the obstacle than humans are today? Or will they do what so many tests show: if the lead car stops, the cars following it stop, too?
I don’t know the answer to these questions yet, but that’s what I’m hoping to help define. I know I’m not the only one thinking about these things and that makes me feel better already.

It’s a Metaphor!

Imagination-Celebration
A couple of years ago, I participated in the Imagination Celebration in downtown Colorado Springs. My team from Colorado Tech had fielded a group of SumoBots to show off STEM activities. A SumoBot is a small cubic robot, about 4 inches on a side, outfitted with a couple of IR sensors and programmed to find its opponent, then push it out of a ring drawn on the table. Interesting side note: the robots don’t do too well out in direct sunlight, for some reason 🙂 A little bit of shade made them work much better!
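The find-and-push behavior is essentially a three-branch control loop. Here is a rough sketch; the sensor inputs and motor outputs are hypothetical stand-ins for whatever the real hardware API looks like:

```python
# Sketch of a SumoBot control loop: search, charge, and don't drive out
# of the ring yourself. Sensor readings and motor interface are made up.

def step(opponent_seen, edge_detected):
    """Map one round of sensor readings to (left, right) motor speeds."""
    if edge_detected:
        return (-1.0, -1.0)  # back away from the ring's edge first
    if opponent_seen:
        return (1.0, 1.0)    # charge straight ahead and push
    return (0.5, -0.5)       # otherwise spin in place, scanning with the IR

print(step(opponent_seen=True, edge_detected=False))
```

Direct sunlight swamping the IR sensors would make `opponent_seen` unreliable, which matches what we saw at the event.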
Anyway, the kids came up to watch the robots doing their thing. Since there was nothing to do but watch, the kids didn’t really get involved. I rummaged in my backpack and found a yellow and an orange sticker which I placed on top of the ‘bots. I got the kids to start cheering on the “orange” ‘bot and it won. Encouraged, we continued to cheer the orange bot which won again and again. To the kids, their cheering was helping even though the engineers in the group knew there were no sound sensors on the ‘bot. For the first hour with the new colors, the orange ‘bot won about 95% of its matches, a statistical improbability. The kids were happy, the robots were doing their thing, but the tester in me was suspicious…
This all reminds me of a likely apocryphal story from the automotive industry (Gupta, 2007). A customer called GM soon after purchasing a new vehicle to complain that his vehicle “was allergic” to vanilla ice cream. Puzzled help personnel established that the vehicle was not ingesting the ice cream, but rather that when the customer purchased vanilla ice cream, and no other flavor, the car wouldn’t start to take him (and the ice cream) home to his family. The engineers, understandably, wrote the guy off as a kook, knowing there was no way for the vehicle to know, much less care, about the ice cream purchase.
The customer continued to call in and complain. Intrigued, the manufacturer sent an engineer out to investigate and, hopefully, calm the customer. Interestingly, the engineer was able to confirm the problem: when the customer bought vanilla ice cream, and no other flavor, the vehicle did not start. Knowing the make-up of the vehicle, the engineer conducted some tests and found that the real problem was vapor lock, which resolved itself if the customer bought a non-vanilla flavor. The store kept the vanilla up front because it sold so much of it; buying a different flavor meant walking further into the store, and the additional time allowed the vapor lock to clear.
Sherry Turkle (2012) at MIT found that senior citizens given a “companion robot” that looked like a stuffed seal would talk to it and interact with it as though it were alive. Researchers found that the residents often placed the robot in the bathtub, thinking its needs were similar to a seal’s. Though the story has been debunked by Snopes, the car owner determined that his vehicle “didn’t like” vanilla ice cream. We found similar behavior with the kids and the SumoBots: cheering the orange one led it to win. Investigation showed the orange robot had the attack software installed, while the yellow ’bot had line-following software installed instead. In all these instances, the humans interacted with the machines using a metaphor they understood: other living beings.
The lesson? The snarky answer is that customers don’t know what’s going on and are trying to describe what they see. They often lack the vocabulary to explain what the machine/software is doing. But they are describing behavior they believe they see, and the engineer needs to pay attention to the clues. Sometimes the customer does something that seems perfectly reasonable to them that the product designer didn’t think of. And sometimes the metaphor just doesn’t stretch to cover what is being done.

References:
Gupta, N. (October 17, 2007). Vanilla ice cream that puzzled General Motors. Retrieved from http://journal.naveeng.com/2007/10/17/vanilla-ice-cream-that-puzzled-general-motors/
Snopes.com (April 11, 2011). Cone of silence. Retrieved from http://www.snopes.com/autos/techno/icecream.asp
Turkle, S. (2012). Alone Together: Why we expect more from technology and less from each other. New York: Basic Books.

The Future Is So Bright, I Gotta Wear Shades…Or Blinders

I see autonomous vehicles being introduced in a serious way in the next 15-20 years. I don’t think they will be widespread per se, but I think they will be on the roads. I think this will spin off a fair number of competing/collaborative technologies.
I don’t think the vehicles will be like KITT in Knight Rider, but I don’t think they’ll be far off, after a while. Initially, your car will be waiting for you in the driveway. When you come out, it will start itself, position itself to pick you up if necessary, and ask your destination. After you tell it, the vehicle will check road conditions and plan the optimal route to get you to work, taking considerations such as your love of Starbucks drive-thrus or cell phone reception into account. While you take a nap, talk on the phone, or conduct a meeting, the vehicle will navigate the streets, avoiding pedestrians, other vehicles, and various obstacles. The vehicle will drop you off at work, then park itself, waiting for you to call for it again.
This idea has been around for decades.
And yet, I see some potential problems with the rollout. Most people like the idea of the vehicle driving itself so that they “don’t need to.” But people seem to fall into two camps about these vehicles as they currently exist: they either love the idea and can’t imagine that anything might go wrong, or they imagine the vehicles will lead to Terminator-type robots in the near future. On the one hand, people are already suggesting that insurance companies will go under once autonomous vehicles are common because accidents will be so rare. On the other, people are afraid we may end up with a fleet of machines like the movie vehicle Christine, bent on killing us all. The truth is somewhere in between, I imagine. On a slightly more serious note, sooner or later the vehicles will be forced to make a bad decision when confronted with a no-win scenario, such as hitting one of several objects. If an accident is unavoidable in heavy traffic on slick roads, will the vehicle hit the vehicle in front of it, let the one behind it hit it, or swerve and hit an oncoming car or pedestrian? How will the vehicle determine the “best thing” to hit? One potential solution would be to allow the vehicles to talk to each other so that they know the relative worth of each player. A vehicle with four people in it might be considered more valuable than a vehicle with only one occupant, according to the insurance people who would pay out the medical claims. Of course, if the vehicles are communicating like that, you know there will be an aftermarket device/mod that reports the vehicle has 10 people on board and therefore should not be hit!
And, as the machines become more “responsible”, they may report information to waiting police sentries concerning your health/sobriety, the vehicle’s health/maintenance, or number of “registered people” in the vehicle and whether they are wearing their seatbelts. Ack!
There might be some interesting activities which may occur if the vehicles are successful. Bus transportation (and other delivery type activities) should be easier to set up and maintain. Fewer vehicles may end up on the road as individual vehicles get “loaned out” during their idle times, meaning that an individual wouldn’t need to have exclusive rights to the machine if it is acting as a taxi/chauffeur. And if the vehicles become successful, the marketing will be about how relaxed you can be in the vehicle, how comfortable the seats are, how good the internet connections are, or how nice the stereo (or equivalent) is.
Or we’ll end up with machines working like a conveyor belt, following predetermined routes like buses do. And maybe swerving to run us down, since pedestrians won’t have the aftermarket “no hit” devices. 🙂

The Real World is a Messy Place

The Real World is a messy place, and that messiness needs to fit into the decision processes of our autonomous vehicles. In Acceptance Testing, we would certify that the machine was instructed to perform a specific task and it did, therefore we are happy. We could make a list of these tasks and check them off one by one:
a) The vehicle traveled from point A to point B without hitting anything, for example.
b) The vehicle swerved correctly to miss a person standing in the road.
c) The vehicle changed roads because there was an obstruction ahead.
And so on! There are a million scenarios to test if we want to be certain that the machine will operate correctly, regardless of the situation.
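Checklist items like these translate naturally into scenario-based acceptance tests. A minimal sketch, assuming a hypothetical `VehicleSim` stand-in for a real driving simulator, with positions reduced to integers along a route:

```python
# Hypothetical acceptance-test sketch. VehicleSim is a toy stand-in for
# a real simulator; each checklist item becomes one test function.

class VehicleSim:
    def __init__(self, obstacles=()):
        self.obstacles = set(obstacles)
        self.position = 0
        self.collisions = 0

    def drive_to(self, destination):
        while self.position < destination:
            nxt = self.position + 1
            if nxt in self.obstacles:
                self.obstacles.discard(nxt)  # toy "swerve" around it
            self.position = nxt
        return self.position

def test_a_to_b_without_hitting_anything():
    sim = VehicleSim()
    assert sim.drive_to(10) == 10 and sim.collisions == 0

def test_swerves_to_miss_person_in_road():
    sim = VehicleSim(obstacles={5})
    sim.drive_to(10)
    assert sim.collisions == 0

test_a_to_b_without_hitting_anything()
test_swerves_to_miss_person_in_road()
print("acceptance checklist passed")
```

Each test certifies exactly one scripted scenario, which is precisely the limitation: the suite only grows one hand-written scenario at a time.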
But what if we allowed other types of testing to become involved? White Box Testing would allow us to test the individual sensors, or try to feed them bad data specifically to confuse the vehicle. Exploratory Testing might ask “what-if” questions like “What if it’s snowing?” or “What if visibility is low?” or “What if the road was gravel?”
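Those “what-if” questions can be turned into a parameter sweep. A sketch using a simple physics estimate; the friction coefficients below are rough textbook-style values, not measured data:

```python
# Exploratory "what if?" sweep: how does stopping distance change with
# the road surface? The model is the simple v^2 / (2 * mu * g) estimate.

def stopping_distance_m(speed_mps, surface_friction):
    g = 9.81  # m/s^2
    return speed_mps ** 2 / (2 * surface_friction * g)

# "What if it's snowing?" "What if the road is gravel?"
for surface, mu in [("dry asphalt", 0.7), ("gravel", 0.5), ("snow", 0.2)]:
    print(f"{surface}: {stopping_distance_m(13.9, mu):.1f} m")  # ~50 km/h
```

Even this toy sweep shows why a pass on dry pavement says little about snow: the stopping distance roughly triples between the first and last rows.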
The Real World rarely has only one variable at play however, just ask anyone who has used a flight simulator. Eventually, the testing is going to have to try multiple problems at once. Let’s explore Hypothetical Scenario 17:

An autonomous vehicle is driving down an icy city side-street. There is a woman bundled against the cold, carrying a baby, attempting to cross the street. There are parked cars on both sides of the road. The vehicle in the on-coming lane abruptly turns left in front of our robot, perhaps into its own driveway. Our vehicle is going to hit something. What does the robot hit? How does it decide?
So far, all of our tests have been based on the idea that there is a “correct” answer for the robot to pick. What if that isn’t true? What will the robot do when faced with having to select a “bad” choice?

If a human were driving, s/he might reasonably select the target least likely to be damaged by their vehicle. They might select whatever is directly in front of them. They might swerve or perform another action specifically to avoid the pedestrian. The human would make this choice based, potentially, on experience, their own value judgements, or some other consideration. What does the robot do?
If we are concerned about this scenario, perhaps we need to design an algorithm for the robot. An auto insurance company might suggest hitting the thing with the least value, relatively speaking. A Yugo parked on the side of the road is much cheaper to repair than a Bentley. A moving vehicle with one occupant is potentially cheaper than a vehicle with four passengers, medical claims-wise. We’d have to come up with whatever our algorithm is and build in a way for the robot to learn the information which leads to the decision.
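Purely as an illustration of what such an algorithm might look like (every cost figure here is made up, and a real insurer’s model would be far more involved):

```python
# Invented-numbers sketch of a least-cost collision choice.

from dataclasses import dataclass

@dataclass
class Target:
    name: str
    repair_cost: float              # estimated property damage
    occupants: int
    injury_cost: float = 250_000.0  # assumed claim per occupant

    def expected_cost(self):
        return self.repair_cost + self.occupants * self.injury_cost

def choose_target(targets):
    """Pick the option with the lowest expected claim, Yugo-vs-Bentley style."""
    return min(targets, key=lambda t: t.expected_cost())

options = [
    Target("parked Yugo", repair_cost=2_000, occupants=0),
    Target("parked Bentley", repair_cost=150_000, occupants=0),
    Target("oncoming car", repair_cost=20_000, occupants=4),
]
print(choose_target(options).name)  # -> parked Yugo
```

Garbage in, garbage out applies: the choice is only as good as what each vehicle reports about itself, which is exactly where the spoofing worry comes in.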
One potential solution is to allow/require the vehicles to talk to each other. Car A reports that it has three passengers but good airbags. Car B reports that it is empty and a rental car. The pedestrian reports nothing. What does the robot hit now? Worse, if there is communication between the vehicles, how long until an after-market device is introduced claiming its vehicle is “expensive” by whatever scale? If everyone uses this device, we’re back to the original problem of having to make a bad decision with no data. Even worse, how long until there are websites (or whatever) which show backyard mechanics how to hack the decision-making process to protect their own vehicles/families? And even more exciting, if the cars are reporting on themselves, why can’t the police poll all the cars going by to see who’s speeding and not wearing their seatbelts and who hasn’t done timely “proper maintenance”?
And an even more troubling question is, who’s in trouble if the robot chooses a “bad” solution? The “driver” or the owner of the vehicle? Perhaps the builder, or the developer, or the tester? How will we know why the robot made the choice it did? Does it need to log the information in some way? If a human would have made a different decision, does that trump whatever the robot actually did? This is akin to telling Developers to make it work like the existing system/prototype without defining the actual requirements.
There’s going to be a period of transition as we bring these vehicles into reality. There will be some period of time, perhaps decades, where there will be autonomous, semi-autonomous, and human-driven vehicles all interacting on the same roads. We’re working hard to bring these vehicles to our streets, both in the US and elsewhere. So far, it seems that we’re excited that the robots can do simple tasks in controlled environments. But the Real World is not a controlled environment and maybe we need to think more about integrating messy, fragile, and litigious humans into the mix.

Who’s checking the requirements for the robots?

Comparison of Modeling Techniques to Generate Requirements for Autonomous Vehicles
The automotive industry is heavily regulated and has many years of experience defining what a vehicle is and what it should be capable of doing. For the new breed of autonomous vehicles coming to market, requirements-gathering efforts center on prototype behaviors, such as how a vehicle would behave if a human were operating it, perhaps slowing down for rough terrain or stopping to avoid hitting a pedestrian. These are typical “acceptance testing” examples, where we can certify that the product was intended to do something and it did, therefore the product is “good.”
Prototyping is a well-known tool for gathering requirements in both hardware and software development. However, there are other tools available which may allow us to build a more robust understanding of the final product. Tools like use cases, a variety of diagramming tools, and a variety of testing techniques allow us to more concretely define what the system shall do and when it shall do it. It is my belief that we need a clearer understanding of how the autonomous vehicle should behave in a variety of situations, including worst-case scenarios where damage will be done to the fragile and litigious human beings the vehicles will be operating around.
Overview of Topic
The automotive industry has a great deal of prototype awareness. Automakers have been building vehicles for decades and know how they operate in a variety of human-controlled situations. A human in the loop is able to infer problems and correct for them, drawing on lessons often learned from bad decisions, either their own or other people’s.
The seminal document for autonomous vehicles comes from the 1939 World’s Fair, where General Motors defined the vision for an autonomous vehicle that would self-navigate from your home to your place of business or shopping and back again. The vehicle (GM, 1939) would allow a typical family to ride in comfort, unconcerned by the world outside. Today, these machines are finally approaching reality, with a variety of companies like GM and Google creating and testing “driverless vehicles.”
Companies overseas are taking a different tack, attempting to keep humans “in the loop.” “Our idea is not that a car should totally replace the driver, but that it will give the driver freedom,” Yu Kai, Baidu’s top AI researcher, told the South China Morning Post (Wan & Nan, 2015). “So the car is intelligent enough to operate by itself, like a horse, and make decisions depending on different road situations.”
So far, these vehicles have mostly operated on test tracks with controlled environments, even at the DARPA level. Vehicle tests have mainly been conducted to show that a group of vehicles can operate “together” or can avoid simple obstacles. Less than half of the states in the US have contemplated laws for allowing such a vehicle on even limited roads. As these vehicles approach the public, we will need to certify them as road-worthy and safe.
Importance to the Field
Since these vehicles are intended to someday operate on public streets around fragile and litigious humans, they must be tested fully. To do that, the testers must have a set of requirements that covers the myriad capabilities of the vehicle so that the tests can be exhaustively run. Prototyped requirements often leave out details of execution, which could be fatal in this case, pun intended.
Is a Vehicle a Robot?
In the literature, we dream of robots that will clean the house (The Jetsons), drive us to work (I, Robot), run our errands (Red Dwarf), wage war (Terminator), and be our faithful companions (Star Wars). In real life, there are machines that do all these tasks, at least at a prototype level. Companies like iRobot, Boston Dynamics, and many others are working on robots to fill these needs and more. But the term “robot” really covers a variety of machine types and functions. Are all mechanical “beings” robots?
For the purposes of this discussion, let us define a robot as something between a mechanical construct like a 1980 Ford Pinto, which requires full human interaction to operate, and a presumably self-realized construct like R2-D2 in Star Wars. A robot should be able to make some decisions on its own but be governed by programming logic and rules.
This rather vague opening allows us to further discuss the parameters that define these machines. To be useful, the robot must do something. Let us further require that the robot move, and be able to do so under its own power and direction. Baichtal (2015) suggests that the robot must interact with its environment through sensors, programmed instructions, or human interaction.
At the moment, constructs like industrial manufacturing machines, the “sandwich robot” (JRK, 2011), and the next generation of automobiles which operate without direct human intervention fit this general viewpoint while machines like smart phones and vending machines do not.
To be useful, the robot must do something that benefits either a human or a process. This purpose generally defines the kinds of robots that exist. There are animatronic robots that entertain us, cleaning robots like the Roomba, assembly robots that build our cars and serve other industrial needs, combat robots that aid our soldiers, drones and ROVs that let a human operator remain safe behind the lines while directing their missions, food and beverage robots that make, inspect, and assemble our foods, and, of course, robots designed to interact as companions to people.
A robot is typically made up of a body placed over a chassis that contains mounting points for sensors, control systems, motors, manipulators, and a power supply. The body can be rigid or flexible and acts as a cover to protect the moving and/or fragile components of the robot. This “skin” layer can act as the barrier between the robot and the environment. If the body is the skin, the chassis could be considered the skeleton supporting the other components that make up the robot. Sensors bring stimulation from the environment to the robot and may include optical, tactile, or aural capabilities. The sensors feed into the control systems that determine how the robot will behave based on the environmental inputs. The behaviors may be to move the robot, manipulate something in the environment, interact with a human or other input source, or perform any of a myriad of other tasks.
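That sensors-feed-control-systems pipeline can be sketched as a minimal sense-decide-act step. The sensor names, readings, and behaviors below are invented purely for illustration:

```python
# Minimal sense -> decide -> act step. Sensors are modeled as callables;
# the specific readings and behaviors are invented for the sketch.

def control_system(readings):
    """Map environmental inputs to a behavior, per the description above."""
    if readings["touch"]:
        return "stop"          # tactile: we bumped into something
    if readings["light"] > 0.8:
        return "turn_away"     # optical: too bright ahead
    return "move_forward"

def robot_step(sensors):
    readings = {name: read() for name, read in sensors.items()}
    return control_system(readings)

sensors = {"touch": lambda: False, "light": lambda: 0.9}
print(robot_step(sensors))  # -> turn_away
```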
What is Testing?
Though testing was once the culmination of a successful development career, Juristo, Moreno, and Strigel (2006) lamented that the state of software testing is less advanced than other software techniques, which may be due in part to the psychological desire to gain satisfaction from creating a new product as opposed to testing something that already exists. They described how it is common practice in software companies to relegate software testing tasks to junior personnel. Juristo, Moreno, Vegas, and Shull (2009) documented 25 years’ worth of software engineering testing techniques, grouped into five categories: randomly generated test cases, where test administrators guess at potential errors and design test cases to cover them; functional testing, also known as Black Box Testing, in which testers examine possible input values and the outputs a correctly behaving system should produce for them; control-flow testing, one variety of White Box Testing, where test designers exploit knowledge of the internal order in which the code executes; data-flow testing, another variety of White Box Testing, where test cases are created to explore different executable paths, variable definitions, and the like; and mutation testing, in which versions of the code are generated to include deliberately injected faults; if the test cases detect the known faults in the “mutants,” then the same tests should detect natural faults as well. Their testing of testing techniques revealed that none of the five techniques was significantly more effective than any of the others.
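To make the last category concrete, here is a toy mutation-testing example; the function under test and the injected fault are invented for the sketch:

```python
# Toy mutation testing: run one test suite against the original function
# and a deliberately faulty "mutant"; an adequate suite kills the mutant.

def clamp(x, lo, hi):
    return max(lo, min(x, hi))

def clamp_mutant(x, lo, hi):
    return max(lo, min(x, lo))  # injected fault: hi replaced with lo

def suite(fn):
    """Return True if fn passes every test case."""
    cases = [((5, 0, 10), 5), ((-3, 0, 10), 0), ((42, 0, 10), 10)]
    return all(fn(*args) == want for args, want in cases)

print(suite(clamp))         # True: the original passes
print(suite(clamp_mutant))  # False: the suite detects the injected fault
```

If the suite had only tested values below the range, the mutant would have survived, which is exactly the signal that more test cases are needed.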
What I Hope to Learn / Problem Statement
Since we are using prototyped experience to define what vehicles should do, how do we ensure we have gathered all the requirements? There are a variety of tools for requirements gathering. Taken together, tools like use cases, design diagrams, and the questions posed by testing techniques give us a more complete image of the end result we hope to achieve.
Use Cases
Diagramming Tools
Testing Techniques
As noted above, Juristo, Moreno, Vegas, and Shull (2009) grouped 25 years of software testing techniques into five categories (randomly generated test cases, functional or Black Box testing, control-flow testing, data-flow testing, and mutation testing) and found that none of the five was significantly more effective than the others.
James Bach (2003) introduced a new concept into the testing lexicon by defining the “Exploratory School” of testing. With it, he believes testers should be free to define ad hoc tests based on the results they see, finding answers to the “I wonder what would happen if…” kinds of questions that seasoned testers ask. His method is meant to be used with the other tools, not instead of them.
Alsmadi (2002) developed AI algorithms for test case generation, execution, and verification. He explained that the test cases are selected to ensure test adequacy with the fewest test cases possible, and the user interface components are serialized to an XML file. This model transformation allows utilizing the user interface as a substitute for the requirements, which are often missing or unclear, especially for legacy software (Alsmadi, 2002).
Khoshgoftaar, Seliya, and Sundaresh (2006) noted that business resources allocated for software quality assurance and improvement have not kept pace with the complexity of software systems and the growing need for software quality. They claimed that a targeted software quality inspection can detect faulty modules and reduce the number of faults that occur during operation. They presented a software fault prediction modeling approach using case-based reasoning (CBR), quantifying the expected number of faults for a module under development based on similar modules developed previously.
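A heavily simplified sketch of the CBR idea, with invented module metrics, might average the fault counts of the k most similar past modules:

```python
# Toy case-based reasoning: predict faults for a new module from the k
# most similar previously developed modules. All metrics are invented.

def distance(a, b):
    """Euclidean distance between two metric tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_faults(history, new_metrics, k=2):
    """history: list of ((lines_of_code, complexity), observed_faults)."""
    nearest = sorted(history, key=lambda case: distance(case[0], new_metrics))[:k]
    return sum(faults for _, faults in nearest) / k

history = [
    ((120, 4), 1),
    ((900, 22), 9),
    ((400, 10), 3),
    ((850, 25), 8),
]
print(predict_faults(history, (880, 24)))  # averages the two most similar cases
```

A real CBR system would weight and normalize the metrics, but the core move is the same: reason from stored cases rather than from a fitted global model.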

References
Alsmadi, I. (2002). Using AI in software test automation [Electronic version]. Department of Computer Science and Information Technology, Yarmouk University, Jordan.
Bach, J. (4/16/2003). Exploratory Testing Explained. Retrieved from http://www.satisfice.com/articles/et-article.pdf
Baichtal, J. (2015). Robot Builder: The Beginner’s Guide to Building Robots. Indianapolis, IN: Que.
GM (1939). New Horizons. Retrieved from http://www.1939nyworldsfair.com/worlds_fair/wf_tour/zone-6/general_motors.htm
JRK (September 29, 2011). PR2 getting a sandwich. Retrieved from https://www.youtube.com/watch?v=RIYRQC2iBp0
Juristo, N., Moreno, A., Vegas, S. & Shull, F. (2009). A Look at 25 Years of Data. IEEE Software, 26(1), 15-17.
Khoshgoftaar, T., Seliya, N. & Sundaresh, N. (2006). An Empirical Study of Predicting Software Faults with Case-Based Reasoning. Software Quality Journal, 14, 85-110.
Wan, A. and Nan, W. (4/20/2015). Baidu’s Yu Kai talks autonomous cars, artificial intelligence and the future of search. South China Morning Post. Retrieved from http://www.scmp.com/lifestyle/article/1767799/baidus-yu-kai-talks-autonomous-cars-artificial-intelligence-and-future

Spare No Expense…

Dr. John Hammond described the safety measures employed in the original Jurassic Park as “sparing no expense,” but admitted, when things went wrong, that no one had reviewed the reviewers. The public deserves that level of review, and it is going to demand it.

My dissertation topic has remained focused on the idea that the coming generation of robots needs to be tested more robustly than we currently test them. It was, and is, my conjecture that by applying James Bach’s Exploratory approach, we can define the new tests that are needed, to complement the more traditional decision-tree kinds of approaches typically used in Acceptance Testing. When I started this project, that was a hard sell. Even my Dean said it was a silly topic with too much science fiction content. That was the better part of five years ago.

Nowadays, several companies, including Google, Boston Dynamics, and GM, are showcasing robots that may soon be interacting with us, though most hasten to add that the robots are to be “service oriented.” Today, more voices are joining mine. More voices in the mainstream media are asking just how safe these robots are going to be. So far, the response has amounted to “don’t worry, we’ve got this.”

So far, vehicle tests have mostly been conducted to show that a group of vehicles can operate “together” while safely maneuvering a test track. But a smart-road project being developed in Europe steps beyond the test track, and a new vehicle is being released on the roads of Beijing this year as well.

The Cooperative ITS Corridor in Europe will “harmonize smart-road standards” (Ross, P. 12/30/2014) and allow researchers to explore how vehicles of the future will interact. The road relies on Wi-Fi signals to communicate with all the cars on it. Since so few vehicles are actually set up to receive these signals, the project will allow researchers to test theories about how differently equipped vehicles will interact.

The Chinese solution, on the other hand, assumes that a human will remain “in the loop.” “Our idea is not that a car should totally replace the driver, but that it will give the driver freedom,” says Yu Kai, top AI researcher at Chinese manufacturer Baidu (Wan, A. and Nan, W. 4/20/2015). He suggests “…the car is intelligent enough to operate by itself, like a horse, and make decisions depending on different road situations” (ibid).

Earlier this week, Dubai announced it was looking at rolling out “robo-cops” some time in the next few years, hopefully in time for Expo 2020. Khalid Nasser Alrazooqi, Chief Information Officer and General Director of the Dubai Police HQ Services Department, says the move will help the force deal with an ever-increasing populace. The robots will be part of the Smart City Initiative and will allow organizers to provide “better services” without hiring more people. Alrazooqi says the robots will initially be used in malls and other public areas to increase police presence. At first, “the robots will interact directly with people and tourists. They will include an interactive screen and microphone connected to the Dubai Police call center. People will be able to ask questions and make complaints, but they will also have fun interacting with the robots.” (Khaleej Times, 4/28/2015)

Alrazooqi says he hopes the robots will evolve quickly and that four or five years from now, the Dubai Police will be able to field autonomous robots that require no input from humans. “These will be fully intelligent robots that can interact with people, with no human intervention at all.” (ibid).

To me, this is scary stuff! I get the desire to move these things into the public sphere, because there’s money to be made there, especially if you’re “first to market.” But there’s also a huge chance of killing this nascent industry if something goes wrong. So far, Acceptance Testing has been “good enough” for the kinds of experiences the vehicles have had. As we move these machines into the public arena and allow them to interact with “fragile and litigious” human beings, the testing must get more robust, and the public must be made aware of the kinds of testing that have been done.

References:
Khaleej Times. (4/28/2015). Smart policing to come: Dubai robo-cops. Retrieved from https://en-maktoob.news.yahoo.com/smart-policing-come-dubai-robo-cops-055522487.html

Ross, P. (12/30/2014). Europe’s Smart Highway Will Shepherd Cars from Rotterdam to Vienna. IEEE Spectrum. Retrieved from http://spectrum.ieee.org/transportation/advanced-cars/europes-smart-highway-will-shepherd-cars-from-rotterdam-to-vienna

Wan, A. and Nan, W. (4/20/2015). Baidu’s Yu Kai talks autonomous cars, artificial intelligence and the future of search. South China Morning Post. Retrieved from http://www.scmp.com/lifestyle/article/1767799/baidus-yu-kai-talks-autonomous-cars-artificial-intelligence-and-future

But FIRST, these messages

I went to a FIRST Robotics competition in Denver recently. This was the first time I attended one for high school students, so the robots were a little more complex. I usually judge at the FIRST Lego competitions, which are for elementary and middle school students.

My first thought was “my goodness,” these are much more complex tasks (and much more expensive robots)! But after watching the competition for a while, I began to notice some issues with the approaches the different teams attempted. The teams get a six-week window to design and build their machines before sending them to the competition. Since designing and building often takes most of the six weeks, most robot handlers/drivers do not get to work with the machine much before arriving at the competition.

The strategy of the teams seemed to vary quite a bit. Some teams went after the one “difficult” and therefore point-laden task and did nothing the rest of the round. Other teams went after one or more of the simple tasks and really focused their robot design on that task.
For example, at this competition, the “difficult” task was to claim trash cans from a no man’s land in the center of the competition area. To be successful, a robot had to reach across the no man’s land and pull a trash can back to the working area for the other robots to stack. Only one team really tried to do this, and it was clear they hadn’t really tested their “hooking” process. Their robot had to be very carefully set up and aimed. It rolled forward about a foot, then dropped an arm down with the intent of grabbing the can. If it missed, and it always did, the team just parked there and waited for the round to end. Since their design clearly didn’t work, they basically sat out the rest of the team competitions. Surely building a mock-up of the arm and testing the drop the way the robot would perform it would have helped. That bit of testing might have earned them points, even without a full robot to test with.

The other big task was to stack totes. Some of the designs for how to do this were amazing! Several of the teams used a tall robot to pick up four individual totes, then set them down already in a stacked pattern, instead of picking up individual totes and attempting to stack them on an existing stack. It was a clever idea, and I was surprised several of the teams used this solution.

So why do I bring this all up, except, of course, that robots are cool? That’s certainly a part of it! But seriously, my early thought was that it takes a huge number of people to reset the robots each round. And these are not technically robots by my definition. More importantly, testing of the robots should not be handled in such a cavalier fashion!
FIRST Lego League, which I have participated in and which is designed for younger students, has an army of judges who interact with each team, trying to figure out what the team’s strategy was and encouraging the teams to do better. At this competition, on the other hand, each round pitted an “Alliance” of three teams, seemingly grouped arbitrarily, against another Alliance of three. The Alliance that won the best two out of three moved on to the next round. An Alliance was made up of three school teams, each containing what appeared to be 4-10 students and their instructor/sponsor. In between rounds, a battalion of judges/resetters came out and reset the arena for the next round, while students came out and reset the robots themselves. As you can see in the video, it’s a fairly quick process, but it required a swarm of folks to pull off. Perhaps the death knell for American industry “peopled” by humans is a little further away than we have been led to believe?

Even more important to me, these machines were not robots, per se, as I would define them. The machines had a 20-second window at the beginning of the round to do something autonomous. For the rest of the round, the machines were handled like RC vehicles. While drone makers would argue that RC vehicles are robots, I do not. In practice, many of the machines sat idle during the autonomous portion or simply positioned themselves for the RC portion of the round. This is understandable, in that the build deadline is very short and the strategy for earning points is very important. Complex or automated tasks are worth more, but there are a lot of simple tasks that can be done. Again, though, it implies that humans will remain “in the loop” for the foreseeable future.

A six-week build time is very ambitious, even for college students! I understand that the FIRST folks don’t want to interfere with the school year, per se, but I believe we are setting the kids up for failure in the long run. This process is much like our “code-like-hell” process in IT, where we sling code into the real world based on deadlines rather than completeness, knowing we can patch it later. Surely we don’t want to teach people to ignore testing? Surely we don’t want to reinforce the idea that any idea is a good one? With something like 60% of all software projects failing to meet customer expectations, do we really want to showcase this behavior? Can you imagine having only six weeks to practice [insert collegiate sport here] before a competition amongst rival schools? A one-day event where all the athletes come together, for good or ill?

Actually, I want to compare robotics to athletics but that’s a post for another day! In the meantime, let me get off my soapbox, for now.

What’s a robot?

Baichtal (2015) reminds us that “Pretty much everyone loves robots. It’s a fact!” In fiction, we dream of robots that will clean the house (The Jetsons), drive us to work (I, Robot), run our errands (Red Dwarf), wage war (Terminator), and be our faithful companions (Star Wars). In real life, there are machines that do all these tasks, at least at a prototype level. Companies like iRobot, Boston Dynamics, and many others are working on robots to fill these needs and more. But the term “robot” really covers a variety of machine types and functions. Are all mechanical “beings” robots?
For the purposes of this discussion, let us define a robot as something between a mechanical construct like a 1980 Ford Pinto, which requires full human interaction to perform, and a presumably self-realized construct like R2D2 in Star Wars. A robot should be able to make some decisions on its own but be governed by programming logic and rules.
This rather vague opening allows us to further discuss the parameters that define these machines. To be useful, the robot must do something. Let us further require that the robot move, and be able to do so under its own power and direction. Baichtal (2015) suggests that the robot must interact with its environment through sensors, programmed instructions, or human interaction.
At the moment, constructs like industrial manufacturing machines, the “sandwich robot” (JRK, 2011), and the next generation of automobiles fit this general viewpoint while machines like smart phones and vending machines do not.
To be useful, the robot must do something that benefits either a human or a process. This purpose generally defines the kinds of robots that exist: animatronic robots that entertain us, cleaning robots like the Roomba, assembly robots that build our cars and serve other industrial needs, combat robots that aid our soldiers, drones and ROVs that let a human operator remain safe behind the lines while directing a mission, food and beverage robots that make, inspect, and assemble our foods, and, of course, robots designed to interact as companions to people.
A robot is typically made up of a body placed over a chassis that provides mounting points for sensors, control systems, motors, manipulators, and a power supply. The body can be rigid or flexible and acts as a cover to protect the moving and/or fragile components of the robot. This “skin” layer can act as the barrier between the robot and the environment. If the body is the skin, the chassis could be considered the skeleton supporting the other components that make up the robot. Sensors bring stimuli from the environment to the robot and may include optical, tactile, or aural capabilities. The sensors feed into the control systems that determine how the robot will behave based on environmental inputs. The behaviors may be to move the robot, manipulate something in the environment, interact with a human or other input source, or any of a myriad of other tasks.
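The sensor-to-control-system pipeline above can be reduced to a toy control loop. Everything here is simulated: the “environment” is a dictionary and the “sensor” a lookup, stand-ins for real hardware interfaces.

```python
# A minimal sense -> decide -> act loop: the sensor feeds the control
# system, which selects a behavior (move forward or turn away).

def read_sensor(env, position):
    """Tactile/optical stand-in: is there an obstacle one step ahead?"""
    return env.get(position + 1, "clear") == "obstacle"

def control_step(env, position):
    """Control system: map the sensor reading to a behavior."""
    return "turn" if read_sensor(env, position) else "forward"

env = {3: "obstacle"}   # an obstacle sits at cell 3
position = 0
actions = []
for _ in range(4):
    action = control_step(env, position)
    actions.append(action)
    if action == "forward":
        position += 1   # only advance when the path ahead is clear

print(actions)  # ['forward', 'forward', 'turn', 'turn']
```

Real control systems are vastly more involved, but the shape is the same: stimulus in, decision made, behavior out, repeated continuously.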

References

Baichtal, J. (2015). Robot Builder: The Beginner’s Guide to Building Robots. Indianapolis, IN: Que.
JRK (September 29, 2011). PR2 getting a sandwich. Retrieved from https://www.youtube.com/watch?v=RIYRQC2iBp0

Testing Our New Robot Overlords

I was confronted with the scope of what I am talking about recently when discussing my general research efforts:

The robots will be “self-learning,” and we will want to stop them from learning things that are not relevant or that we don’t want them to know. We might want a robot that makes sandwiches to know about foods and allergies and presentation and cultural items related to food, but we probably don’t want it to know about politics or voting or travel or whatever… Whatever we do to “focus” the robot’s learning will eventually be applied to our children, which might bring us back to the days of tests that determine your career future…

If the robot makes a “bad decision” based on what it has learned, or because it had to choose between two bad choices, who is to blame? The tester? The coder? The builder? The owner? The robot? How would we even punish the robot? And how is this really different from blaming the parents or the teachers when kids do dumb things?

If the robot is a car, and it has to decide which other car to hit because it’s going to be in an accident, how does it determine which car is the better target? Will the robot know something about the relative “worth” of the other vehicles? Their cargos? Is a car with four people in it worth more than a car with one? The insurance people would argue yes, I imagine. And if the robot has this worthiness data, how long will it be before an after-market device is sold that tells the robot that this particular vehicle is “high value,” in an effort to protect individual property?

I realize this is all outside the scope of what I’m doing, and that greater minds than mine will have to address these issues, especially the ones that touch the human condition! But it’s sobering to see how large this field could become. This is not just testing a new phone app!