Who’s checking the requirements for the robots?

Comparison of Modeling Techniques to Generate Requirements for Autonomous Vehicles
The automotive industry is heavily regulated and has many years of experience defining what a vehicle is and what it should be capable of doing. For the new breed of autonomous vehicles coming to market, requirements-gathering efforts center on prototyped behaviors: how the vehicle would behave if a human were operating it, perhaps slowing for rough terrain or stopping to avoid hitting a pedestrian. These are typical “acceptance testing” examples, where we certify that the product was intended to do something and it did, therefore the product is “good.”
Prototyping is a well-known tool for gathering requirements in both hardware and software development. However, there are other tools available which may allow us to build more robust understandings of the final product. Tools like use cases, a variety of diagramming tools, and a variety of testing techniques allow us to more concretely define what the system shall do and when it shall do it. It is my belief that we need a clearer understanding of how the autonomous vehicle should behave in a variety of situations, including worst-case scenarios where damage will be done to the fragile and litigious human beings the vehicles will be operating around.
Overview of Topic
The automotive industry has a great deal of prototype awareness. Manufacturers have been building vehicles for decades and know how they operate in a variety of human-controlled situations. A human in the loop can infer problems and correct for them, drawing on lessons often learned from bad decision making, either their own or others’.
The seminal document for autonomous vehicles comes from the 1939 World’s Fair, where General Motors defined the vision for an autonomous vehicle that would self-navigate from your home to your place of business or shopping and back again. The vehicle (GM, 1939) would allow a typical family to ride in comfort, unconcerned by the world outside. Today, these machines are finally approaching reality, with a variety of companies like GM and Google creating and testing “driverless vehicles.”
Companies overseas are taking a different tack, attempting to keep humans “in the loop.” “Our idea is not that a car should totally replace the driver, but that it will give the driver freedom,” Yu Kai, Baidu’s top AI researcher, told the South China Morning Post (Wan, A. and Nan, W., 4/20/2015). “So the car is intelligent enough to operate by itself, like a horse, and make decisions depending on different road situations.”
So far, these vehicles have mostly operated on test tracks with controlled environments, even at the DARPA level. So far, vehicle tests have mainly been conducted to show that a group of vehicles can operate “together” or can avoid simple obstacles. So far, less than half of the states in the US have contemplated laws for allowing such a vehicle on even limited roads. As these vehicles approach the public, we will need to certify them as road-worthy and safe.
Importance to the field
Since these vehicles are intended to someday operate on public streets around fragile and litigious humans, they must be tested fully. To do that, the testers must have a set of requirements that covers the myriad capabilities of the vehicle so that the tests can be exhaustively run. Prototyped requirements often leave out details of execution which, pun intended, could be fatal in this case.
Is a Vehicle a Robot?
In the literature, we dream of robots that will clean the house (The Jetsons), drive us to work (I, Robot), run our errands (Red Dwarf), wage war (Terminator), and be our faithful companions (Star Wars). In real life, there are machines that do all these tasks, at least at a prototype level. Companies like iRobot, Boston Dynamics, and many others are working on robots to fill these needs and many more. But the term “robot” really covers a variety of machine types and functions. Are all mechanical “beings” robots?
For the purposes of this discussion, let us define robots as a state between a mechanical construct like a 1980 Ford Pinto that requires full human interaction to perform and a presumably self-realized construct like R2D2 in Star Wars. A robot should also be able to make some decisions on its own but be governed by programming logic and rules.
This rather vague opening allows us to further discuss the parameters that define these machines. To be useful, the robot must do something. Let us further define that the robot must move and be able to do so under its own power and direction. Baichtal (2015) suggests that the robot must interact with its environment through sensors, programmed instructions, or human interaction.
At the moment, constructs like industrial manufacturing machines, the “sandwich robot” (JRK, 2011), and the next generation of automobiles which operate without direct human intervention fit this general viewpoint while machines like smart phones and vending machines do not.
To be useful, the robot must do something that benefits either a human or a process. This purpose generally defines the kinds of robots that exist. There are animatronic robots that entertain us, cleaning robots like the Roomba, assembly robots that build our cars and serve other industrial needs, combat robots that aid our soldiers, drones and ROVs that let a human operator remain safe behind the lines while directing their missions, food and beverage robots that make, inspect, and assemble our foods, and, of course, robots designed to interact as companions to people.
A robot is typically made up of a body placed over a chassis that contains mounting equipment for sensors, control systems, motors, manipulators, and a power supply. The body can be rigid or flexible and acts as a cover to protect the moving and/or fragile components of the robot. This “skin” layer can act as the barrier between the robot and the environment. If the body is considered the skin, the chassis could be considered the skeleton supporting the other components that make up the robot. Sensors work to bring stimuli from the environment to the robot and may include optical, tactile, or aural capabilities. The sensors feed into the control systems that determine how the robot will behave based on the environmental inputs. The behaviors may be to move the robot, manipulate something in the environment, interact with a human or other input source, or any of a myriad of other tasks.
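The sensor-to-control-system pipeline described above can be sketched in code. This is only an illustrative sketch; the sensor fields, thresholds, and behavior names are all invented for this example, not drawn from any particular robot.

```python
# Illustrative sketch only: a minimal sense -> control -> behavior loop.
# All class, field, and behavior names here are hypothetical.
from dataclasses import dataclass


@dataclass
class SensorReading:
    obstacle_distance_m: float  # e.g., from an optical or ultrasonic sensor
    bump_detected: bool         # tactile input


def control_step(reading: SensorReading) -> str:
    """Map environmental inputs to a behavior, per simple programmed rules."""
    if reading.bump_detected:
        return "reverse"   # tactile override: back away
    if reading.obstacle_distance_m < 0.5:
        return "turn"      # avoid the obstacle ahead
    return "forward"       # default behavior: keep moving


# One pass through the loop: a sensor reading feeds the control system,
# which selects the robot's next behavior.
print(control_step(SensorReading(obstacle_distance_m=0.3, bump_detected=False)))  # turn
```

The point of the sketch is the structure, not the rules: the sensors produce inputs, the control system maps them to behaviors, and the behaviors drive the motors and manipulators.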
What is Testing?
Though testing was once the culmination of a successful development career, Juristo, Moreno, and Strigel (2006) lamented that the state of software testing is less advanced than that of other software techniques, which may be due in part to the psychological desire to gain satisfaction from creating a new product as opposed to testing something that already exists. They described how it is common practice in software companies to relegate software testing tasks to junior personnel. Juristo, Moreno, Vegas, and Shull (2009) documented 25 years' worth of software engineering testing techniques, grouped into five categories: randomly generated test cases, where test administrators guess at potential errors and design test cases to cover them; functional testing, also known as Black Box Testing, in which testers examine possible input values and determine the outputs a correctly behaving system should produce for them; control-flow testing, one variety of White Box Testing, where test designers exploit knowledge of the internal order in which the code executes; data-flow testing, another variety of White Box Testing, where test cases are created to explore different executable paths, variable definitions, and the like; and mutation testing, in which versions of the code are generated to include deliberately injected faults, on the premise that if the test cases detect the known faults in the “mutants,” the same tests should detect natural faults as well. Their testing of testing techniques revealed that none of the five techniques was significantly more effective than any of the others.
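As a toy illustration of two of those categories (my own example, not drawn from the paper), consider a small function tested first as a black box and then against a deliberately injected “mutant”:

```python
# Hedged sketch: the function, its tests, and the "mutant" are all
# invented for this illustration.

def clamp_speed(speed: float, limit: float) -> float:
    """Return speed capped at limit (and never negative)."""
    return max(0.0, min(speed, limit))


# Functional (black-box) testing: pick input values and the outputs a
# correctly behaving system should produce, without reading the code.
assert clamp_speed(30.0, 55.0) == 30.0   # under the limit: unchanged
assert clamp_speed(80.0, 55.0) == 55.0   # over the limit: capped
assert clamp_speed(-5.0, 55.0) == 0.0    # negative input: floored at zero


# Mutation testing: deliberately inject a fault and check whether the
# same test cases would detect it. Here the mutant drops the lower bound.
def clamp_speed_mutant(speed: float, limit: float) -> float:
    return min(speed, limit)  # injected fault: no max(0.0, ...)


# The negative-input test case catches this mutant; a suite that kills
# mutants is expected to catch similar naturally occurring faults.
assert clamp_speed_mutant(-5.0, 55.0) != 0.0
```

Control-flow and data-flow testing would instead derive the test cases from the branches and variable uses inside `clamp_speed`, which is what makes them White Box techniques.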
What I hope to learn / Problem Statement
Since we are using prototyped experience to define what vehicles should do, how do we ensure we have gathered all the requirements? There are a variety of tools for requirements gathering. Taken together, tools like use cases, design diagrams, and the questions posed by testing techniques give us a more complete image of the end result we hope to achieve.
Use Cases
Diagramming Tools
Testing Techniques
As noted above, Juristo, Moreno, Vegas, and Shull (2009) grouped 25 years' worth of software engineering testing techniques into five categories: randomly generated test cases, functional (Black Box) testing, control-flow testing, data-flow testing, and mutation testing. Their testing of testing techniques revealed that none of the five was significantly more effective than any of the others.
James Bach (2003) introduced a new concept into the Testing lexicon by defining the “Exploratory School” of testing. Using it, he believes testers should be free to define ad hoc tests based on results seen and find answers to “I wonder what would happen if…” kinds of questions that seasoned testers may ask. His method is to be used with the other tools, not instead of the other tools.
Alsmadi (2002) developed AI algorithms for test case generation, execution, and verification. He explained that test cases are selected to ensure test adequacy with the fewest test cases possible, and that the user interface components are serialized to an XML file. This model transformation allows the user interface to serve as a substitute for the requirements, which are often missing or unclear, especially for legacy software (Alsmadi, 2002).
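A minimal sketch of that idea might look like the following. The XML schema, widget names, and test-case wording are all hypothetical; Alsmadi’s actual serialization format is not described here.

```python
# Hypothetical sketch: treat a serialized user interface as a stand-in
# for missing requirements by generating one test case per UI component.
import xml.etree.ElementTree as ET

UI_XML = """
<window name="LoginForm">
  <textbox name="username"/>
  <textbox name="password"/>
  <button name="submit"/>
</window>
"""


def generate_test_cases(ui_xml: str) -> list:
    """Derive a minimal test case for each serialized UI component."""
    root = ET.fromstring(ui_xml)
    cases = []
    for widget in root:
        if widget.tag == "textbox":
            cases.append(f"enter text into '{widget.get('name')}'")
        elif widget.tag == "button":
            cases.append(f"click '{widget.get('name')}'")
    return cases


for case in generate_test_cases(UI_XML):
    print(case)  # one generated test case per widget
```

The value of the transformation is that the XML model can be produced mechanically from a legacy application, even when no written requirements survive.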
Khoshgoftaar, Seliya, and Sundaresh (2006) noted that business resources allocated for software quality assurance and improvement have not kept pace with the complexity of software systems and the growing need for software quality. They claimed that a targeted software quality inspection can detect faulty modules and reduce the number of faults that occur during operation. They presented a software fault prediction modeling approach using case-based reasoning (CBR), which quantifies the expected number of faults in modules under development based on similar, previously developed modules.
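The general CBR idea, though not the authors’ actual model, can be sketched as a nearest-neighbor estimate over module metrics. The metrics and fault counts below are fabricated for illustration.

```python
# Hedged sketch of case-based reasoning (CBR) for fault prediction:
# estimate faults in a new module from the most similar previously
# developed modules. All numbers here are fabricated examples.
import math

# Case base: (lines_of_code, cyclomatic_complexity) -> observed faults
CASE_BASE = [
    ((120, 4), 1),
    ((950, 22), 9),
    ((400, 10), 3),
    ((1500, 35), 14),
]


def predict_faults(module, k=2):
    """Average the fault counts of the k nearest cases (Euclidean distance)."""
    ranked = sorted(CASE_BASE, key=lambda case: math.dist(case[0], module))
    nearest = ranked[:k]
    return sum(faults for _, faults in nearest) / k


# A new module resembling the small and mid-sized cases:
print(predict_faults((500, 12)))  # 2.0
```

A production model would of course use many more metrics and a tuned similarity function, but the retrieve-similar-cases-and-reuse-their-outcomes step is the heart of CBR.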

References
Alsmadi, I. (2002). Using AI in Software Test Automation [Electronic version]. Department of Computer Science and Information Technology, Yarmouk University, Jordan.
Bach, J. (4/16/2003). Exploratory Testing Explained. Retrieved from http://www.satisfice.com/articles/et-article.pdf
Baichtal, J. (2015). Robot Builder: The Beginner’s Guide to Building Robots. Que: Indianapolis, IN
GM (1939). New Horizons. Retrieved from http://www.1939nyworldsfair.com/worlds_fair/wf_tour/zone-6/general_motors.htm
JRK (September 29, 2011). PR2 getting a sandwich. Retrieved from https://www.youtube.com/watch?v=RIYRQC2iBp0
Juristo, N., Moreno, A., Vegas, S. & Shull, F. (2009). A Look at 25 Years of Data. IEEE Software, 26(1), 15-17.
Khoshgoftaar, T., Seliya, N. & Sundaresh, N. (2006). An Empirical Study of Predicting Software Faults with Case-Based Reasoning. Software Quality Journal, 14, 85-110.
Wan, A. and Nan, W. (4/20/2015). Baidu’s Yu Kai talks autonomous cars, artificial intelligence and the future of search. South China Morning Post. Retrieved from http://www.scmp.com/lifestyle/article/1767799/baidus-yu-kai-talks-autonomous-cars-artificial-intelligence-and-future

Spare No Expense…

Dr. John Hammond described the safety measures employed in the original Jurassic Park as “sparing no expense” but admitted, when things went wrong, that no one had reviewed the reviewers. The public deserves that review and is going to demand it.

My dissertation topic has remained focused on the idea that the coming generation of robots needs to be tested more robustly than we currently test. It was, and is, my conjecture that by applying James Bach’s Exploratory approach, we could define new, needed tests to complement the more traditional decision-tree approaches typically used in Acceptance Testing. When I started this project, that was a hard sell. Even my Dean said it was a silly topic with too much science fiction content. That was the better part of five years ago.

Nowadays, several companies, including Google, Boston Dynamics, and GM, are showcasing possible robots that may soon be interacting with us, though most hasten to add that the robots are to be “service oriented.” Today, more voices are joining mine, appearing in the mainstream media and asking just how safe these robots are going to be. So far, the response has amounted to “don’t worry, we’ve got this.”

So far, vehicle tests have been mostly conducted to show that a group of vehicles can operate “together,” while safely maneuvering a test track. There’s a smart-road project being developed in Europe this year that steps beyond the test track and a new vehicle being released on the roads of Beijing this year as well.

The Cooperative ITS Corridor in Europe will “harmonize smart-road standards” (Ross, P. 12/30/2014) and allow researchers to explore how vehicles of the future will interact. The road relies on Wi-Fi signals to communicate with all the cars on it. Since so few vehicles are actually set up to receive these signals, the project will allow the researchers to test theories about how differently equipped vehicles will interact.

The Chinese solution, on the other hand, assumes that a human will remain “in the loop.” “Our idea is not that a car should totally replace the driver, but that it will give the driver freedom,” says Yu Kai, Chinese manufacturer Baidu’s top AI researcher (Wan, A. and Nan, W. 4/20/2015). He suggests “…the car is intelligent enough to operate by itself, like a horse, and make decisions depending on different road situations.” (ibid)

Earlier this week, Dubai announced it was looking at rolling out “robo-cops” some time in the next few years, hopefully in time for Expo 2020. The Chief Information Officer and General Director of Dubai Police HQ Services Department says the move will help them deal with an ever-increasing populace. The robots will be part of the Smart City Initiative and will allow organizers to provide “better services” without hiring more people. General Khalid Nasser Alrazooqi says the robots will initially be used in malls and other public areas to increase police presence. At first, “the robots will interact directly with people and tourists. They will include an interactive screen and microphone connected to the Dubai Police call center. People will be able to ask questions and make complaints, but they will also have fun interacting with the robots.” (Khaleej Times, 4/28/2015)

Alrazooqi says he hopes the robots will evolve quickly and that four or five years from now, the Dubai Police will be able to field autonomous robots that require no input from humans. “These will be fully intelligent robots that can interact with people, with no human intervention at all.” (ibid).

To me, this is scary stuff! I get the desire to move these things into the public sphere: there’s money to be made there, especially if you’re “first to market.” But there’s also a huge chance of killing this nascent industry if something goes wrong. So far, Acceptance Testing has been “good enough” for the kinds of experiences the vehicles have had. As we move these machines into the public arena and allow them to interact with “fragile and litigious” human beings, the testing must get more robust, and the public must be made aware of the kinds of testing that have been done.

References:
Khaleej Times. (4/28/2015). Smart policing to come: Dubai robo-cops. Retrieved from https://en-maktoob.news.yahoo.com/smart-policing-come-dubai-robo-cops-055522487.html

Ross, P. (12/30/2014). Europe’s Smart Highway Will Shepherd Cars From Rotterdam to Vienna. IEEE Spectrum. Retrieved from http://spectrum.ieee.org/transportation/advanced-cars/europes-smart-highway-will-shepherd-cars-from-rotterdam-to-vienna

Wan, A. and Nan, W. (4/20/2015). Baidu’s Yu Kai talks autonomous cars, artificial intelligence and the future of search. South China Morning Post. Retrieved from http://www.scmp.com/lifestyle/article/1767799/baidus-yu-kai-talks-autonomous-cars-artificial-intelligence-and-future

The cars, they are a coming!

It appears that driverless cars are becoming a hot commodity! Sadly, the US is lagging dramatically behind China and Japan. But many of the designs are following the drone methodology instead, letting the vehicles be piloted by a human “in the loop.” “Our idea is not that a car should totally replace the driver, but that it will give the driver freedom,” Yu Kai, Baidu’s top AI researcher, told the South China Morning Post (Wan, A. and Nan, W., 4/20/2015). “So the car is intelligent enough to operate by itself, like a horse, and make decisions depending on different road situations.”

That equine analogy has been used before, notably by design guru Don Norman in a keynote speech last summer. (Ross, 12/30/2014) “Loose reins: the horse is in control and takes you home. Tight reins are to force the horse to do things that are uncomfortable—but not dangerous.”  After a hundred years of automotive progress, we’re still measuring the cars in relation to horses! 🙂

Baidu’s self-driving car project is on track, if you’ll pardon my pun, to hit Beijing highways by end of the year, just as the company’s CEO had suggested.

References
Ross, P. (12/30/2014). Europe’s Smart Highway Will Shepherd Cars From Rotterdam to Vienna. IEEE Spectrum. Retrieved from http://spectrum.ieee.org/transportation/advanced-cars/europes-smart-highway-will-shepherd-cars-from-rotterdam-to-vienna

Wan, A. and Nan, W. (4/20/2015). Baidu’s Yu Kai talks autonomous cars, artificial intelligence and the future of search. South China Morning Post. Retrieved from http://www.scmp.com/lifestyle/article/1767799/baidus-yu-kai-talks-autonomous-cars-artificial-intelligence-and-future

But FIRST, these messages

I went to a FIRST Robotics competition in Denver recently. This was the first one I have attended with high school students, so the robots were a little more complex. I usually get to judge at the FIRST Lego competitions, which are for elementary and middle school students.

My first thought was, “My goodness, these are much more complex tasks (and much more expensive robots)!” But after watching the competition for a while, I began to notice some issues with the approaches the different teams attempted. The teams get a six-week window to design and build their machines before sending them to the competition. Since design and building often take most of the six weeks, most robot handlers/drivers do not get to work with the machine much before arriving at the competition.

The strategy of the teams seemed to vary quite a bit. Some teams went after the one “difficult” and therefore point-laden task and did nothing the rest of the round. Other teams went after one or more of the simple tasks and really focused their robot design on that task.
For example, at this competition, the “difficult” task was to claim trash cans from a no man’s land in the center of the competition area. To be successful, a robot had to reach across the no man’s land and pull a trash can back to the working area for the other robots to stack. Only one team really tried to do this, and it was clear they hadn’t really tested their “hooking” process. Their robot had to be very carefully set up and aimed. It rolled forward about a foot, then dropped its arm down with the intent of grabbing the can. If it missed, and it always did, the team just parked there and waited for the round to end. Since their design clearly didn’t work, they basically sat out the rest of the team competitions. Surely building a mock-up of the arm and drop-testing it the way the robot would have could have helped, and that bit of testing might have let them score points even without a full robot to test with.

The other big task was to stack totes. Some of the designs for how to do this were amazing! Several of the teams used a tall robot to pick up four individual totes and then set them down already in a stacked pattern, instead of picking up individual totes and attempting to stack them on an existing stack. It was a clever idea, and I was surprised several of the teams used this solution.

So why do I bring this all up, except, of course, that robots are cool? That’s certainly a part of it! But seriously, my early thought was that it takes a huge number of people to reset the robots each round, and these are not technically robots by my definition. More importantly, testing of the robots should not be handled in such a cavalier fashion!
FIRST Lego League, which I have participated in and which is designed for younger students, has an army of judges who interact with each team, trying to figure out what the team strategy was or to encourage the teams to do better. At this competition, on the other hand, each round pitted an “Alliance” of three teams, though they seemed to be arbitrarily grouped, against three other teams. The Alliance that won best two out of three moved on to the next round. Each school team appeared to contain 4-10 students and their instructor/sponsor. In between rounds, a battalion of judges/resetters came out and reset the arena for the next round, while the students reset the robots themselves. As you can see in the video, it’s a fairly quick process, but it required a swarm of folks to pull off. Perhaps the death knell for American industry “peopled” by humans is a little further away than we have been led to believe?

Even more important to me, these machines were not robots, per se, as I would define them. The machines had a 20-second window at the beginning of the round to do something autonomous. The rest of the round, the machines were handled like RC vehicles. While drone makers would argue that RC vehicles are robots, I do not. In practice, many of the machines sat idle during the autonomous portion or simply positioned themselves for the RC portion of the round. This is understandable given that the build deadline is very short and the strategy for earning points is very important: complex or automated tasks are worth more, but there are a lot of simple tasks that can be done. Again, though, it implies that humans will remain “in the loop” for the foreseeable future.

A six-week build time is very ambitious, even for college students! I understand that the FIRST folks don’t want to interfere with the school year, per se, but I believe we are setting the kids up for failure in the long run. This process is much like our “code-like-hell” process in IT, where we sling code into the real world based on deadlines rather than completeness, knowing we can patch it later. Surely we don’t want to teach people to ignore testing? Surely we don’t want to reinforce the idea that any idea is a good one? With something like 60% of all software projects failing to meet customer expectations, do we really want to showcase this behavior? Can you imagine having only six weeks to practice [insert collegiate sport here] before a competition amongst rival schools? A one-day event where all the athletes come together, for good or ill?

Actually, I want to compare robotics to athletics but that’s a post for another day! In the meantime, let me get off my soapbox, for now.

What’s a robot?

Baichtal (2015) reminds us that “Pretty much everyone loves robots. It’s a fact!” In the literature, we dream of robots that will clean the house (The Jetsons), drive us to work (I, Robot), run our errands (Red Dwarf), wage war (Terminator), and be our faithful companions (Star Wars). In real life, there are machines that do all these tasks, at least at a prototype level. Companies like iRobot, Boston Dynamics, and many others are working on robots to fill these needs and many more. But the term “robot” really covers a variety of machine types and functions. Are all mechanical “beings” robots?
For the purposes of this discussion, let us define robots as a state between a mechanical construct like a 1980 Ford Pinto that requires full human interaction to perform and a presumably self-realized construct like R2D2 in Star Wars. A robot should also be able to make some decisions on its own but be governed by programming logic and rules.
This rather vague opening allows us to further discuss the parameters that define these machines. To be useful, the robot must do something. Let us further define that the robot must move and be able to do so under its own power and direction. Baichtal (2015) suggests that the robot must interact with its environment through sensors, programmed instructions, or human interaction.
At the moment, constructs like industrial manufacturing machines, the “sandwich robot” (JRK, 2011), and the next generation of automobiles fit this general viewpoint while machines like smart phones and vending machines do not.
To be useful, the robot must do something that benefits either a human or a process. This purpose generally defines the kinds of robots that exist. There are animatronic robots that entertain us, cleaning robots like the Roomba, assembly robots that build our cars and serve other industrial needs, combat robots that aid our soldiers, drones and ROVs that let a human operator remain safe behind the lines while directing their missions, food and beverage robots that make, inspect, and assemble our foods, and, of course, robots designed to interact as companions to people.
A robot is typically made up of a body placed over a chassis that contains mounting equipment for sensors, control systems, motors, manipulators, and a power supply. The body can be rigid or flexible and acts as a cover to protect the moving and/or fragile components of the robot. This “skin” layer can act as the barrier between the robot and the environment. If the body is considered the skin, the chassis could be considered the skeleton supporting the other components that make up the robot. Sensors work to bring stimuli from the environment to the robot and may include optical, tactile, or aural capabilities. The sensors feed into the control systems that determine how the robot will behave based on the environmental inputs. The behaviors may be to move the robot, manipulate something in the environment, interact with a human or other input source, or any of a myriad of other tasks.

References

Baichtal, J. (2015). Robot Builder: The Beginner’s Guide to Building Robots. Que: Indianapolis, IN
JRK (September 29, 2011). PR2 getting a sandwich. Retrieved from https://www.youtube.com/watch?v=RIYRQC2iBp0

Happy Day After!

I have not been good about posting to my blog this last week and I’m not exactly sure why.

A lot of crazy things have been going on but that’s not really newsworthy. That is kind of the standard state of things 🙂

School is still school. The job hunt continues. The house remodeling continues. And to complete the meme, “still single, still not king.”

But you know what? I have had to interact with a lot of, well, ignorant people in the last two weeks. People who, I would say, are “just following orders” or who don’t stop to think about what they say. The hold message at CenturyLink says, “…please tell the operator if you don’t want us to use (your private information) to offer you goods and services. This has no bearing on the goods and services we’ll offer.” What? I challenged the operator that the message didn’t make a bit of sense, and she said, “of course it does,” and was annoyed with me for questioning it. I can’t say these people are “stupid,” as they appear to be functioning members of society. They appear to have families and friends that love and support them. They appear to go about their daily routines.

But. Many of them seem oblivious to the surrounding world and to basic world views which differ from their own, like, say, the scientific method. 19th-century thought suggested that the entire world was knowable, either by observation or experimentation. Much of 21st-century thought seems to be based purely on our own opinions and observations. It’s as though social media has taught us that our opinion, even if based on nothing, is so important to share that we must constantly inform our friends of our merest thoughts. We see each other’s thoughts, which validate our own, and we are reinforced.

The conspiracy shows are all talking about how the MMR vaccine is likely to be the cause of the increase in autism. This is akin to the folks who earlier were saying, “well, this winter is the worst in recent history, therefore global warming is a hoax,” forgetting, of course, that our limited experience is not the sum total of all experiences. And I hate to point it out, but all of those new autism patients have also breathed air, drunk water, and eaten food in this country. I agree that autism rates are skyrocketing, but vaccines are not necessarily the culprit. Studies have been done that show they aren’t, but this “well, I just feel this is true” mentality doesn’t bother with facts.

It’s like we stopped exploring or thinking about the world when we were teens. Having worked with middle school students, I find that so much of world history makes more sense now, because early-teenaged people are very set in black-and-white thinking and in the idea that there can be only one way of thinking. Witness the bullying that kids do to each other, especially to that one kid who’s different. Joan of Arc was 14 when she became famous.

We bemoan the loss of creativity and STEM knowledge in our society, but it seems we’ve also lost a great deal of common sense and the ability to take a step back and actually look at a situation. Business schools pump out MBAs who have to research white papers without critiquing the authors. Anyone can write a white paper. Anyone can publish a study or an op-ed. I can’t tell you the number of dissertations I’ve read where I thought, “wow, these are the wrong questions for this topic” or “that solution has nothing to do with causality.” And yet, 100+ pages later, the author has proved that there may or may not be a connection between what s/he wanted to prove and what they researched.

And that brings me to my writing. Here I am. I’ll likely be a Doctor in Computer Science in the nearish future. I’ve read study after study. I’ve read the background material that is relevant to me and my research. I want to see further because I stand on the shoulders of giants. But I also know that it takes a creative spark to make a leap from what is known to what needs to be known. And I know that as I get a better view, I may divert from the course set out by my colleagues. I may have to blaze a new trail, but I can’t do it without observation and experimentation. I can’t blaze it without being able to tell people where I am and how I got there. I can follow my gut and explore, but if the data I have doesn’t shine on my path, I may need to check my maps. The method I follow, the scientific method, allows for course corrections; it encourages me to find the flaws in my own thinking and correct them as better or more complete knowledge becomes available.

Now, how can I teach the robot that? And more importantly, “should I?”

There’s an app for that!

Today’s post may go in a strange direction. To me, what I’m about to say is almost so obvious that it doesn’t need to be said. We’ll see.

This week I received a new piece of medical technology. It was ordered nearly a month ago by doctors whom I had waited nearly two months to see. It was supposed to be delivered last week, but the company “forgot,” then labelled the equipment as patient pick-up even though it had to be installed in my home.

By the time I actually received the device, I was beginning to wonder just how important it was to have it at all, a somewhat negative view, I admit.

The delivery guy came out with the component and wheeled it in, but he did not know how to attach it to the other piece of equipment I had already received, as he hadn’t been trained on the new model I had. I asked him several questions about the purpose of the machine and how it performed its task. The driver wasn’t terribly sure and couldn’t call back to the office for help because their communication equipment wasn’t working (you don’t have a phone?). He made some claims about the equipment which I was able to disprove while he watched, and it was clear he was reading from a memorized script and didn’t know how the machine worked at all. I shooed him out and connected the machine myself, after signing the waivers that if I died I wouldn’t sue the company or the delivery driver.

This whole fiasco reminded me of the washing machine debacle from around Thanksgiving. I had found a screaming deal on a new washer and dryer that could be stacked in my basement. I wasn’t going to be home for a few days, so I had them delivered a week later. The company forgot, even though I had taken time off to be available. They sent the machines out the next day with a driver who was not authorized to install them because they were gas. I argued with the delivery driver, who decided that company policy was to just bring the machines back to the shop if there was any problem. A week and several calls later, the same kid brought the machines back, but he still did not know how to install them or even that they were stackable. The guys working in my basement helped the kids figure out how to install the machines, but in the meantime the kids had accidentally broken off a component of the washer’s drainage system.

So here we go with my questions which are relevant to this article.
1) If you don’t know what your job is, eventually a robot is going to replace you at it. A robot cannot make judgment calls on its own, so it will need pre-programmed failover decisions for when things go wrong. The robot would have “reasonably” taken the washer and dryer back to the shop when confronted with a gas fixture or an irate homeowner. A human should be able to figure these things out.
2) Conversely, our society seems to be “dumbing things down” to the point where we can off-shore technical tasks by writing scripts that people can read to their customers, even if they don’t understand the content. If we have to do that, then again, a robot can take over the task.
3) With the building backlash against off-shored tech support, banking, and who knows what else, the robots are going to have to make major strides forward to handle what we now seem to consider easily scriptable processes.

Maybe the robots aren’t going to come quite as quickly as we have been thinking!

I see I have a “futuring” course in the next year at school. I’m curious to see where that class thinks we may end up… and what part I have to play in the innovation.

Testing Our New Robot Overlords

I was confronted with the scope of what I am talking about recently when discussing my general research efforts:

The robots will be “self learning” and we will want to stop them from learning things that are not relevant or that we don’t want them to know. We might want a robot who makes sandwiches to know about foods and allergies and presentation and cultural items related to food, but we probably don’t want it to know about politics or voting or travel or whatever… Whatever we do to “focus” the robot’s learning will eventually be applied to our children, which might bring us back to the days of tests that determine your career futures…

If the robot makes a “bad decision” based on what it has learned, or because it had to choose between two bad choices, who is to blame? The tester? The coder? The builder? The owner? The robot? How would we even punish the robot? And how is this really different from blaming the parents or the teachers when kids do dumb things?

If the robot is a car and it has to decide which other car to hit because it’s going to be in an accident, how does it determine which car is a better target? Will the robot know something about the relative “worth” of the other vehicles? Their cargoes? Is a car with 4 people in it worth more than a car with 1? The insurance people would argue yes, I imagine. If the robot has this worthiness data, how long will it be before an after-market device is sold that tells the robot this particular vehicle is “high value,” in an effort to protect individual property?
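To make the dilemma concrete, here is a minimal, purely hypothetical sketch of the kind of cost function such a vehicle might be forced to encode. Every field name and weight below is an assumption invented for illustration, not any real system’s logic:

```python
from dataclasses import dataclass

@dataclass
class Target:
    """A hypothetical candidate vehicle the robot could collide with."""
    name: str
    occupants: int          # people on board
    declared_value: float   # dollar "worth" reported by the vehicle itself

def collision_cost(t: Target, value_weight: float = 1e-5) -> float:
    # Assumed weighting: each occupant outweighs any plausible property value.
    # Note that declared_value is exactly the field an after-market
    # "high value" spoofing device would inflate to deflect the robot.
    return t.occupants + value_weight * t.declared_value

def pick_target(candidates: list[Target]) -> Target:
    # The robot hits whichever choice minimizes the cost it computes,
    # which is only as ethical as the numbers it was handed.
    return min(candidates, key=collision_cost)

cars = [Target("sedan", 1, 20_000.0), Target("minivan", 4, 30_000.0)]
print(pick_target(cars).name)  # the single-occupant sedan
```

The spoofing worry falls straight out of this sketch: if the sedan lies and declares a value of a million dollars, its cost jumps past the minivan’s and the robot swerves toward the four occupants instead.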

I realize this is all outside the scope of what I’m doing and that greater minds than mine have to address these issues. Especially on issues which address the human condition! But, it’s sobering to see how large this field could become. This is not just testing a new phone app!

Cybernetics in the Real World

I spent this weekend at COSine, a science fiction writer’s conference here in Colorado Springs. I participated in several panels about the state of the world, especially for cybernetics.

The conversations were quite lively but ended up being debates about “how cool the technology could be,” interspersed with discussion of whether we should “require” people to accept the augmentations. I suggested it wasn’t terribly different from the Borg (in Star Trek: The Next Generation) meeting new species and immediately implanting them with the technology that makes the Borg hive mind work. The panelists likened the practice to forcing all deaf children to receive cochlear implants. A very spirited discussion ensued.

Afterwards, I apologized to the moderator for hijacking the discussion like that, and she said that while it was an interesting discussion, she was more intrigued by my “throw-away” question about how the augmented would be considered in our society:
Right now, there’s some stigma attached to people with artificial limbs, pacemakers, insulin pumps, and the like. People who augment themselves with performance-enhancing drugs are stricken from the record books because they aren’t “fair,” or more accurately, not purely human.

And this leads back to the robot question. How do we determine what is “beneficial” and what is “useful”? How do we differentiate between things that help but pollute for instance? These are tricky questions and I am somewhat concerned about the outcome.

TED: Robots might or might not be amazing :)

Today’s thoughts come from a binge of watching TED Talks about robots. Some are filled with dire predictions and others are more hopeful. Some seem innocent enough but imply that the researcher hasn’t stopped to think about what the robot might do, or be used for, before developing the robot. A great deal of the discussion seems to be about robots learning and adapting on their own. And often how much faster the learning will go if the robots can talk to each other.
Lipson (2007) asked: how do you reward a robot? How do you encourage it to evolve? He created a cool spider-like robot with 4 legs and let it determine what its shape was and then how best to locomote. He said he had hoped it would develop some sort of “cool spider movement,” but instead the robot developed a “kinda lame” flopping motion to move itself.
This worries me a bit. Without guidance, how would the robot make “better” decisions or improve on its choices? Isn’t this as bad as leaving children to figure out how to walk or ride a bike without showing them how to do it? If they come up with a solution that works for them, we can’t be upset with the solution, surely? If they make decisions that are harmful or annoying to humans, what then? And worse, whether robots or children, do we want them to communicate their solutions to others?
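Lipson’s reward question can be sketched in a few lines. Below is a toy hill-climbing loop, not his actual system: the “robot” is just a parameter vector, the “physics” is a fabricated scoring function I invented for illustration, and the only reward is distance covered, so whatever flopping motion scores best survives, however lame it looks:

```python
import random

random.seed(0)  # deterministic run for illustration

def distance_traveled(gait: list[float]) -> float:
    # Stand-in for a physics simulation: an invented scoring function
    # maximized when each leg parameter is 0.5. The optimizer neither
    # knows nor cares whether the motion looks like a "cool spider
    # walk" -- only this number matters.
    return sum(g * (1.0 - g) for g in gait)

def evolve(steps: int = 500) -> list[float]:
    gait = [random.random() for _ in range(4)]   # one parameter per leg
    for _ in range(steps):
        mutant = [min(1.0, max(0.0, g + random.gauss(0, 0.1))) for g in gait]
        if distance_traveled(mutant) > distance_traveled(gait):
            gait = mutant                        # keep whatever works
    return gait

best = evolve()
print(round(distance_traveled(best), 3))  # close to the optimum of 1.0
```

The point of the sketch is that nothing in the loop encodes “graceful” or “safe”; the reward function is the only guidance the robot gets, which is exactly the worry above.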
Sankar (2012), on the other hand, suggested that the best way to move forward was to “reduce the friction” in the interface between humans and robots. Robots, he said, are great at doing mindless things or crunching big data sets, but humans are really good at asking the questions that make the robots do the crunching. He showed a variety of data-manipulating algorithms which found creative new insights in cancer screenings and image manipulation. But there has to be someone asking the question that says, hey, look at this data or this trend. Some trends will certainly be mere statistical correlation, not causation. How will the machines figure that out? Will we allow them to vote or make decisions about human lives? How would we test that their decisions are good ones? Then again, how do we test politicians?
This big data stuff is amazing and interesting, but I think it’s beyond the simple questions I want to ask: how do we validate the robots’ decisions? How do we help the robots make better decisions?
Kumar (2012) suggested that “swarms” of robots would work better, especially if they could be built to work together the way ants do. There’s no central authority, yet the ants build huge colonies and manage to feed everyone and remove waste. Kumar demonstrated construction robots that could take raw materials and assemble them into elaborate structures using small programs. Again, how would you test the robots to make sure they don’t “accidentally” build deathtraps for the humans, or interfere with traffic, or whatever? And if a competitor’s robot were removing pieces of the structure, would the robots know?
My original test was trying to show that a robot can be overwhelmed with data and dither about making good choices. These examples of brute-force solutions are very interesting, but they are still “acceptance” tests, where we ask the robot to do something and, if it succeeds, we call it a pass. We still aren’t looking for ways to confuse the robots or ask them to do things that are at the edge of their programming. I think some serious research needs to be done here.
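The difference between acceptance testing and the adversarial testing I’m arguing for fits in a few lines. The controller below is deliberately naive and entirely invented for illustration:

```python
def choose_action(obstacle_distances: list[float]) -> str:
    """A deliberately naive robot controller, invented for illustration:
    brake if the nearest obstacle is closer than 5 meters, else cruise."""
    if not obstacle_distances:
        return "cruise"
    nearest = min(obstacle_distances)
    return "brake" if nearest < 5.0 else "cruise"

# Acceptance tests: ask the robot to do the expected thing. They pass.
assert choose_action([50.0]) == "cruise"
assert choose_action([2.0]) == "brake"

# Edge-case tests: confuse it with inputs at the margin of its programming.
assert choose_action([]) == "cruise"   # no sensor data at all: is cruising safe?
print(choose_action([float("nan")]))   # a NaN sensor reading: nan < 5.0 is False,
                                       # so the robot cruises straight at the obstacle
```

Every acceptance test passes, yet the NaN case quietly sails the robot into the obstacle. That gap, between “did what we asked” and “survives inputs we didn’t think to ask about,” is exactly the testing problem.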

References
Kumar, V. (2012, February). Robots that fly and cooperate. Retrieved from http://www.ted.com/talks/vijay_kumar_robots_that_fly_and_cooperate

Lipson, H. (2007, March). Building self-aware robots. Retrieved from http://www.ted.com/talks/hod_lipson_builds_self_aware_robots

Sankar, S. (2012, June). The rise of human-computer cooperation. Retrieved from http://www.ted.com/talks/shyam_sankar_the_rise_of_human_computer_cooperation