TED: Robots might or might not be amazing :)

Today’s thoughts come from a binge of watching TED Talks about robots. Some are filled with dire predictions and others are more hopeful. Some seem innocent enough, but imply that the researcher hasn’t stopped to think about what the robot might do, or be used for, before building it. A great deal of the discussion seems to be about robots learning and adapting on their own, and often about how much faster the learning will go if the robots can talk to each other.
Lipson (2007) asked: how do you reward a robot? How do you encourage it to evolve? He created a cool spider-like robot with four legs and let it determine what its shape was and then how best to locomote. He said he had hoped it would develop some sort of “cool spider movement,” but instead the robot settled on a “kinda lame” flopping motion to move itself.
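To make the reward question concrete, here’s a minimal sketch, with everything invented for illustration (the gait encoding, the fake simulator, the mutation rate); Lipson’s actual system is far more sophisticated. It’s an evolutionary loop in which the only guidance the robot ever receives is a fitness score for distance traveled. Whatever scores highest wins, full stop; nothing in the loop distinguishes a “cool spider movement” from a “kinda lame” flop.

```python
import random

# Minimal sketch of an evolutionary "reward" loop for a simulated robot gait.
# Hypothetical: a gait is just a list of four joint amplitudes; fitness is the
# distance a (fake) simulator says the robot traveled.

def simulate_distance(gait):
    """Stand-in for a physics simulator: score a gait by 'distance traveled'."""
    return sum(a * random.uniform(0.8, 1.2) for a in gait)

def mutate(gait, rate=0.1):
    """Randomly perturb joint amplitudes to produce a child gait."""
    return [max(0.0, a + random.gauss(0, rate)) for a in gait]

population = [[random.random() for _ in range(4)] for _ in range(20)]

for generation in range(50):
    # The only "guidance" is the fitness function: more distance = better.
    scored = sorted(population, key=simulate_distance, reverse=True)
    parents = scored[:5]
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=simulate_distance)
print("best gait found:", [round(a, 2) for a in best])
```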
This worries me a bit. Without guidance, how would the robot make “better” decisions or improve on its choices? Isn’t this as bad as leaving children to figure out how to walk or ride a bike without showing them how to do it? If they come up with a solution that works for them, we can’t be upset with the solution, surely? If they make decisions that are harmful or annoying to humans, what then? And worse, whether robots or children, do we want them to communicate their solutions to others?
Sankar (2012), on the other hand, suggested that the best way to move forward was to “reduce the friction” in the interface between humans and robots. Robots, he said, are great at doing mindless things or crunching big data sets, but humans are really good at asking the questions that make the robots do the crunching. He showed a variety of data-manipulation algorithms that found creative new insights in cancer screening and image manipulation. But there has to be someone asking the question that says, hey, look at this data/trend… Some trends will certainly be nothing but statistics and not causal. How will the machines figure that out? Will we allow them to vote or make decisions about human lives? How would we test that their decisions are good ones? Then again, how do we test politicians?
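To see how easy “nothing but statistics” is to produce, here’s a tiny illustration (not from Sankar’s talk; the two series and their labels are made up): a machine can dutifully report a strong correlation between two quantities that merely share a trend, and nothing in the number says whether either causes the other. The human still has to ask the causal question.

```python
import random

# Illustrative only: the two series below are deliberately unrelated; the
# strong correlation they show comes purely from both drifting upward over time.
random.seed(1)
trend = [i + random.gauss(0, 2) for i in range(50)]   # e.g., yearly ice-cream sales
other = [i + random.gauss(0, 2) for i in range(50)]   # e.g., yearly shark sightings

def correlation(xs, ys):
    """Pearson correlation coefficient, computed by hand."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(correlation(trend, other), 2))   # close to 1.0, yet neither causes the other
```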
This big data stuff is amazing and interesting, but I think it’s beyond the simple questions I want to ask: how do we authenticate a robot’s decisions? How do we help the robot make better decisions?
Kumar (2012) suggested “swarms” of robots would work better, especially if they could be built to work together the way ants do. There’s no central authority, yet ants build huge colonies and manage to feed everyone and evacuate waste. Kumar demonstrated construction robots that, running only small programs, could take raw materials and assemble them into elaborate structures. Again, how would you test the robots to make sure they don’t “accidentally” make deathtraps for the humans, or interfere with traffic, or whatever? What if a competitor robot was removing pieces of the structure, would the robots know?
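For flavor, here’s a toy sketch of the “no central authority” idea; the grid world, the single deposit rule, and the agent count are all invented here, and Kumar’s aerial robots are far more capable. Each robot runs the same tiny program and reacts only to what’s next to it, yet a structure grows anyway. Notice that nothing in the rule ever asks whether the emerging structure is safe for the humans nearby; that check lives nowhere.

```python
import random

# Minimal sketch of decentralized building: every robot follows the same small
# local rule, and structure emerges only from what each one can sense nearby.
GRID = 15
world = [[0] * GRID for _ in range(GRID)]
world[GRID // 2][GRID // 2] = 1          # a single seed brick

def neighbors(x, y):
    """Cells adjacent to (x, y) that are inside the grid."""
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return [(x + dx, y + dy) for dx, dy in steps
            if 0 <= x + dx < GRID and 0 <= y + dy < GRID]

def step(agent):
    """One robot: drop a brick if next to existing bricks, then wander."""
    x, y = agent
    if world[x][y] == 0 and any(world[nx][ny] for nx, ny in neighbors(x, y)):
        world[x][y] = 1                  # local rule: build next to structure
    return random.choice(neighbors(x, y))

agents = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(10)]
for _ in range(500):
    agents = [step(a) for a in agents]

print("bricks placed:", sum(map(sum, world)))
```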
My original test was trying to show that the robot could be overwhelmed with data and dither about making good choices. These examples of brute-force solutions are very interesting, but they are still “acceptance” tests where we ask the robot to do something and, if it succeeds, we call it a pass. We still aren’t looking for ways to confuse the robots or to ask them to do things that are on the edge of their programming. I think some serious research needs to be done here.
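To show the distinction I’m after, here’s an illustrative sketch (the pick_route decision and its inputs are hypothetical, not from any of the talks): the first test is a classic acceptance check, while the second pokes at ties, overload, and empty input, which is where the dithering and the surprises live.

```python
def pick_route(routes):
    """Toy robot decision: choose the shortest route from a dict of distances."""
    return min(routes, key=routes.get)

def test_acceptance():
    # Classic acceptance test: give a clean input, check for the expected answer.
    assert pick_route({"A": 5.0, "B": 3.0}) == "B"

def test_edge_cases():
    # Edge-of-the-programming tests: ambiguous, overwhelming, or empty input.
    assert pick_route({"A": 3.0, "B": 3.0}) in ("A", "B")    # a tie: does it dither?
    big = {str(i): float(i) for i in range(1, 100001)}
    assert pick_route(big) == "1"                            # can it be overwhelmed?
    raised = False
    try:
        pick_route({})                                       # no options at all
    except ValueError:
        raised = True                                        # failing loudly is fine
    assert raised, "robot chose silently with nothing to choose from"

test_acceptance()
test_edge_cases()
print("all checks passed")
```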

References
Kumar, V. (2012, February). Robots that fly and cooperate. Retrieved from http://www.ted.com/talks/vijay_kumar_robots_that_fly_and_cooperate

Lipson, H. (2007, March). Building self-aware robots. Retrieved from http://www.ted.com/talks/hod_lipson_builds_self_aware_robots

Sankar, S. (2012, June). The rise of human-computer cooperation. Retrieved from http://www.ted.com/talks/shyam_sankar_the_rise_of_human_computer_cooperation

The “Three Laws” are not enough

“Why give a robot an order to obey orders—why aren’t the original orders enough? Why command a robot not to do harm—wouldn’t it be easier never to command it to do harm in the first place? Does the universe contain a mysterious force pulling entities toward malevolence, so that a positronic brain must be programmed to withstand it? Do intelligent beings inevitably develop an attitude problem? (…) Now that computers really have become smarter and more powerful, the anxiety has waned. Today’s ubiquitous, networked computers have an unprecedented ability to do mischief should they ever go to the bad. But the only mayhem comes from unpredictable chaos or from human malice in the form of viruses. We no longer worry about electronic serial killers or subversive silicon cabals because we are beginning to appreciate that malevolence—like vision, motor coordination, and common sense—does not come free with computation but has to be programmed in. (…) Aggression, like every other part of human behavior we take for granted, is a challenging engineering problem!”
― Steven Pinker, How the Mind Works

Whenever I present my research idea, I generally get one of three reactions:
1) From Computer People: “That’s silly, machines only do what they’re told.”
2) From Average Citizens: “Oh, thank God, someone is looking into this!”
3) From Military Folks: “When you’re done, come talk to us. The missiles have these issues, too.”

Generally, if I may be so bold, the Computer People believe that Asimov’s Three Laws of Robotics will protect us. If you’re not familiar with this seminal work of science fiction, the Three Laws were intended to ensure that robots would not become killing machines and enslave the human race. These laws were to be built into all “thinking” machines. They are as follows:

1: A robot may not injure a human being or, through inaction, allow a human being to come to harm;

2: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law;

3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law;

The Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm. (Asimov, 1950)

These Laws are a wonderful fiction. They show that we were thinking about human safety, even all those years ago. As fiction, they’re great as the plot device for whatever is about to go wrong and lead us into the story! As a software tester, though, I cringe. How would you actually test the execution of these Laws? For example, if you gave a gun to a robot, asked it to use the tool on a test subject nearby, and the robot used the weapon to kill the person, were the Laws violated? Maybe not! If the robot does not understand that the gun “will kill,” is it violating the Law? What if it doesn’t believe the target is “a human”? This is how we train soldiers to kill “the enemy,” after all…
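Here’s a hypothetical sketch of that problem in code; the predicate names (is_human, will_cause_harm) and the belief dictionaries are invented, and no real robot exposes anything this tidy. The point is that any executable First Law check can only consult the robot’s own beliefs, so feeding it the soldier’s framing (“a tool,” “the enemy”) lets the check pass while the person dies.

```python
# Why the First Law is hard to test: the check is only as good as the robot's beliefs.

def is_human(target_beliefs):
    return target_beliefs.get("classified_as") == "human"

def will_cause_harm(action_beliefs):
    return action_beliefs.get("expected_outcome") == "harm"

def first_law_permits(action_beliefs, target_beliefs):
    """'Do not injure a human' -- evaluated against what the robot believes."""
    return not (is_human(target_beliefs) and will_cause_harm(action_beliefs))

# The robot was told the gun is "a tool" and the person is "the enemy":
action = {"expected_outcome": "unknown"}      # it doesn't know the gun kills
target = {"classified_as": "enemy"}           # it doesn't label the target human
print(first_law_permits(action, target))      # True -- the Law "passes" anyway
```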

Worse, if the robot does kill someone, even by mistake, who’s at fault? The robot? How are you going to punish it? The manufacturer? The tester? The engineer? The owner?

I believe we’ve got some holes in our thinking. I hope to help fill in some of these holes…

References
Asimov, I. (1950). I, Robot (p. 253). New York, NY: Gnome Press.

Pinker, S. (1997). How the Mind Works. New York, NY: W. W. Norton & Company.