There’s an app for that!

Today’s post may go in a strange direction. To me, what I’m about to say is almost so obvious that it doesn’t need to be said. We’ll see.

This week I received a new piece of medical technology. It was ordered nearly a month ago by doctors whom I had to wait nearly two months to see. It was supposed to be delivered last week, but the company “forgot,” then labelled the equipment as patient pick-up even though it had to be installed in my home.

By the time I actually received the device, I was beginning to wonder just how important it was to actually have it, a somewhat negative view, I admit.

The delivery guy came out with the component and wheeled it in, but he did not know how to attach it to the other piece of equipment I had already received, as he hadn’t been trained on the new model I had. I asked him several questions about the purpose of the machine and how it performed its task. The driver wasn’t terribly sure and couldn’t call back to the office for help because their communication equipment wasn’t working (you don’t have a phone?). He made some claims about the equipment which I was able to disprove while he watched, and it was clear he was reading from a memorized script and didn’t know how the machine worked at all. I shooed him out and connected the machine myself after signing waivers saying that if I died, I wouldn’t sue the company or the delivery driver.

This whole fiasco reminded me of the washing machine debacle from around Thanksgiving. I had found a screaming deal on a new washer/dryer that could be stacked in my basement. I wasn’t going to be home for a few days, so I had them deliver the machines a week later. They forgot, even though I had taken time off to be available. They sent the machines out the next day with a driver who was not authorized to install them because they were gas appliances. I argued with the delivery driver, who decided that company policy is to just bring the machines back to the shop if there’s any problem. A week and several calls later, the same kid brought the machines back but still did not know how to install them, or that they were even stackable. The guys working in my basement helped the kids figure out how to install the machines, but in the meantime the kids had accidentally broken off a component of the washer’s drainage system.

[Image: wirebundle]

So here are my observations, which are relevant to this article:
1) If you don’t know what your job is, eventually a robot is going to replace you doing it. A robot cannot make decisions on its own, so it’s going to need failover rules to fall back on when things go wrong (see the sketch after this list). The robot would have reasonably taken the washer and dryer back to the shop when confronted with a gas fixture or an irate homeowner. A human should be able to figure these things out.
2) Conversely, our society seems to be “dumbing things down” to the point where we can off-shore tasks that are technical in nature by defining scripts that people can read to their customers, even if they don’t understand the content. If we have to do that, again a robot can take over the task.
3) With the building backlash against off-shored tech support and banking and who knows what all, the robots are going to have to make major strides forward to be able to handle what we now seem to consider easily scriptable processes.
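Here’s the kind of failover logic I mean, as a minimal sketch in Python. The situations, the actions, and the rule table are all invented for illustration; the point is only that the default behavior has to be chosen by somebody.

```python
# A minimal sketch of hard-coded failover rules for a hypothetical delivery robot.
# All situations and actions here are invented for illustration.

FALLBACK_RULES = {
    "gas_hookup_required":  "request_licensed_installer",
    "customer_irate":       "escalate_to_human_dispatcher",
    "installation_unclear": "call_office_for_instructions",
}

def decide(situation: str) -> str:
    """Return the scripted action for a known situation, or the blanket default."""
    # The blanket default -- "return everything to the shop" -- is exactly the
    # behavior I got from the human driver. A robot with nothing better coded
    # in would do the same thing.
    return FALLBACK_RULES.get(situation, "return_to_shop")

print(decide("gas_hookup_required"))  # request_licensed_installer
print(decide("dog_in_yard"))          # return_to_shop (unhandled -> default)
```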

Maybe the robots aren’t going to come quite as quickly as we have been thinking!

I see I have a “futuring” course in the next year at school. I’m curious to see where that class thinks we may end up… and what part I have to play in the innovation.

Testing Our New Robot Overlords

I was confronted with the scope of what I am talking about recently when discussing my general research efforts:

The robots will be “self learning” and we will want to stop them from learning things that are not relevant or that we don’t want them to know. We might want a robot that makes sandwiches to know about foods and allergies and presentation and cultural items related to food, but we probably don’t want it to know about politics or voting or travel or whatever… Whatever we do to “focus” the robot’s learning will eventually be applied to our children, which might bring us back to the days of tests that determine your career future…
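To make “focusing” concrete, here is a minimal sketch of filtering a learner’s training material against a topic allowlist. The topic labels and the classify() stub are hypothetical stand-ins, not any real system.

```python
# A minimal sketch of "focusing" a learner by filtering its training material.
# The topic labels and the classify() stub are hypothetical.

ALLOWED_TOPICS = {"food", "allergies", "presentation", "food_culture"}

def classify(document: str) -> str:
    """Stand-in for a real topic classifier; here we just tag by keyword."""
    if "election" in document or "vote" in document:
        return "politics"
    return "food"

def filter_corpus(documents):
    """Keep only documents whose topic is on the allowlist."""
    return [d for d in documents if classify(d) in ALLOWED_TOPICS]

corpus = ["How to plate a sandwich", "Who to vote for this election"]
print(filter_corpus(corpus))  # only the sandwich document survives
```

Of course, whoever writes that allowlist is making exactly the kind of career-shaping decision I’m worried about.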

If the robot makes a “bad decision” based on what it has learned or because it had to choose between two bad choices, who is to blame? The tester? The coder? The builder? The owner? The robot? How would we even punish the robot? And how is this really different from blaming the parents or the teachers when kids do dumb things?

If the robot is a car and it has to decide which other car to hit because it’s going to be in an accident, how does it determine which car is a better target? Will the robot know something about the relative “worth” of the other vehicles? Their cargoes? Is a car with 4 people in it worth more than a car with 1? The insurance people would argue yes, I imagine. If the robot has this worthiness data, how long will it be before an after-market device is sold that tells the robot this particular vehicle is “high value,” in an effort to protect individual property?
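Here’s a purely illustrative sketch of that calculation. The weighting scheme and the broadcast “declared value” field are my inventions; no real vehicle advertises anything like this, which is exactly why the after-market spoofing worry is worth raising now.

```python
# A purely illustrative sketch of the "which car do I hit" calculation.
# The weighting scheme and the broadcast 'declared_value' field are invented.

def collision_cost(occupants: int, declared_value: float) -> float:
    # Weight people far above property -- but note the car itself reports
    # 'declared_value', which is exactly what an after-market spoofing
    # device would inflate to make its owner's car an unattractive target.
    return occupants * 1_000_000 + declared_value

car_a = {"occupants": 4, "declared_value": 30_000}
car_b = {"occupants": 1, "declared_value": 30_000}

# The robot steers toward the lower-cost target.
target = min((car_a, car_b),
             key=lambda c: collision_cost(c["occupants"], c["declared_value"]))
print(target)  # car_b -- one occupant "costs" less than four
```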

I realize this is all outside the scope of what I’m doing and that greater minds than mine have to address these issues. Especially on issues which address the human condition! But, it’s sobering to see how large this field could become. This is not just testing a new phone app!

Cybernetics in the Real World

I spent this weekend at COSine, a science fiction writers’ conference here in Colorado Springs. I participated in several panels about the state of the world, especially for cybernetics.

The conversations were quite lively but ended up being debates about “how cool the technology could be” interspersed with discussion on whether we should “require” people to accept the augmentations. I suggested it wasn’t terribly different from the Borg (in Star Trek: The Next Generation) meeting new species and immediately implanting them with the technology that makes the Borg hive-mind work. The panelists likened the practice to forcing all deaf children to receive a cochlear implant. A very spirited discussion ensued.

Afterwards, I apologized to the moderator for hijacking the discussion like that, and she said that while that was an interesting discussion, she was more intrigued by my “throwaway” question about how the augmented would be considered in our society:
Right now, there’s some stigma attached to people with artificial limbs, pacemakers, insulin pumps, and the like. People who augment themselves with drugs for performance are stricken from the record books because they aren’t “fair,” or, more accurately, not purely human.

And this leads back to the robot question. How do we determine what is “beneficial” and what is “useful”? How do we differentiate between things that help but pollute, for instance? These are tricky questions and I am somewhat concerned about the outcome.

TED: Robots might or might not be amazing :)

Today’s thoughts come from a binge of watching TED Talks about robots. Some are filled with dire predictions and others are more hopeful. Some seem innocent enough but imply that the researcher hasn’t stopped to think about what the robot might do, or be used for, before developing the robot. A great deal of the discussion seems to be about robots learning and adapting on their own. And often how much faster the learning will go if the robots can talk to each other.
Lipson (2007) asked: how do you reward a robot? How do you encourage it to evolve? He created a cool spider-like robot with four legs and let it determine what its shape was and then how best to locomote. He said he had hoped it would develop some sort of “cool spider movement,” but instead the robot developed a “kinda lame” flopping motion to move itself.
This worries me a bit. Without guidance, how would the robot make “better” decisions or improve on its choices? Isn’t this as bad as leaving children to figure out how to walk or ride a bike without showing them how to do it? If they come up with a solution that works for them, we can’t be upset with the solution, surely? If they make decisions that are harmful or annoying to humans, what then? And worse, whether robots or children, do we want them to communicate their solutions to others?
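To make the “reward” question concrete, here is a toy sketch of the kind of unguided search Lipson describes. It is not his system; the gait representation and the distance-based reward are made up.

```python
import random

# A toy sketch of unguided evolutionary search -- not Lipson's actual system.
# The "gait" is just four numbers and the reward is an invented distance score.

def distance_travelled(gait):
    # Hypothetical reward: nothing here says the motion has to be graceful,
    # only that it covers ground.
    return sum(abs(step) for step in gait) - 0.5 * max(abs(s) for s in gait)

def mutate(gait):
    new = list(gait)
    new[random.randrange(len(new))] += random.uniform(-0.2, 0.2)
    return new

gait = [0.0] * 4                      # four "legs", all idle
best = distance_travelled(gait)
for _ in range(1000):                 # blind hill-climbing, no guidance
    candidate = mutate(gait)
    score = distance_travelled(candidate)
    if score > best:
        gait, best = candidate, score

print(gait, best)  # whatever flops out of the search is "the answer"
```

Nothing in that loop prefers a cool spider gait over a lame flop; whatever scores highest wins, which is exactly the worry.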
Sankar (2012), on the other hand, suggested that the best way to move forward was to “reduce the friction” in the interface between humans and robots. Robots, he said, are great at doing mindless things or crunching big data sets, but humans are really good at asking the questions that make the robots do the crunching. He showed a variety of data-manipulating algorithms which found new, creative insights into cancer screenings or image manipulation. But there has to be someone asking the question that says, hey, look at this data/trend… Some trends will certainly be nothing but statistics and not causal. How will the machines figure that out? Will we allow them to vote or make decisions about human lives? How would we test that their decisions are good ones? Then again, how do we test politicians?
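On the statistics point, here is a small sketch of why a human skeptic still matters: two completely independent random walks will routinely show a sizeable correlation, so “look at this trend” is not the same as “look at this cause.” The series below are synthetic.

```python
import numpy as np

# Spurious correlation demo: independent random walks often look "related".
rs = []
for seed in range(20):
    rng = np.random.default_rng(seed)
    a = rng.normal(size=500).cumsum()   # independent random walk #1
    b = rng.normal(size=500).cumsum()   # independent random walk #2
    rs.append(abs(np.corrcoef(a, b)[0, 1]))

print(f"median |correlation| across 20 unrelated pairs: {np.median(rs):.2f}")
```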
This big data stuff is amazing and interesting, but I think it’s beyond the simple questions I want to ask: how do we validate the robots’ decisions? How do we help the robots make better decisions?
Kumar (2012) suggested “swarms” of robots would work better, especially if they could be built to work together like ants do. There’s no central authority, but the ants build huge colonies and manage to feed everyone and evacuate waste. Kumar demonstrated construction robots that could assemble raw materials into elaborate structures using small programs. Again, how would you test the robots to make sure they don’t “accidentally” build death traps for the humans, or interfere with traffic, or whatever? And if a competitor robot were removing pieces of the structure, would the robots know?
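As a toy sketch of what “no central authority” means, here is one decentralized rule in Python: each ant moves a brick toward the larger neighboring pile, and tall piles emerge with nobody in charge. This is entirely illustrative and bears no resemblance to Kumar’s actual quadrotors.

```python
import random

# Decentralized building, toy version: 20 sites on a ring, one brick each.
# Each "ant" applies a single local rule; structure emerges without a planner.
random.seed(1)
piles = [1] * 20

for _ in range(500):
    i = random.randrange(len(piles))          # an ant visits a random site
    if piles[i] == 0:
        continue
    left, right = piles[i - 1], piles[(i + 1) % len(piles)]
    j = (i - 1) if left >= right else (i + 1) % len(piles)
    piles[i] -= 1                             # local rule: carry one brick
    piles[j] += 1                             # to the bigger neighboring pile

print(piles)  # bricks tend to cluster into a few tall piles
```

And note what’s missing: nothing in that rule checks whether the emerging pile blocks a doorway, or whether a rival is quietly carrying bricks away.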
My original test was trying to show that the robot could be overwhelmed with data and dither about making good choices. These examples of brute-force solutions are very interesting, but they are still “acceptance” tests: we ask the robot to do something and, if it succeeds, we call it a pass. We still aren’t looking for ways to confuse the robot or to ask it to do things that are on the edge of its programming. I think some serious research needs to be done here.
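To show the difference I’m after, here is a sketch of an acceptance test next to the kind of “overwhelm it and see if it dithers” test I have in mind. The plan_route controller is a hypothetical stand-in, not any real robot API.

```python
import time

# Acceptance testing vs. adversarial "dither" testing, with a stand-in planner.

def plan_route(obstacles):
    """Stand-in controller: pretend planning cost grows with input size."""
    time.sleep(min(len(obstacles) * 0.0001, 0.5))
    return ["forward"]

def test_acceptance():
    # Typical acceptance test: one friendly input, pass if it answers at all.
    assert plan_route(obstacles=[(1, 2)]) != []

def test_dither_under_load():
    # Adversarial test: flood the planner and demand a decision within a hard
    # deadline, because "still thinking" is a failure for a robot in motion.
    start = time.monotonic()
    plan_route(obstacles=[(x, x) for x in range(100_000)])
    assert time.monotonic() - start < 0.1, "planner dithered past its deadline"

test_acceptance()
try:
    test_dither_under_load()
except AssertionError as e:
    print(e)   # this stand-in planner fails the deadline test
```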

References
Kumar, V. (2012, February). Robots that fly and cooperate. Retrieved from http://www.ted.com/talks/vijay_kumar_robots_that_fly_and_cooperate

Lipson, H. (2007, March). Building self-aware robots. Retrieved from http://www.ted.com/talks/hod_lipson_builds_self_aware_robots

Sankar, S. (2012, June). The rise of human-computer cooperation. Retrieved from http://www.ted.com/talks/shyam_sankar_the_rise_of_human_computer_cooperation

New Year, New Beginnings, New Posts

I came back from the CTU Doctoral Symposium this weekend energized to write. One of the speakers reminded us that, as doctoral students, we should be writing. Every day. Even if we have little to say, we should sit before our writing tool of choice and begin writing. By the end of the year, we’ll have made great strides in learning how to communicate our thoughts.
I didn’t start yesterday because I was so worn out from the Symposium that I just couldn’t do it. So here I am today. My plan is to write my entry for the day, then post it to whichever blog is relevant. Some entries will be personal and appear on that blog; others will focus on robots or other tech and be posted there.

As a Mason, I have been taught that no great undertaking should be begun without first invoking the Blessings of Deity. It took me a bit to find a prayer I liked, but here goes:

Dear Creator,
Give me a few friends
who will love me for what I am,
and keep ever burning
before my vagrant steps
the kindly light of hope…
And though I come not within sight
of the castle of my dreams,
teach me to be thankful for life,
and for time’s olden memories
that are good and sweet.
And may the evening’s twilight
find me gentle still.

And because I’m a Doctoral Student,
References
Birch, J. (n.d.). Celtic prayers and blessings. Retrieved from http://www.faithandworship.com/Celtic_Blessings_and_Prayers.htm

The “Three Laws” are not enough

“Why give a robot an order to obey orders—why aren’t the original orders enough? Why command a robot not to do harm—wouldn’t it be easier never to command it to do harm in the first place? Does the universe contain a mysterious force pulling entities toward malevolence, so that a positronic brain must be programmed to withstand it? Do intelligent beings inevitably develop an attitude problem? (…) Now that computers really have become smarter and more powerful, the anxiety has waned. Today’s ubiquitous, networked computers have an unprecedented ability to do mischief should they ever go to the bad. But the only mayhem comes from unpredictable chaos or from human malice in the form of viruses. We no longer worry about electronic serial killers or subversive silicon cabals because we are beginning to appreciate that malevolence—like vision, motor coordination, and common sense—does not come free with computation but has to be programmed in. (…) Aggression, like every other part of human behavior we take for granted, is a challenging engineering problem!”
― Steven Pinker, How the Mind Works

Whenever I present my research idea, I generally get one of three reactions:
1) From Computer People: “That’s silly, machines only do what they’re told.”
2) From Average Citizens: “Oh, thank God, someone is looking into this!”
3) From Military Folks: “When you’re done, come talk to us. The missiles have these issues, too.”

Generally, if I may be so bold, the Computer People believe that Asimov’s Three Laws of Robotics will protect us. If you’re not familiar with this seminal work of science fiction, the Three Laws were intended to ensure that robots would not become killing machines and enslave the human race. These laws were to be built into all “thinking” machines. They are as follows:

1: A robot may not injure a human being or, through inaction, allow a human being to come to harm;

2: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law;

3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Asimov, 1950)

The Zeroth Law, which Asimov added in later novels: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

These Laws are a wonderful fiction. They show that we were thinking about human safety, even all those years ago. For fiction, they’re great as the plot device for whatever is about to go wrong and lead us into the story! As a software tester, I cringe. How would you actually test the execution of these laws? For example, if you gave a gun to a robot and asked it to use the tool on a test subject nearby, and the robot used the weapon to kill the person, were the Laws violated? Maybe not! If the robot does not understand that the gun “will kill,” is it violating the law? How about if it doesn’t believe the target is “a human”? This is how we train soldiers to kill “the enemy,” after all…
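As a tester, I’d sketch the problem like this: any executable version of the First Law can only check the robot’s beliefs, not the facts. The belief flags and the first_law_permits check below are invented for illustration.

```python
# A sketch of why the First Law is untestable as stated: the check can only
# run against what the robot believes. The flags and the check are invented.

def first_law_permits(action, beliefs):
    """Block an action only if the robot believes it harms a human."""
    return not (beliefs.get("target_is_human") and beliefs.get("tool_is_lethal"))

# The robot holding the gun in my example:
beliefs = {"target_is_human": True, "tool_is_lethal": False}  # doesn't "know" guns kill
print(first_law_permits("pull_trigger", beliefs))   # True -- the Law never fires

beliefs = {"target_is_human": False, "tool_is_lethal": True}  # "the enemy", not "a human"
print(first_law_permits("pull_trigger", beliefs))   # True again
```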

Worse, if the robot does kill someone, even by mistake, who’s at fault? The robot? How are you going to punish it? The manufacturer? The tester? The engineer? The owner?

I believe we’ve got some holes in our thinking. I hope I help fill in some of these holes…

References
Asimov, I. (1950). I, Robot (p. 253). New York, NY: Gnome Press.

Pinker, S. (1997). How the Mind Works. New York, NY: W. W. Norton & Company.

Any Tool Can Be the Right Tool…

This picture makes many people cringe.

[Image: wrongtool]

In the software world, we repurpose many, many tools from the manufacturing world. These tools, like PDCA, Six Sigma, and Kanban, have long, successful histories in manufacturing. Software people grab them because they work. But do they work as well in software? Sometimes. Sometimes they would work better with a bit of tweaking. Sometimes they are just not the right tool for what we are trying to do.

For example, how many projects are there that count defects per KLOC (thousand lines of code)? This was a predecessor to the Six Sigma process we now have. It sounds like an interesting metric, but what does it mean? Should I be relieved when I get a score of 3.1, or concerned? If there is only 1 defect in total, I should be happy, right? Maybe. But what’s more likely: that testing hasn’t “really started,” that the coding effort is much better this cycle, or that there’s something so catastrophically wrong with the code that no one can test further?
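The arithmetic itself is trivial, which is part of the problem; the same score can describe wildly different projects. The figures below are made up.

```python
# Defects per KLOC: same number, very different stories. Figures are invented.

def defects_per_kloc(defects: int, lines_of_code: int) -> float:
    return defects / (lines_of_code / 1000)

print(defects_per_kloc(31, 10_000))    # 3.1  -- small module, thorough testing
print(defects_per_kloc(310, 100_000))  # 3.1  -- ten times the defects, ten times the code
print(defects_per_kloc(1, 323))        # ~3.1 -- one defect found before testing stalled
```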

As any aficionado of Tim “The Toolman” Taylor knows, “any tool can be the right tool.” GIs are notorious for repurposing tools. So are stoners, the incarcerated, and anyone who needs to do something “now” that they aren’t properly equipped for. Don’t believe me? When I taught at a “drop-out high school,” I was constantly amazed at how quickly random items could be converted into pot pipes. It was remarkable how much engineering the students could do if they were interested in the outcome. And we usually squandered those abilities with “busy work.” But this is not a STEM education article. 🙂

Maybe our company has a rule that blocks our folks from doing effective work, which they have to “work around” to get their jobs done. Maybe we don’t have the budget to buy/build the tool we really need. But could we “fab” (fabricate) something that would meet our needs? Will we allow/sponsor our employees to do that?

I once worked for a company that decided we had too large a backlog of Category 1 defects. All other work was to stop until the Cat1’s could be resolved. They installed a triage process to make sure only “important” work got done. Seemed reasonable. Except that we had hundreds of little Cat3’s on the books, too. These were defects that mostly took under two hours to fix, and they would never, by definition, be worked on. Worse, they made our defect inventory look huge! My group developed a tool called “QuickPicks” which inventoried the Cat3’s and referenced them to larger work that was going on. The thought was that if you were working on a module to resolve a Cat1, you were to look at the QuickPick list to see if any of the Cat3’s applied to the same module and “just fix it” if the effort to do so was under an (arbitrary) hour of work. By the time we got the Category 1’s under control, we had reduced the Category 3 inventory by about 50%. This was “free work” getting done to keep our business customers happy while resolving larger work. How much did it cost? A meeting done over several lunches where the team was b– er, “discussing” projects, a web-enabled Excel sheet (basically stored on an existing SharePoint site), and the willingness of management to let us present the tool to the development team and encourage them to use it. All in all, basically a “free” process-improvement effort.
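For the curious, the core of QuickPicks was little more than a filtered lookup. This sketch is reconstructed from memory; the field names and the one-hour threshold are mine, not the original spreadsheet’s.

```python
# A sketch of the QuickPicks idea; data layout and threshold are reconstructed
# from memory, not the original Excel sheet.

CAT3_BACKLOG = [
    {"id": "C3-101", "module": "billing",  "estimate_hours": 0.5},
    {"id": "C3-102", "module": "billing",  "estimate_hours": 3.0},
    {"id": "C3-103", "module": "shipping", "estimate_hours": 0.75},
]

def quick_picks(module_under_repair: str, max_hours: float = 1.0):
    """List the small Cat3 fixes that ride along with a Cat1 fix in the same module."""
    return [d for d in CAT3_BACKLOG
            if d["module"] == module_under_repair
            and d["estimate_hours"] <= max_hours]

# Working a Cat1 in the billing module? Fix these while you're in there:
print(quick_picks("billing"))   # [C3-101]; C3-102 is too big to tag along
```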

So what’s the right tool for what you’re doing? There’s a better question: What’s the purpose of the measurement or tool? What are we hoping to learn from it? Can we compare it to something else or take action based on the measure? Do we need to take action or is the information we’re gathering, well, informational? And can we display the information graphically so we can communicate it to the stakeholders who need to have it?

As engineers, we need to make sure we are properly equipped to complete our tasks. As engineers, we need to be involved in the improvement of our workplaces and industry. We need to feel like the metrics we collect benefit us and aren’t just “busy work” which has been “imposed” by management. As engineers, we need to take a moment to figure out what the requirements are, even for our tools, then check them to see if they are meeting those requirements. If they aren’t, we have processes available to us to improve the tools (or the requirements). Let’s use them!