Testing Our New Robot Overlords

I was recently confronted with the scope of what I’m talking about while discussing my general research efforts:

The robots will be “self-learning,” and we will want to stop them from learning things that are not relevant or that we don’t want them to know. We might want a robot that makes sandwiches to know about foods, allergies, presentation, and cultural traditions related to food, but we probably don’t want it to know about politics or voting or travel or whatever… Whatever we do to “focus” the robot’s learning will eventually be applied to our children, which might bring us back to the days of tests that determine your career future…

If the robot makes a “bad decision” based on what it has learned, or because it had to choose between two bad options, who is to blame? The tester? The coder? The builder? The owner? The robot? How would we even punish the robot? And how is this really different from blaming the parents or the teachers when kids do dumb things?

If the robot is a car and it has to decide which other car to hit because it’s going to be in an accident, how does it determine which car is the better target? Will the robot know something about the relative “worth” of the other vehicles? Their cargo? Is a car with four people in it worth more than a car with one? The insurance people would argue yes, I imagine. And if the robot has this worthiness data, how long will it be before an after-market device is sold that tells other robots this particular vehicle is “high value,” in an effort to protect individual property?

I realize this is all outside the scope of what I’m doing, and that greater minds than mine will have to address these issues, especially the ones that touch the human condition! But it’s sobering to see how large this field could become. This is not just testing a new phone app!