{"id":68,"date":"2015-01-21T12:41:58","date_gmt":"2015-01-21T12:41:58","guid":{"rendered":"http:\/\/futureoftesting.net\/?p=68"},"modified":"2015-01-22T06:46:16","modified_gmt":"2015-01-22T06:46:16","slug":"68","status":"publish","type":"post","link":"https:\/\/futureoftesting.net\/?p=68","title":{"rendered":"TED: Robots might or might not be amazing :)"},"content":{"rendered":"<p>Today\u2019s thoughts come from a binge of watching TED Talks about robots.  Some are filled with dire predictions and others are more hopeful.  Some seem innocent enough but imply that the researcher hasn\u2019t stopped to think about what the robot might do, or be used for, before developing it.  A great deal of the discussion seems to be about robots learning and adapting on their own, and often about how much faster the learning will go if the robots can talk to each other.<br \/>\nLipson (2007) asked: how do you reward a robot?  How do you encourage it to evolve?  He created a cool spider-like robot with four legs and let it determine its own shape and then how best to locomote.  He said he had hoped it would develop some sort of \u201ccool spider movement,\u201d but instead the robot developed a \u201ckinda lame\u201d flopping motion to move itself.<br \/>\nThis worries me a bit.  Without guidance, how would the robot make \u201cbetter\u201d decisions or improve on its choices?  Isn\u2019t this as bad as leaving children to figure out how to walk or ride a bike without showing them how?  If they come up with a solution that works for them, we can\u2019t be upset with it, surely?  If they make decisions that are harmful or annoying to humans, what then?  And worse, whether robots or children, do we want them to communicate their solutions to others?<br \/>\nSankar (2012), on the other hand, suggested that the best way forward was to \u201creduce the friction\u201d in the interface between humans and robots.  
Robots, he said, are great at doing mindless things or crunching big data sets, but humans are really good at asking the questions that make the robots do the crunching.  He showed a variety of data-manipulating algorithms that found creative new insights into cancer screenings or image manipulation.  But there has to be a question that says, hey, look at this data\/trend\u2026  Some trends will certainly be nothing but statistics, not causation.  How will the machines figure that out?  Will we allow them to vote or make decisions about human lives?  How would we test that their decisions are good ones?  Then again, how do we test politicians?<br \/>\nThis big data stuff is amazing and interesting, but I think it\u2019s beyond the simple question I want to ask: how do we authenticate the robots\u2019 decisions?  How do we help the robots make better decisions?<br \/>\nKumar (2012) suggested that \u201cswarms\u201d of robots would work better, especially if they could be built to work together the way ants do.  There\u2019s no central authority, but the ants build huge colonies and manage to feed everyone and evacuate waste.  Kumar demonstrated building robots that could take raw materials and assemble them into elaborate structures using small programs.  Again, how would you test the robots to make sure they don\u2019t \u201caccidentally\u201d make deathtraps for humans, or interfere with traffic, or whatever?  What if a competitor robot were removing pieces of the structure; would the robots know?<br \/>\nMy original test was trying to show that a robot could be overwhelmed with data and dither about making good choices.  These examples of brute-force solutions are very interesting, but they are still \u201cacceptance\u201d tests: we ask the robot to do something and, if it succeeds, we call it a pass.  We still aren\u2019t looking for ways to confuse the robots or ask them to do things that are on the edge of their programming.  
I think some serious research needs to be done here.<\/p>\n<p>References<br \/>\nKumar, V. (February, 2012). Robots that fly and cooperate. Retrieved from <a href=\"http:\/\/www.ted.com\/talks\/vijay_kumar_robots_that_fly_and_cooperate\">http:\/\/www.ted.com\/talks\/vijay_kumar_robots_that_fly_and_cooperate<\/a><\/p>\n<p>Lipson, H. (March, 2007). Building self-aware robots. Retrieved from <a href=\"http:\/\/www.ted.com\/talks\/hod_lipson_builds_self_aware_robots\">http:\/\/www.ted.com\/talks\/hod_lipson_builds_self_aware_robots<\/a><\/p>\n<p>Sankar, S. (June, 2012). The rise of human-computer cooperation. Retrieved from <a href=\"http:\/\/www.ted.com\/talks\/shyam_sankar_the_rise_of_human_computer_cooperation\">http:\/\/www.ted.com\/talks\/shyam_sankar_the_rise_of_human_computer_cooperation<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Today\u2019s thoughts come from a binge of watching TED Talks about robots. Some are filled with dire predictions and others are more hopeful. Some seem innocent enough but imply that the researcher hasn\u2019t stopped to think about what the robot might do, or be used for, before developing the robot. 
A great deal of the [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[5,6],"tags":[],"_links":{"self":[{"href":"https:\/\/futureoftesting.net\/index.php?rest_route=\/wp\/v2\/posts\/68"}],"collection":[{"href":"https:\/\/futureoftesting.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/futureoftesting.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/futureoftesting.net\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/futureoftesting.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=68"}],"version-history":[{"count":3,"href":"https:\/\/futureoftesting.net\/index.php?rest_route=\/wp\/v2\/posts\/68\/revisions"}],"predecessor-version":[{"id":71,"href":"https:\/\/futureoftesting.net\/index.php?rest_route=\/wp\/v2\/posts\/68\/revisions\/71"}],"wp:attachment":[{"href":"https:\/\/futureoftesting.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=68"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/futureoftesting.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=68"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/futureoftesting.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=68"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}