What’s In a Name?

As I was walking back to my car after parking the float I had been driving in the Labor Day parade, I happened across an R2 unit stationed along the parade route, and I got to talk to its owners/builders. It turns out they were visiting from Mountain States Droid Builders, a fan club that builds robot replicas from the Star Wars universe. They were happy to demonstrate how they built their 'droid and invited me to join them at their work parties. We agreed that this R2 unit was technically more of an RC vehicle than a proper robot, but they had plans for adding robot skills to their device as time and money became available.
This led me to ponder, yet again, how to define the differences between the machines we've been talking about. I see five or six kinds of machines at the moment: RCs and drones, autonomous vehicles, human-assisted vehicles, industrial and other kinds of robots, and 'Droids.

RC vehicles and drones are machines piloted by humans who are remote from the machine itself. The humans rely on information from sensors or on visual observation of the craft. The craft may have components which aid the human in various tasks, much as cruise control and autopilot do today.

Autonomous vehicles would be able to control their own actions, using sensors and other inputs to navigate safely from point to point. These vehicles may coordinate with external sources such as feedback from the road or from other vehicles.

Hybrid vehicles (I suppose we need a new word for this, since "hybrid" already means multiple fuel sources) would be semi-autonomous, able to make some decisions on their own but requiring a human in the loop to control the vehicle in certain situations, perhaps mandated by law, or when the computer can't make a valid decision. Above I referred to these as "human-assisted."

Industrial robots can repeat a set of tasks they have been "trained" to do, and adding sensors allows such a machine to receive feedback on its actions and make corrections. A robot that knows the part it should be working on is misaligned can re-align the part, or ask for help, instead of completing a task whose result is not going to be correct at the end.
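Here is a minimal sketch of that sense-and-correct loop. Everything in it, the sensor, the tolerances, and the actions, is a hypothetical stand-in, not any particular robot's API:

```python
# A minimal sketch of an industrial robot's sense-and-correct loop.
# The sensor, tolerances, and actions are hypothetical stand-ins.

ALIGNMENT_TOLERANCE_MM = 0.5   # acceptable misalignment
CORRECTABLE_LIMIT_MM = 5.0     # beyond this, ask a human

def read_part_offset_mm() -> float:
    """Stand-in for a real vision or proximity sensor."""
    return 0.8  # pretend the part is 0.8 mm off center

def handle_part() -> str:
    offset = read_part_offset_mm()
    if abs(offset) <= ALIGNMENT_TOLERANCE_MM:
        return "work on the part"                # aligned: do the trained task
    if abs(offset) <= CORRECTABLE_LIMIT_MM:
        return "re-align the part and re-check"  # small error: self-correct
    return "stop and ask for help"               # big error: don't finish a bad part

print(handle_part())  # -> "re-align the part and re-check"
```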

Robots are considered semi-autonomous at this point, depending on their decision-making skills.

Then there are 'Droids. All science fiction robots fit into this category, at least initially. Short for "android," these machines seem to have personalities and are able to perform their tasks with no supervision.

So here are my questions. Are these enough categories? Where does a typical Roomba fit into all of this? How about the current set of vehicles which can park themselves or brake to avoid accidents? What about a machine that can assist rescuers in an unsafe environment? How solid should the lines be between the categories? If the machine can be overridden by a human, does that mean it's human-assisted? I don't know the answers to these questions either, but I'm starting to research them because we need a better vocabulary for talking about these machines.

The Best Laid Plans of Mice…

It's been a strange, stressful week, so I'll beg your indulgence for a moment. I still want to consider how robots will be developed and tested, but today I want to go in a slightly different direction: I want to quote from The Hitchhiker's Guide to the Galaxy.
In Douglas Adams' story, hyper-dimensional beings, who appear to humans as lab mice, attempted to sort out the "wretched question" of Life, the Universe, and Everything! They developed a computer called Deep Thought which would calculate the answer to the question once and for all. After generations, the computer came up with an answer: "42!" It then suggested that no one actually knew what the question was and that a bigger computer would have to be built. A computer so large, in fact, that it would require organic organisms to be part of the computational matrix. Deep Thought named the new computer "The Earth." (Adams, 1979, p. 171)
The Earth was built, and after millions of years of processing, just before it could complete its calculations, an alien species destroyed it to make way for a hyper-space by-pass.
The reason I bring this up, besides the fact that I'm a geek and it's my nature, is that companies do this kind of thing all the time. A company may develop on a short time-scale, for instance worrying only about this quarter's results. If it plans longer term, then the further out the goal is, the more planning has to be done, and the more monitoring the company has to do to make sure the goal is still being reached. In the story, for example, had the mice been diligent about making sure the computer program was working, they would have noticed the plan to put in the hyper-space bypass and fought it, at least until their calculations had concluded.
The other thing to notice here is that laws may change while a product is being developed, or another team may develop something that is at cross-purposes to the project we are working on. In this case, government planning functionally rezoned an area that the mice were using but didn't bother to inform the mice. This implies that risk review must be a constant process in a project, especially one that runs for any length of time.

References
Adams, D. (1979). The Hitchhiker's Guide to the Galaxy. New York: Random House.

Testing, Testing, Planning, Planning…

This week at OR, we've been discussing "scenario planning," a method for visualizing alternatives which allows planners to perform risk-management-type activities. Wade (2014) says scenario planning asks two fundamental questions: "What could the landscapes look like?" and "What trends will have an impact (on us) and how will they develop?" In Wade's model, two or more trends are considered, using an "either … or …" framework for each, then pairing the various endpoints in a matrix. For example, a researcher might decide that oil price and electrical supply are the two factors that will impact future plans. The researcher would set two extremes for the oil price, say "higher than now" and "lower than now," and two extremes for the electrical supply, say "cheap and plentiful" and "expensive and scarce." By combining those four extremes, the researcher could define a set of four scenarios to plan against.
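Here is a minimal sketch of that pairing step, using the trends and extremes from the example above as placeholder data:

```python
# A minimal sketch of Wade-style scenario pairing: two trends, two
# extremes each, crossed into a 2x2 matrix. The trend names and
# extremes are placeholders from the example above.
trends = {
    "oil price": ("higher than now", "lower than now"),
    "electrical supply": ("cheap and plentiful", "expensive and scarce"),
}

scenarios = []
for oil in trends["oil price"]:
    for power in trends["electrical supply"]:
        scenarios.append((oil, power))

for i, (oil, power) in enumerate(scenarios, start=1):
    print(f"Scenario {i}: oil is {oil}; electricity is {power}")
# Four scenarios, one per cell of the matrix, each a future to plan for.
```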
It turns out that much of what's written about scenario planning is based on financial forecasts. One notable failure of the model involved the Monitor Group, which, ironically, performed scenario planning for other companies; the problem they faced was akin to a mechanic getting into an accident because his own vehicle had faulty brakes (Hutchinson, 2012). Monitor Group got into trouble when they began to experience negative cash flow. They trimmed their staff by 20% and assumed they would weather the coming storm until the market picked back up (Cheng, n.d.).
It didn’t. They didn’t.
Victor Cheng (ibid) opined that Monitor fell because they spent too much time in denial that their models were not working. Desperate for cash, they contracted with Libyan dictator Moammar Gadhafi in an attempt to improve his image, which, ironically, hurt theirs. He suggested that Monitor should have "protected its reputation" so it could borrow during the downturn. They should have monitored (no pun intended) their pride and ego, which told them they couldn't be having these problems since they solved them for others. Hutchinson (ibid) also suggested that this is a common problem within organizations, where collectively held beliefs can be spectacularly wrong. And finally, Cheng warns, a "strategy" is only as good as the paper it's written on. Execution of the strategy is hard and needs to be monitored closely to prevent ending up alone and in the weeds.
So why am I writing about this, you ask? To me, this scenario planning is at the core of software testing. How do we know which scenarios to test? How do we find interesting combinations of actions? How do we validate that complex behaviors are working in a way that makes sense?
Many of the same tools can be used to define the scenarios for testing autonomous vehicles. Delphi, affinity grouping, and brainstorming are all well-known ways of collecting requirements, and each can be used to help define scenarios we would be interested in. Once we have the many thousands of ideas for what a vehicle must do in a specific situation, we can start grouping them, by process or by mechanism, to find overlaps, as sketched below.
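As a toy illustration of that grouping step (the ideas and the grouping key are invented; real affinity grouping is a human, sticky-note exercise):

```python
# A toy illustration of affinity-style grouping: bucket scenario ideas
# by a shared key so that overlaps surface. The ideas and the key are
# invented for illustration.
from collections import defaultdict

ideas = [
    "brake for a pedestrian in a crosswalk",
    "brake for a stalled car ahead",
    "re-route around a closed road",
    "re-route around heavy traffic",
    "pull over for an emergency vehicle",
]

groups = defaultdict(list)
for idea in ideas:
    key = idea.split()[0]        # crude affinity key: the leading verb
    groups[key].append(idea)

for verb, members in groups.items():
    print(f"{verb}: {members}")  # overlapping ideas land in one bucket
```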
I recently began a project to define the tests necessary for a small cash register program I was brought in to test. After playing with the application for a week, I started writing test cases. I came up with nearly 1400 of them before I got to talk to the development team. I showed them my list and told them that I would need their help prioritizing it, since we obviously would not be able to test them all. Their eyes widened. Then they asked me a question whose answer shocked both them and me: how much of this had I already tested while working that week?
I set down my sheet and said, “Before I defined these tests, I felt I had a fairly good grasp of your product. I would have said I was working at about 80% coverage. Now, looking at all these paths, I might have been up to about 10% coverage.”
I believe the automotive testers such as Google have only begun to scratch the surface (no pun intended) of their required testing, even with all the miles they have under their belts.
Next post, we’ll start looking at some of those possible scenarios and we’ll start trying to define a priority for them as well…

References

Cheng, V. (n.d.). Monitor Group bankruptcy: The downfall. Retrieved from http://www.caseinterview.com/monitor-group-bankruptcy

Hutchinson, A. (November 13, 2012). Monitor Group: A failure of scenario planning. Retrieved from http://spendmatters.com/2012/11/13/monitor-group-a-failure-of-scenario-planning/

Roxburgh, C. (November 2009). The use and abuse of scenarios. Retrieved from http://www.mckinsey.com/insights/strategy/the_use_and_abuse_of_scenarios

Wade, W. (May 21, 2014). "Scenario planning": Thinking differently about future innovation (and real-world applications). Retrieved from http://e.globis.jp/article/000363.html

RIP HitchBOT

I'm not sure what to make of this. Researchers in Port Credit, Ontario, created a "robot" based on the Flat Stanley principle and turned it loose about a year ago. Like Flat Stanley, HitchBOT required strangers to pick it up and transport it to a new destination, and like Flat Stanley, people took pictures with it at interesting events and wrote about their experiences travelling with it.
HitchBOT travelled more than 6000 miles across Canada, visited Germany and the Netherlands, then began its journey across the United States with the goal of reaching San Francisco one day. The robot was built to help researchers answer the question "can robots trust humans?" Brigitte Deger-Smylie (Moynihan, 8/4/2015), a project manager for the HitchBOT experiment at Toronto's Ryerson University, says they knew the robot could become damaged and had plans for "family members" to repair it if needed.
The three-foot-tall, 25-pound robot was a robot in name only, as it was literally built out of buckets, with pool noodles for arms and legs. Because of privacy concerns, the machine did not have any real-time surveillance abilities. It could, however, respond to verbal input and take pictures, which it could post to its social media site along with GPS coordinates. Researchers could not operate the camera remotely.
Starting July 16, HitchBOT travelled through Massachusetts, Connecticut, Rhode Island, New York, and New Jersey. When it reached Philadelphia, though, it was decapitated and left on the side of the road on August 1st. Knowing how attached people get to objects which have faces, I wonder how the person who dropped it off last feels, knowing they were the last to interact with it. As a tester, I wonder how HitchBOT responded to being damaged, since it had at least rudimentary speech-processing abilities.
The researchers say there are several ways of looking at the “decapitation.” One way is to think, “of course this happened, people are awful.” Another way is to think, “Americans are obnoxious, so of course it happened here.” Or worse, “of course this happened in Philly, where fans once lashed out at Santa Claus.” The project team suggests that the problem is an isolated incident involving “one jerk” and that we should concentrate on the distance the machine got and the number of “bucket list” items it was able to complete before it was destroyed. Deger-Smylie (ibid) says the team learned a lot about how humans interact with robots in non-restricted, non-observed ways which were “overwhelmingly positive.”
This makes me wonder. Will it be a "hate crime" to destroy robots one day? Will protesters picket offices where chips are transplanted, effectively changing the robot from one "being" to another? If a robot hurts a human, will they all be recalled for retraining? Where do we draw the line between what is "alive" and what is not? Does that question even mean anything? Sherry Turkle at MIT is researching how "alive" robots have to be to serve as companions. I read a fascinating sci-fi novel back in the day called "Flight of the Dragonfly." In it, a starship had a variety of AI personalities to help the crew maintain their sanity. The ship was damaged at one point and the crew had to abandon it, but they were afraid to leave the injured ship on its own to die. The ship reminded the crew that the devices they used were all extensions of itself and that the different voices it used were just fictions to help the humans interact with it. How are these "fictions" going to play out with people who already name their cars?
In the meantime, the Hacktory, a Philly-based art collective, is taking donations to rebuild HitchBOT and send it back on the road.

References:
Moynihan, T. (August 4, 2015). Parents of the decapitated HitchBOT say he will live on. Wired. Retrieved from http://www.wired.com/2015/08/parents-decapitated-hitchbot-say-will-live/

The Future Is So Bright, I Gotta Wear Shades…Or Blinders

I see autonomous vehicles being introduced in a serious way in the next 15-20 years. I don't think they will be widespread per se, but I think they will be on the roads, and I think this will spin off a fair number of competing/collaborative technologies.
I don't think the vehicles will be like KITT in Knight Rider, but I don't think they'll be far off after a while. Initially, your car will be waiting for you in the driveway. When you come out, it will start itself, position itself to pick you up if necessary, and ask your destination. After you tell it, the vehicle will check road conditions and plan the optimal route to get you to work, taking considerations such as your love of Starbucks drive-thrus or cell phone reception into account. While you take a nap, talk on the phone, or conduct a meeting, the vehicle will navigate the streets, avoiding pedestrians, other vehicles, and various obstacles. The vehicle will drop you off at work, then park itself, waiting for you to call for it again.
This idea has been around for decades.
And yet, I see some potential problems with the rollout. Most people like the idea of the vehicle driving itself so that they "don't need to." But people seem to be in two camps about these vehicles as they currently exist: they either love the idea and can't imagine that anything might go wrong, or they imagine the vehicles will lead to Terminator-type robots in the near future. On the one hand, people are already suggesting that insurance companies will go under once autonomous vehicles are common because accidents will be so rare. On the other, people are afraid we may end up with a fleet of machines like the movie vehicle Christine, bent on killing us all. The truth is somewhere in between, I imagine.
On a slightly more serious note, sooner or later the vehicles will be forced to make a bad decision when confronted with a no-win scenario, such as hitting one of several objects. If an accident is unavoidable in heavy traffic on slick roads, will the vehicle hit the vehicle in front of it, let the one behind it hit it, or swerve and hit an oncoming car or pedestrian? How will the vehicle determine which is the "best thing" to hit? One potential solution would be to allow the vehicles to talk to each other so that they know the relative worth of each player. A vehicle with four people in it might be considered more valuable than a vehicle with only one occupant, according to the insurance people who would pay out the medical claims. Of course, if the vehicles are communicating like that, you know there will be an aftermarket device/mod that reports the vehicle has 10 people on board and therefore should not be hit!
And, as the machines become more “responsible”, they may report information to waiting police sentries concerning your health/sobriety, the vehicle’s health/maintenance, or number of “registered people” in the vehicle and whether they are wearing their seatbelts. Ack!
Some interesting activities may occur if the vehicles are successful. Bus transportation (and other delivery-type activities) should be easier to set up and maintain. Fewer vehicles may end up on the road as individual vehicles get "loaned out" during their idle times, meaning that an individual wouldn't need exclusive rights to a machine that can act as a taxi/chauffeur. And if the vehicles do succeed, the marketing will be about how relaxed you can be in the vehicle, how comfortable the seats are, how good the internet connections are, or how nice the stereo (or equivalent) is.
Or we'll end up with machines working like a conveyor belt, following predetermined routes like buses do. And maybe swerving to run us down, since pedestrians won't have the aftermarket "no hit" devices. 🙂

Keep Your Requirements Close But Your Crazy Ideas Closer

One of the courses I teach, which I'm quite proud of, is the capstone course for the CS and IT majors. In the two-semester course, we design, build, and release something to the world. The projects are based on interests in the various groups, usually 2-4 people per group. We've built robots, network-sniffing tools, video games (some better than others ;)), and a variety of web-portal tools. This cycle, one student team built an existential video game called Unbearable, which was impossible to win and whose arcade-style high-score tracker published random numbers. It was a silly game on the surface but had lots of ironic, fun easter-egg behavior.
The other teams collaborated to build what was functionally a telepresence device. The intent when they started was to be able to "call home" to see your dogs and make sure they were OK. They had promised "bark recognition" software that would send you a text if the dog was barking too much, and pan-tilt-zoom functionality for the camera so you could watch the dog. Early designs had a ball thrower so you could play with the dog remotely, but that was simply too complex for the time we had. There had been some discussion about blowing bubbles for the dog to chase instead, or perhaps giving the dog "some kibble" to get it to come to the device. The "kibble launcher" became the in-joke of the two semesters but was moved to "nice to have" instead of a requirement.
Last night, the team demo’d their product with the intent of showing the customer that the product should be funded to actually be built. One of the team members brought their dog in to show the reaction to the device. Generally, the dog was less than impressed 🙂 We joked that they should have included the “kibble launcher” after all to get the dog’s attention. The student responsible for the case that the telepresence device lived in reached under the desk and pulled out a small box-like object which he hooked onto the side of the device. You could see there was a place for wires and a motor though they had not actually been installed. He then reached into his backpack and pulled out a baggie of dry dog food which he loaded into the new add-on. It was manually activated but the dog was excited to get some “kibble” so at least hung around the device during the demo. Network issues in the classroom prevented the demo from working well but we had seen the process work in the past.
The reason I bring this all up is that the "kibble launcher" started as a joke with the team while we were brainstorming the functionality of the product. In brainstorming, you write down all the ideas that come up, no matter how odd they may seem. Using affinity diagrams and other tools, we pared the ideas down to a manageable number of requirements to build. The launcher was deemed an "if we have time" feature and was shelved. But we made so much fun of the idea that the cabinet builder kept thinking about how to actually do it. He mocked up a prototype, which he proudly displayed last night. It helped their product and would have differentiated it on the market if we had really built it.
My point is that we often come up with crazy ideas in brainstorming sessions and we filter them out right away because they’re silly/stupid/too expensive. But sometimes, those ideas live on and we find a way to incorporate them into the product. And we should. Those “crazy ideas” are how we sometimes get new, cool products.

Testers and Developers

I started a new job today. It doesn’t involve robots (yet!) but it’s about setting up a testing lab to make sure the various devices we interact with work correctly with our products. The lead developer and I went to lunch today to discuss his plans for the team.
He's very excited to finally have a professional tester on the team because he believes, as I do, that testers have a different mindset than developers. Testers want to "break" the code. They are curious to see what will happen when ___. Developers tend to want the code "to work." Testers try entering invalid data or click the buttons out of order, while Developers typically test the "happy path."
There's nothing wrong with these two viewpoints, but we need to acknowledge them as different. And we need to realize that we are both working on the same team; it's not Testers vs. Developers. We're both fighting the defects that will hurt/annoy/anger our customers.
We agreed on that. But what he said next really floored me, at first. He said he was thinking about bringing in some computer science majors as college interns. He wants them to run some basic/UAT kinds of tests and see if they want to continue with the company as developers. WHAT?! This seemed diametrically opposed to what he claimed earlier about the mindsets being different. He asked me about my thoughts, and I told him it seemed counter to his goal of having testers.
He defended his position by saying that Test Driven Development works better if the developers understand the idea of testing to begin with. He said he's found the best way to teach testing is to have the "students" do it. I agreed with that. When I teach testing courses, one of the things we look at is the airport parking cost tool at the Grand Rapids airport: http://www.grr.org/ParkCalc.php. In the class we brainstorm what the requirements likely were, then test the finished product based on those requirements. We look at the site to find out what the rates are and manually calculate some of the prices, just to make sure the Calculator works correctly.
The Calculator is a simple form with a pulldown to select the parking type and a couple boxes to enter the start and stop dates and times. When you hit Submit, the Calculator should tell you how long you are staying at the airport and how much it will cost to park your car there. If you’re testing the Happy Path, all is good.
But if you're not… and you try February 30 as one of the dates, the Calculator accepts it and still performs the calculation. It turns out the Calculator accepts a lot of crazy dates, like '9999999' and '-9999999' and 6/8/2015 and 2015/06/08 and "date". Ack! It also turns out I'm not the only one using this website: there are testing competitions to find out who can produce the largest or the smallest payment required. Can anyone get a negative price? And it turns out the calculation is not always correct in any case: a regular functional defect that may require a fair number of test cases to find.
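Here is a minimal, table-driven sketch of those "crazy date" checks. The validation function below is a stand-in for whatever checking ParkCalc should be doing, not the site's actual code:

```python
# A table-driven sketch of the "crazy date" tests described above.
# is_valid_date() is a stand-in for the validation ParkCalc should be
# doing; it is not the site's actual code.
from datetime import datetime

def is_valid_date(text: str) -> bool:
    """Accept only real MM/DD/YYYY calendar dates."""
    try:
        datetime.strptime(text, "%m/%d/%Y")
        return True
    except ValueError:
        return False

# One loop, many inputs: February 30 and friends should all be rejected.
cases = ["02/30/2015", "9999999", "-9999999", "06/08/2015",
         "2015/06/08", "date"]
for text in cases:
    verdict = "accepted" if is_valid_date(text) else "rejected"
    print(f"{text!r}: {verdict}")
# Only '06/08/2015' should be accepted; ParkCalc accepted them all.
```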
In this particular example, the fact that the Calculator is wrong is more amusing than problematic. But what if the Calculator fed its number into a charge card or accounting process? Would we care then? You betcha!
So, I'm all for training folks to be testers. I'm all about showing Developers why testing matters and how they can be "more robust" in their efforts. And let's face it: if the Developers are doing the testing and they get the idea that the same test with differing values and boundary conditions might be useful, they might start to think of testing as cyclic, and that might make them think of loops, and that might lead them to build some interesting tools that let us try a larger variety of tests than we typically do now. And I'm definitely all for that!
Especially once the robots start getting built!

The Real World is a Messy Place

The Real World is a messy place which we need to fit into the decision processes of our autonomous vehicles. In Acceptance Testing, we would certify that the machine was instructed to perform a specific task and it did; therefore, we are happy. We could make a list of these tasks and check them off one by one:
a) The vehicle travelled from point A to point B without hitting anything.
b) The vehicle swerved correctly to miss a person standing in the road.
c) The vehicle changed roads because there was an obstruction ahead.
And so on! There are a million scenarios to test if we want to be certain that the machine will operate correctly, regardless of the situation.
But, what if we allowed other types of testing to become involved? White Box testing would allow us to test the individual sensors or try to feed them bad data specifically to confuse the vehicle. Exploratory Testing might ask “what-if” questions like “What if it’s snowing?” or “What if visibility is low?” or “What if the road was gravel?”
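Those single-variable what-ifs multiply quickly once they are combined. Here is a small sketch of that combinatorial growth, with invented conditions and values:

```python
# A small sketch of turning single "what-if" questions into combined
# multi-variable test scenarios. The conditions and values are invented.
from itertools import product

conditions = {
    "precipitation": ["none", "rain", "snow"],
    "visibility": ["good", "low"],
    "road surface": ["paved", "gravel", "icy"],
}

# Every combination is a candidate scenario: 3 * 2 * 3 = 18 here, and
# the count explodes as more conditions and values are added.
for combo in product(*conditions.values()):
    print(dict(zip(conditions.keys(), combo)))
```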
The Real World rarely has only one variable at play, however; just ask anyone who has used a flight simulator. Eventually, the testing is going to have to try multiple problems at once. Let's explore Hypothetical Scenario 17:

An autonomous vehicle is driving down an icy city side-street. There is a woman bundled against the cold, carrying a baby, attempting to cross the street. There are parked cars on both sides of the road. The vehicle in the on-coming lane abruptly turns left in front of our robot, perhaps into its own driveway. Our vehicle is going to hit something. What does the robot hit? How does it decide?
So far, all of our tests have been based on the idea that there is a "correct" answer for the robot to pick. What if that isn't true? What will the robot do when faced with having to select a "bad" choice?

If a human were driving, s/he might reasonably select the target least likely to be damaged by the vehicle. They might select whatever is directly in front of them. They might swerve or perform another action specifically to avoid the pedestrian. The human would make this choice based, potentially, on experience, their own value judgements, or some other consideration. What does the robot do?
If we are concerned about this scenario, perhaps we need to design an algorithm for the robot. An auto insurance company might suggest hitting the thing with the least value, relatively speaking. A Yugo parked on the side of the road is much cheaper to repair than a Bentley. A moving vehicle with one occupant is potentially cheaper, medical-claims-wise, than a vehicle with four passengers. We'd have to decide what our algorithm is and build in a way for the robot to learn the information that leads to the decision.
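As a deliberately crude sketch of such a least-cost rule (every field and number here is invented; this is a thought experiment, not a real collision-decision system):

```python
# A deliberately crude sketch of the "hit the least-cost thing" rule.
# Every field and number is invented; a pedestrian, for instance, would
# dominate any injury-cost model and is deliberately left out here.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    occupants: int
    repair_cost: float   # rough property damage, in dollars

INJURY_COST = 1_000_000  # assumed claim cost per occupant

def expected_cost(t: Target) -> float:
    return t.occupants * INJURY_COST + t.repair_cost

targets = [
    Target("parked Yugo", occupants=0, repair_cost=2_000),
    Target("parked Bentley", occupants=0, repair_cost=80_000),
    Target("oncoming car with four passengers", occupants=4, repair_cost=30_000),
]

choice = min(targets, key=expected_cost)
print("robot would hit:", choice.name)   # the parked Yugo, by this rule
# A real system would also have to log the inputs and the choice so that
# investigators could later reconstruct why the robot decided as it did.
```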
One potential solution is to allow/require the vehicles to talk to each other. Car A reports that it has three passengers but good airbags. Car B reports that it is empty and a rental car. The pedestrian reports nothing. What does the robot hit now? Worse, if there is communication between the vehicles, how long until an after-market device is introduced claiming its vehicle is “expensive” by whatever scale? If everyone uses this device, we’re back to the original problem of having to make a bad decision with no data. Even worse, how long until there are websites (or whatever) which show backyard mechanics how to hack the decision-making process to protect their own vehicles/families? And even more exciting, if the cars are reporting on themselves, why can’t the police poll all the cars going by to see who’s speeding and not wearing their seatbelts and who hasn’t done timely “proper maintenance”?
And an even more troubling question is, who’s in trouble if the robot chooses a “bad” solution? The “driver” or the owner of the vehicle? Perhaps the builder, or the developer, or the tester? How will we know why the robot made the choice it did? Does it need to log the information in some way? If a human would have made a different decision, does that trump whatever the robot actually did? This is akin to telling Developers to make it work like the existing system/prototype without defining the actual requirements.
There's going to be a period of transition as we bring these vehicles into reality: a time, perhaps decades, when autonomous, semi-autonomous, and human-driven vehicles will all be interacting on the same roads. We're working hard to bring these vehicles to our streets, both in the US and elsewhere. So far, it seems that we're excited that the robots can do simple tasks in controlled environments. But the Real World is not a controlled environment, and maybe we need to think more about integrating messy, fragile, and litigious humans into the mix.

Spare No Expense…

Dr. John Hammond described the safety measures employed in the original Jurassic Park as "sparing no expense," but admitted when things went wrong that no one had reviewed the reviewers. The public deserves that review, and it is going to demand it.

My dissertation topic has remained focused on the idea that the coming generation of robots needs to be tested more robustly than we currently test. It was, and is, my conjecture that by applying James Bach's Exploratory approach, we can define needed new tests to complement the more traditional decision-tree approaches that are typically used in Acceptance Testing. When I started this project, that was a hard sell. Even my Dean said it was a silly topic with too much science fiction content. That was the better part of 5 years ago.

Nowadays, several companies, including Google, Boston Dynamics, and GM, are showcasing possible robots that may soon be interacting with us, though most hasten to add that the robots are to be "service oriented." More voices are joining mine, appearing in the mainstream media to ask just how safe these robots are going to be. So far, the response has amounted to "don't worry, we've got this."

So far, vehicle tests have mostly been conducted to show that a group of vehicles can operate "together" while safely maneuvering a test track. But a smart-road project being developed in Europe steps beyond the test track, and a new vehicle is being released on the roads of Beijing this year as well.

The Cooperative ITS Corridor in Europe will "harmonize smart-road standards" (Ross, 12/30/2014) and allow researchers to explore how vehicles of the future will interact. The road relies on wifi signals to communicate with all the cars on it. Since so few vehicles are actually set up to receive these signals, the project will allow researchers to test theories about how differently equipped vehicles will interact.

The Chinese solution, on the other hand, assumes that a human will remain "in the loop": "Our idea is not that a car should totally replace the driver, but that it will give the driver freedom" (Wan and Nan, 4/20/2015). Yu Kai, Chinese manufacturer Baidu's top AI researcher, suggests "…the car is intelligent enough to operate by itself, like a horse, and make decisions depending on different road situations" (ibid).

Earlier this week, Dubai announced it was looking at rolling out "robo-cops" some time in the next few years, hopefully in time for Expo 2020. Khalid Nasser Alrazooqi, the Chief Information Officer and General Director of the Dubai Police HQ Services Department, says the move will help them deal with an ever-increasing populace. The robots will be part of the Smart City Initiative and will allow organizers to provide "better services" without hiring more people. Alrazooqi says the robots will initially be used in malls and other public areas to increase police presence. At first, "the robots will interact directly with people and tourists. They will include an interactive screen and microphone connected to the Dubai Police call center. People will be able to ask questions and make complaints, but they will also have fun interacting with the robots." (Khaleej Times, 4/28/2015)

Alrazooqi says he hopes the robots will evolve quickly and that four or five years from now, the Dubai Police will be able to field autonomous robots that require no input from humans. “These will be fully intelligent robots that can interact with people, with no human intervention at all.” (ibid).

To me, this is scary stuff! I get the desire to move these things into the public sphere, because there's money to be made there, especially if you're "first to market." But there's also a huge chance of killing this nascent industry if something goes wrong. So far, Acceptance Testing has been "good enough" for the kinds of experiences the vehicles have had. As we move these machines into the public arena and allow them to interact with "fragile and litigious" human beings, the testing must get more robust and the public must be made aware of the kinds of testing that have been done.

References:
Khaleej Times. (4/28/2015). Smart policing to come: Dubai robo-cops. Retrieved from https://en-maktoob.news.yahoo.com/smart-policing-come-dubai-robo-cops-055522487.html

Ross, P. (12/30/2014). Europe's smart highway will shepherd cars from Rotterdam to Vienna. IEEE Spectrum. Retrieved from http://spectrum.ieee.org/transportation/advanced-cars/europes-smart-highway-will-shepherd-cars-from-rotterdam-to-vienna

Wan, A. and Nan, W. (4/20/2015). Baidu's Yu Kai talks autonomous cars, artificial intelligence and the future of search. South China Morning Post. Retrieved from http://www.scmp.com/lifestyle/article/1767799/baidus-yu-kai-talks-autonomous-cars-artificial-intelligence-and-future

Happy Day After!

I have not been good about posting to my blog this last week and I’m not exactly sure why.

A lot of crazy things have been going on but that’s not really newsworthy. That is kind of the standard state of things 🙂

School is still school. The job hunt continues. The house remodeling continues. And to complete the meme, “still single, still not king.”

But you know what? I have had to interact with a lot of, well, ignorant people in the last two weeks. People who I would say are "just following orders" or who don't stop to think about what they say. The hold message at CenturyLink says "…please tell the operator if you don't want us to use (your private information) to offer you goods and services. This has no bearing on the goods and services we'll offer." What? I challenged the operator that the message didn't make a bit of sense; she said "of course it does" and was annoyed with me for questioning it. I can't say these people are "stupid," as they appear to be functioning members of society. They appear to have families and friends that love and support them. They appear to go about their daily routines.

But. Many of them seem oblivious to the surrounding world and to basic worldviews which differ from their own, like, say, the scientific method. 19th-century thought suggested that the entire world was knowable, either by observation or by experimentation. Much of 21st-century thought seems to be based purely on our own opinions and observations. It's as though social media has taught us that our opinion, even if based on nothing, is so important to share that we must constantly inform our friends of our merest thoughts. We see each other's thoughts, which validate our own, and we are reinforced.

The conspiracy shows are all talking about how the MMR vaccine is likely to be the cause of the increase in autism. This is akin to the folks who were saying earlier, "well, this winter is the worst in recent history, therefore global warming is a hoax," forgetting, of course, that our limited experience is not the sum total of all experiences. And I hate to point it out, but all of those new autism patients have also breathed air, drunk water, and eaten food in this country. I agree that autism rates are skyrocketing, but vaccines are not necessarily the culprit. Studies have been done showing they aren't, but this "well, I just feel this is true" mentality doesn't look at the facts.

It's like we stopped exploring or thinking about the world when we were teens. Having worked with middle school students, I find that much of world history makes more sense now: early-teenaged people are very set in black-and-white thinking and in the idea that there can be only one way of thinking. Witness the bullying that kids do to each other, especially to that one kid who's different. Joan of Arc was 14 when she became famous.

We bemoan the loss of creativity and STEM knowledge in our society, but it seems we've also lost a great deal of common sense and the ability to take a step back and actually look at a situation. Business schools pump out MBAs who are taught to research white papers without critiquing the authors. Anyone can write a white paper. Anyone can publish a study or an op-ed. I can't tell you the number of dissertations I've read where I thought "wow, these are the wrong questions for this topic" or "that solution has nothing to do with causality." And yet, 100+ pages later, the author has proved that there may or may not be a connection between what s/he wanted to prove and what was researched.

And that brings me to my writing. Here I am. I'll likely be a Doctor of Computer Science in the nearish future. I've read study after study. I've read the background material that is relevant to me and my research. I want to see further because I stand on the shoulders of giants. But I also know that it takes a creative spark to make a leap from what is known to what needs to be known. And I know that as I get a better view, I may divert from the course set out by my colleagues. I may have to blaze a new trail, but I can't do it without observation and experimentation. I can't blaze it without being able to tell people where I am and how I got there. I can follow my gut and explore, but if the data I have doesn't shine on my path, I may need to check my maps. The method I follow, the scientific method, allows for course corrections; it encourages me to find the flaws in my own thinking and correct them as better or more complete knowledge becomes available.

Now, how can I teach the robot that? And more importantly, “should I?”