Some very smart people have publicly voiced concerns about robots gaining some level of intelligence, or awareness, or whatever. Then again, every new technology has had detractors who thought it was going to destroy civilization.
It did give me pause, though, when I read some articles about the problems of programming self-driving cars. It turns out that safely driving the car is the easy part.
Ethical decisions are harder. I watched one video where a large truck was blocking the "in" lane of a parking lot. A human driver would simply ignore the law that requires us to stay in our own lane and go around it. The Google self-driving car sat there waiting for its assigned lane to clear, which could have taken hours.
Can you program a robot to know which laws are OK to ignore, and when?
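You could imagine encoding a specific exception as an explicit rule. Here is a deliberately toy sketch of what that might look like; every name and threshold is invented for illustration, and nothing like this is claimed about how any real self-driving system works.

```python
# Toy rule with made-up thresholds: when is it acceptable to cross
# the center line to get around a blocked lane?

def may_cross_center_line(lane_blocked_seconds, oncoming_clear_meters):
    """Allow a brief detour into the oncoming lane only if our lane
    has been blocked for a while and the oncoming lane is clearly empty."""
    BLOCKED_PATIENCE_S = 30      # how long to wait before bending the rule
    REQUIRED_CLEARANCE_M = 100   # how much empty road we need to see ahead
    return (lane_blocked_seconds > BLOCKED_PATIENCE_S
            and oncoming_clear_meters > REQUIRED_CLEARANCE_M)

print(may_cross_center_line(lane_blocked_seconds=45, oncoming_clear_meters=150))
```

The hard part isn't writing one rule like this. It's anticipating every situation where bending a law is the right call, and every one where it isn't.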
There are bigger dilemmas, such as choosing between a head-on collision that kills your own passengers and avoiding it by plowing into a crowd of bystanders. Or one bystander. Where do you draw the line?
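To make that question concrete, here is another toy sketch, with every number and weight made up for illustration. Whatever policy you pick, someone has to type the line into the code as an explicit comparison.

```python
# Toy illustration only: a made-up "harm score" for each maneuver.
# The point is that any decision rule eventually reduces to an
# explicit, programmer-chosen number.

def expected_harm(passengers_at_risk, bystanders_at_risk,
                  passenger_weight=1.0, bystander_weight=1.0):
    """Return a single harm number for a candidate maneuver."""
    return (passenger_weight * passengers_at_risk
            + bystander_weight * bystanders_at_risk)

def choose_maneuver(maneuvers):
    """Pick the maneuver with the lowest expected harm.

    Each maneuver is (name, passengers_at_risk, bystanders_at_risk).
    """
    return min(maneuvers, key=lambda m: expected_harm(m[1], m[2]))

options = [
    ("brake and stay in lane", 2, 0),   # head-on collision risks 2 passengers
    ("swerve onto sidewalk",   0, 1),   # avoids it but risks 1 bystander
]

print(choose_maneuver(options)[0])
```

Change bystander_weight to 2.0 and the answer flips. That weight is the line, and a person has to choose it.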
Interesting stuff. We humans have never resolved these dilemmas for ourselves. Maybe robots will.