A common suggestion on the topic of AI ethics is to implement the Three Laws of Robotics that Isaac Asimov explored in his science fiction stories (e.g. in "I, Robot", which has just come out as a movie). But are these laws really safe? As Asimov's own stories make clear, they can be misinterpreted in many ways. On the 3 Laws Unsafe website, recently launched by the Singularity Institute, you'll find several articles examining possible flaws in the three laws in more detail. I think that as AI progresses, it will become extremely important to address safety issues in less simplistic ways. Any opinions on this?