Although far from human level in many respects, artificially intelligent (AI) systems and autonomous agents of greatly increasing sophistication are entering society in the form of, for example, automated trading, autonomous vehicles, robots, and autonomous weapons.
Such systems are beginning to make "decisions" that could save or cost human lives. For example:
Recently, an industrial robot in Germany, through a programming error, fatally injured a factory worker.
Autonomous vehicles are likely to save many lives compared to human drivers, but could in principle malfunction, or in rare cases be forced to "choose" to injure one person in order to save others (a real-life version of the philosophical trolley problem).
Autonomous weapons engineered to choose and engage targets without human intervention exist, and although they are (presently, formally) eschewed by most militaries, they seem likely to be deployed in coming years unless prevented by international agreement.
By March 1, 2016, will one of the top 25 news outlets by media traffic publish a story reporting that a "robot", "autonomous" system, or "AI" system, through an error, a "choice", or a failure to act appropriately, has directly caused physical harm to come to a human?