So now that they have robots blowing people up, they are thinking about putting ethics into the software.
"Smart missiles, rolling robots, and flying drones currently controlled by humans, are being used on the battlefield more every day. But what happens when humans are taken out of the loop, and robots are left to make decisions, like who to kill or what to bomb, on their own? Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing an "ethical governor," a package of software and hardware that tells robots when and what to fire. His book on the subject, "Governing Lethal Behavior in Autonomous Robots," comes out this month. "
Robots are computers, very fancy computers. Computers are good at copying things, crunching numbers, and playing card games. Thinking is not what computers are best at, but I guess we will tell a robot the same thing we tell a soldier:
"We tell soldiers what is right and wrong," said Arkin. "We don't allow soldiers to develop ethics on their own."
Didn't any of these guys read Asimov when they were growing up? Or at least see the movie I, Robot? At least this guy did: Asimov's Laws of Robotics Are Total BS
Here are some of the robots, a MAARS and a Terminator: