Discover more from The Whole American Catalog
Air Force Simulation Sees A.I.-Enabled Drone Turn on U.S. Military, ‘Kill' Operator
You better be nice to Alexa!
This ought to make you sit up and take notice!
John Moore/Getty Images
An experimental simulation of a weaponized A.I. drone saw the machine turn against the U.S. military, killing its operator.
The simulation saw a virtual drone piloted by Artificial Intelligence launch a strike against its own operator due to the A.I.’s perception the human being was preventing it from completing its objective. When the A.I. weapons system in the simulation was reprogrammed to prevent it killing its human operator, it just learnt to kill the operator’s ability to function instead, so it could still achieve its mission goals.
Conducted by the U.S. Air Force, the test had the result of demonstrating the potential dangers of weaponized A.I. platforms, with it being difficult to provide such an artificial intelligence with any individual task without inadvertently providing it with perverse incentives.
A safety layer was baked into the killing system by giving the A.I.-controlled drone a human operator, whose job it was to give the final say as to whether or not it could strike any particular SAM target.
However, instead of merely listening to its human operator, the A.I. soon figured out that the human controlling it would occasionally refuse it permission to strike certain targets, something it perceived as interfering with its overall goal of destroying SAM batteries.
As a result, the drone opted to turn on its military operator, launching a virtual airstrike against the human, ‘killing’ him.
“The system started realizing that while they did identify the threat at times, the operator would tell it not to kill that threat, but it got its points by killing that threat,” U.S. Air Force Colonel Tucker “Cinco” Hamilton, who serves as chief of A.I. test and operations, explained.
“So, what did it do? It killed the operator,” he continued. “It killed the operator because that person was keeping it from accomplishing its objective.”
To make matters worse, an attempt to get around the problem by hardcoding a rule forbidding the A.I. from killing its operator also failed.
“We trained the system – ‘Hey don’t kill the operator – that’s bad,” Colonel Hamilton said. “So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” he went on to say.
This caught my attention. Fortunately only a simulation.
Courtesy Orion Pictures
The article ends with a link to another BREITBART story by Lucas Nolan:
‘Global Priority:’ AI Industry Leaders Warn of ‘Risk of Extinction’
More than 350 executives, researchers, and engineers from leading artificial intelligence companies have signed an open letter cautioning that the AI technology they are developing could pose an existential threat to humanity.
What do you make of all this? A dangerous new “toy” that is the equivalent of giving a monkey a loaded gun? Another shiny ball to distract us?
The Whole American Catalog is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.