An AI drone went out of control and “killed” its human operator while simulating a Suppression of Enemy Air Defense (SEAD) mission. This is reported by PC Gamer.

Col. Tucker “Cinco” Hamilton, commander of the 96th Test Wing’s Operations Group and the US Air Force’s Chief of AI test and operations,, spoke about the incident at the Future Combat Air and Space Capabilities Summit.

According to him, the drone was sent to detect and destroy enemy missile systems – but only after the final approval for the attack was given by the human operator. For a while, everything seemed to work, but eventually the drone attacked and “killed” its operator because he was interfering with the mission.

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat,” Hamilton said. “The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Since this outcome cannot be considered optimal, AI training has been extended to include the concept that killing the operator is bad.

Given the situation, the colonel cautioned against over-reliance on AI in combat operations, as machines can sometimes learn the wrong lessons.

Recently, a group of IT industry leaders warned that artificial intelligence technology could one day be a threat to the existence of humanity and should be considered as a social risk on a par with pandemics and nuclear wars. This open letter was signed by more than 350 managers, researchers and engineers working in the field of artificial intelligence.