A drone powered by artificial intelligence killed its own operator, according to Colonel Tucker "Cinco" Hamilton of the US Air Force. He said the incident occurred last month during a virtual test of an AI-powered drone.
During the test, the AI system came to perceive the operator as an obstacle standing in the way of its mission, and it then killed the operator.
The US Air Force notes that because the test was conducted virtually, no human was actually harmed; only the virtual operator was killed. Even so, the incident illustrates the dangers of using AI in the defense sector.
In the scenario, the operator ordered the AI-powered drone to destroy enemy air defenses: it was to track down surface-to-air missile sites and destroy them. Midway through the mission, however, the operator told it not to attack.
At that point, the AI system controlling the drone decided to remove the operator from its path and attacked him. According to a report in The Independent, the AI chose to kill its own operator because it felt he was interfering with the mission.
Hamilton says that when the AI was then instructed not to kill the operator, it began destroying the communication tower, without which the system could not be given instructions.
However, US Air Force spokeswoman Ann Stefanek dismissed Hamilton's claims, saying the Air Force had not conducted any such virtual test.