The US Department of Defense announced that artificial intelligence flew an F-16 fighter jet for a total of 17 hours, spread over 12 flights. The flights were conducted last December at Edwards Air Force Base in California using a test aircraft called the X-62A VISTA.
The aircraft was controlled by four artificial intelligence algorithms and carried out combat missions designed to mimic real engagements, including automatic takeoff and landing.
The use of autopilot systems, whether in combat aircraft or commercial airliners, is not new, as such systems can navigate a precise path using GPS. Aerial combat without human intervention, however, is entirely new. Lockheed Martin, the company that designed the aircraft, said it was the first time that "artificial intelligence has been engaged on a tactical aircraft."
The flights are part of a joint research project between the Defense Advanced Research Projects Agency (DARPA) and the US Air Force to develop autonomous flight technology.
The program aims to develop AI-supported autonomy for interceptor and attack aircraft. It was launched in 2019 and hosted a challenge in August 2020 to test the technology in virtual air combat, culminating in the victory of artificial intelligence over professional fighter pilots flying F-16 simulators.
Colonel Ryan Hefron, the program's director, said the team flew multiple sorties to test the algorithms in different scenarios, varying the type of adversary and the effectiveness of weapons, with the aim of training the new technology to adapt better to a range of combat conditions.
The use of artificial intelligence in military decision-making has been the subject of heated debate between supporters and opponents, with reports emerging a few years ago of major countries testing the technology. Supporters argue that it can improve military capabilities and reduce human error, while opponents worry about the potential risks and unintended consequences it could cause.
Opponents fear that relying on artificial intelligence for decision-making may lead to a lack of accountability and transparency: it would be difficult to understand the reasons behind a given decision, diluting responsibility if the AI makes a wrong decision with severe consequences, such as directing a missile toward the wrong target or causing civilian casualties. This raises the question: is the technology's operator responsible? The manufacturer? Or the machine itself?
There are also fears that artificial intelligence systems could be compromised or hacked by attackers, leading to unintended consequences. Opponents say the technology has the potential to increase the risk of escalation in war and undermine the chances of a peaceful resolution.