The U.S. Air Force is developing the XQ-58A Valkyrie, an unmanned stealth aircraft that uses artificial intelligence (AI) to fly and operate itself. The experimental platform could give American forces a major advantage in future conflicts by absorbing combat risk and reducing pilot losses. Handing lethal decision-making power to AI, however, raises ethical concerns.
AI expert Phil Siegel says the technology will be crucial to future defense capabilities but cautions that it must be programmed to act within clearly defined bounds. The Valkyrie completed its first test flight in 2019 and has since demonstrated capabilities such as flying in formation with manned jets.
A key advantage is cost: at roughly $4 million per unit, the Valkyrie is a small fraction of the price of a manned fighter. The aircraft is envisioned as a "loyal wingman" that can identify threats, engage targets, and draw enemy fire away from piloted jets. While this protects pilots, it also transfers lethal decisions to AI systems.
Experts say that defining the right balance between human control and machine autonomy in these systems is crucial. Offensive roles would require more advanced AI than today's defensive applications, and guidelines must be set for cases where sensors malfunction or unpredictable situations arise; one possible form such a safeguard could take is sketched below.
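To make the idea of such safeguards concrete, here is a minimal, purely hypothetical Python sketch of a human-in-the-loop authorization gate with a fail-safe default. The names, thresholds, and rules are illustrative assumptions for the sketch, not details of the Valkyrie's actual software.

```python
# Hypothetical illustration only: a human-in-the-loop authorization gate of the
# kind "safeguards for human control" might take. All identifiers, thresholds,
# and rules here are assumptions, not details of any real weapons system.
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    DISENGAGE = auto()   # fail-safe default: break off the engagement
    TRACK_ONLY = auto()  # observe the target without engaging
    ENGAGE = auto()      # lethal action, never taken autonomously here


@dataclass
class ThreatAssessment:
    target_id: str
    confidence: float      # threat-classifier confidence, 0.0 to 1.0
    sensors_healthy: bool  # False if any sensor self-test failed


def decide(assessment: ThreatAssessment, human_approved: bool) -> Action:
    """Gate lethal actions behind explicit human approval.

    Encodes the two concerns raised above: degrade safely when sensors
    malfunction, and keep a human in the loop for any lethal decision.
    """
    # Sensor malfunction or low confidence: fall back to the safe default.
    if not assessment.sensors_healthy or assessment.confidence < 0.9:
        return Action.DISENGAGE
    # High-confidence threat, but engagement still requires human sign-off.
    if human_approved:
        return Action.ENGAGE
    return Action.TRACK_ONLY


# Example: a confident detection without human approval only tracks the target.
print(decide(ThreatAssessment("T-042", 0.97, True), human_approved=False))
```

The design choice the sketch highlights is that autonomy is bounded by construction: the system's default is the least dangerous action, and lethal force is unreachable without an explicit human decision.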
Although AI autonomy entails risk, many believe these unmanned systems will be necessary to confront top rivals such as China. The appeal lies in using inexpensive, attritable aircraft aggressively without endangering pilots. Ethical concerns remain, however, about granting full lethal authority to AI, and ongoing development will focus on building in safeguards that keep humans in control of AI-driven combat aircraft.