An aviation & planes forum. AviationBanter


Military AI vanquishes human fighter pilot in F-16 simulation. How scared should we be? - The Loyal Wingman drone, pictured above, is controlled by a human fighter pilot who instructs the drone's AI to perform specific tasks.jpg

  #1  
Old August 31st 20, 10:23 PM posted to alt.binaries.pictures.aviation
Miloch
external usenet poster
 
Posts: 24,291
Military AI vanquishes human fighter pilot in F-16 simulation. How scared should we be? - The Loyal Wingman drone, pictured above, is controlled by a human fighter pilot who instructs the drone's AI to perform specific tasks.jpg

https://www.nbcnews.com/think/opinio...ow-ncna1238773

Artificial intelligence can master difficult combat skills at warp speed, but
the Pentagon’s futurists must remain mindful of its limitations and risks.

From the outside, the simulated aerial dogfight the Pentagon held two weeks ago
looked like a standard demonstration of close-up air-to-air combat as two F-16
fighter jets barreled through the sky, twisting and diving as each sought an
advantage over the other. Time and time again the jets would “merge,” with one
or both pilots having just split seconds to pull off an accurate shot. After one
of the jets found itself riddled with cannon shells five times in these
confrontations, the simulation ended.

From the inside, things seemed very, very different.

“The standard things we’re trained to do as a fighter pilot aren’t working,”
lamented the losing pilot, an Air Force fighter pilot instructor with the call
sign Banger.

That’s because this wasn’t a typical simulation at all. Instead, the U.S.
military’s emerging-technologies research arm, the Defense Advanced Research
Projects Agency, had staged a matchup between man and machine — and the machine
won 5-0.

Indeed, the victor was an artificial intelligence-directed “pilot” developed by
Heron Systems. It quickly put the lie to a statement DARPA made just one year
ago: “No AI currently exists … that can outduel a human strapped into a fighter
jet in a high-speed, high-G dogfight.”

The AlphaDogfight simulation on Aug. 20 was an important milestone for AI and
its potential military uses. While this achievement shows that AI can master
increasingly difficult combat skills at warp speed, the Pentagon’s futurists
still must remain mindful of its limitations and risks — both because AI remains
a long way from eclipsing the human mind in many critical decision-making roles,
despite what the likes of Elon Musk have warned, and to make sure we don’t race
ahead of ourselves and inadvertently leave the military exposed to new threats.

That’s not to minimize this latest development. Within the scope of the
simulation, the AI pilot exceeded human limitations in the tournament: It was
able to consistently execute accurate shots in very short timeframes;
consistently push the airframe to its maximum g-force tolerance without going
beyond it; and remain unaffected by the crushing pressure that violent
maneuvers exert on a human pilot.

More remarkable still, Heron's AI pilot was self-taught using deep
reinforcement learning, a method in which an AI runs a combat simulation over
and over again and is “rewarded” for rapidly successful behaviors and “punished”
for failure. Initially, the AI agent is simply learning not to fly its aircraft
into the ground. But after 4 billion iterations, Heron's agent seems to have
mastered the art of executing energy-efficient air combat maneuvers.
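
For the curious, “rewarded” and “punished” mean a numeric training signal.
Below is a minimal, purely illustrative sketch in Python of such a loop. It is
not Heron Systems' setup: the toy environment, rewards and parameters are
invented, and a simple tabular Q-learner stands in for the deep neural network
a real system would use.

import random

# Toy "dogfight": states 0..4 are relative positions. Reaching 4 is a
# firing solution (rewarded); reaching 0 is flying into the ground (punished).
ACTIONS = (-1, +1)                      # ease off / press in
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(10_000):           # Heron's agent ran ~4 billion iterations
    s = 2                               # each fight starts from a neutral merge
    while s not in (0, 4):
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = s + a
        reward = 1.0 if s2 == 4 else (-1.0 if s2 == 0 else 0.0)
        best_next = 0.0 if s2 in (0, 4) else max(Q[(s2, act)] for act in ACTIONS)
        # Nudge Q toward the reward plus the discounted best future value.
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned policy presses in (+1) from every interior state.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in (1, 2, 3)})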

Human pilots could perhaps devise tactics designed to exploit the Heron AI’s
limitations, just as Banger did with temporary success in the final round of the
competition. But, like the Borg in "Star Trek," the AI-powered pilot may, in
turn, eventually learn from its failures and adapt. (The machine-learning
algorithm was disabled during the tournament.)

However, the tournament’s focus on within-visual-range warfare with guns didn’t
challenge the AI pilot to perform more complex tasks. It focused on a narrow,
though foundational, slice of air warfare known as "basic fighter maneuvers,"
leaving out aspects such as using sensors and missile weapons that may decide
the outcome of an air battle well before the opposing fighter pilots ever come
close enough to see each other.

For comparison, the U.S. Air Force’s newest fighter, the F-35, is optimized less
for dogfighting and more for stealthy surprise attacks executed from beyond
visual range, as well as for fighting cooperatively with friendly air and
surface forces by sharing sensor data.

Even more importantly, one-on-one duels between individual fighters, such as
the ones in the simulation, are very different from likely air-battle scenarios in a major
conflict, which could play out over huge distances and involve dozens of
supporting air and surface units.

And machine learning still has major limitations. It can have trouble working
collaboratively, even though cooperation is key to how militaries fight wars. AI
agents are also known to rigidly adhere to flawed assumptions based on limited
datasets, and their trial-and-error learning style can produce suboptimal
outcomes when they confront novel situations, since the errors are part of
the bargain.
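
To make the “flawed assumptions” point concrete, here is a hypothetical
continuation of the earlier sketch: a policy learned under one set of dynamics,
applied unchanged in a subtly different, novel situation. Everything here is
invented for illustration.

# A policy learned under the assumption that pressing in (+1) always
# closes toward the firing solution at state 4.
policy = {1: +1, 2: +1, 3: +1}

def run_episode(inverted: bool) -> str:
    """In the novel environment the dynamics are reversed, so the
    learned assumption no longer holds."""
    s = 2
    for _ in range(20):                  # give up after 20 steps
        a = policy[s]
        s += -a if inverted else a       # novel dynamics flip the effect
        if s == 4:
            return "win"
        if s == 0:
            return "crash"
    return "timeout"

print(run_episode(inverted=False))   # "win": the training assumption holds
print(run_episode(inverted=True))    # "crash": the agent rigidly presses on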

DARPA has acknowledged that it wanted to establish AI’s capability in
fundamental piloting tasks before tackling more complicated ones. The need for
additional development should be reassuring to those like Musk who fear that AI
poses an “existential threat” — but it is a reminder that prematurely relying on
AI carries other risks, as well. In the near term, the military wants AI to
assist, not replace, human actors in war.

Most armed drones today are remotely piloted by a human and only rely on
autonomous algorithms to avoid crashing when their control link is interrupted.
But remote control prevents such drones from reacting with the superhuman
speed and precision the recent tournament demonstrated AI is capable of.

One concept rapidly entering the mainstream is the so-called Loyal Wingman drone
controlled by a nearby manned fighter pilot who instructs the drone's AI agent
to perform specific tasks. Basically, the human handles big-picture
decision-making, while the AI takes on the risky dirty work of pressing home
attacks and drawing away enemy missile fire.
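
As a rough sketch of that division of labor (hypothetical names and interface,
invented for this post, not any real program's API), the human picks what to
do and the drone's AI owns how to do it:

from dataclasses import dataclass

@dataclass
class WingmanDrone:
    """A hypothetical loyal-wingman agent: it executes one bounded task
    at a time; how to fly it is decided onboard, not by the human."""
    callsign: str
    task: str = "hold formation"

    def assign(self, task: str) -> None:
        # The human pilot decides only *what* to do...
        self.task = task

    def step(self) -> str:
        # ...the AI decides *how*: maneuvering, timing, evasion.
        return f"{self.callsign}: autonomously executing '{self.task}'"

drone = WingmanDrone("Wingman-1")
drone.assign("press home the attack on the target group")
print(drone.step())
drone.assign("decoy enemy missile fire")
print(drone.step())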

A key advantage of the Loyal Wingman concept is that it may cost as little as $3
million each compared to around $80 million to over $100 million for a new F-35
stealth fighter. That means the drones could be treated as attritable assets:
reusable in normal operations, but expendable if the mission demands it.

Indeed, there is a debate raging in military circles across the globe as to
whether it’s affordable to develop a new generation of manned fighters or
whether lower costs and greater convenience dictate that the next generation
will be largely unmanned with at least semi-autonomous AI.

But an AI capable of performing the full range of missions doesn’t exist yet.
And as AI- and remote-controlled unmanned systems proliferate, militaries will
sharpen their ability to disrupt control links and hack AI systems. Therefore,
caution is needed to avoid a "Battlestar Galactica" scenario wherein a heavily
networked military is hindered by jamming and computer viruses.

The potential advent of autonomous war machines also arouses justified ethical
and existential concerns. For example, integrating facial-recognition AI to
further automate drone strikes could go wrong in all sorts of terrible ways. And
we should certainly never leave AI in a position to initiate the use of
strategic nuclear weapons, as has been suggested.

In a sense, the AlphaDogfight confirmed something we already knew in our gut:
Given sufficiently good algorithms, AI can outperform most humans in making
rapid and precise calculations in a chess-like contest with clearly defined
rules and boundaries. How flexibly and cooperatively AI pilots can make
decisions in the more chaotic and uncertain environment of a high-end war zone
remains to be seen.

In the next few years, semi-autonomous AI will be harnessed by pilots of both
manned and unmanned combat aircraft. These AIs will eventually be delegated
the piloting and attack tasks they can perform faster and more precisely than
humans.

However, AI as it stands isn’t yet poised to innovate or make informed judgments
in response to novel problems, and for that reason it is essential that humans
remain in the loop of future robotic air wars. One of those novel problems, in
fact, will be deciding just how much autonomy we can safely accord to future
robotic war machines.

  #2  
Old September 1st 20, 02:29 AM posted to alt.binaries.pictures.aviation
Mitchell Holman[_9_]
external usenet poster
 
Posts: 8,922
Military AI vanquishes human fighter pilot in F-16 simulation. How scared should we be? - The Loyal Wingman drone, pictured above, is controlled by a human fighter pilot who instructs the drone's AI to perform specific tasks.jpg

Miloch wrote:


I am not sure that there is even a
place for air-to-air combat in future
wars. Ground attack missions are already
dominated by drones; in short order they
will be defended by other drones as well,
IMHO.

Generals are always ready to fight
the last war over again........
