Army professor’s predictions for AI are already becoming a reality

These images were taken from a video produced by Army Public Affairs; the video is no longer available for release.

Note: This article has been updated to include media that was removed from Department of Defense media distribution after the article was originally published.

Six years ago, a professor at West Point predicted that artificial intelligence would appear in weaponry within 10 to 20 years. That technology is already here, and it is being tested with ground troops.

“Making a cheap, fully automated system that can detect, track, and engage a human with lethal fires is trivial and can be done in a home garage with hobbyist-level skill,” said Dr. Gordon Cooke in 2019, when he was director of the West Point Simulation Center and an associate professor in the Department of Military Instruction at the United States Military Academy at West Point.

“This isn’t science fiction. It’s fact,” he added.

He pointed out that soldiers need to hit only 75 percent of stationary targets to qualify as sharpshooters, a standard hobbyists were already meeting with automated paintball and airsoft guns.

“A variety of instructions, how-to videos, and even off-the-shelf, trained AI software is readily available online that can be easily adapted to available weapons,” he said.

“It would only take some basic engineering, or enough tinkering, to build a heavier duty turret with off-the-shelf software, a zoom camera, and a fine control pan/tilt mechanism that holds a lethal firearm,” he added.
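The pointing problem Cooke describes really is elementary engineering. As an illustration, here is a minimal sketch of the pan/tilt math such a tracking turret would use, converting a detected target's pixel offset into mount corrections with a simple proportional controller. Every parameter here (camera resolution, field of view, gain) is an illustrative assumption, not taken from any real system.

```python
# Sketch of hobbyist-grade pan/tilt pointing math: turn a target's
# pixel position into angular corrections for a camera mount.
# All parameters below are illustrative assumptions.

FRAME_W, FRAME_H = 1920, 1080    # assumed camera resolution, pixels
HFOV_DEG, VFOV_DEG = 60.0, 34.0  # assumed horizontal/vertical field of view
GAIN = 0.5                       # proportional gain (<1 damps overshoot)

def pan_tilt_correction(target_x, target_y):
    """Return (pan, tilt) corrections in degrees that step the
    camera center toward the target's pixel coordinates."""
    # Offset of the target from the frame center, in pixels.
    dx = target_x - FRAME_W / 2
    dy = target_y - FRAME_H / 2
    # Convert pixels to degrees via degrees-per-pixel, then scale by
    # the proportional gain so the mount converges over a few frames.
    pan = GAIN * dx * (HFOV_DEG / FRAME_W)
    tilt = GAIN * dy * (VFOV_DEG / FRAME_H)
    return pan, tilt

if __name__ == "__main__":
    # Target detected right of center and slightly below it.
    print(pan_tilt_correction(1460, 640))
```

The detection side is similarly commoditized: off-the-shelf, pretrained person detectors supply the `(target_x, target_y)` input, which is exactly the "trained AI software readily available online" Cooke refers to.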

This week, the Army highlighted “Operation Hard Kill,” an event at Fort Drum that allowed soldiers and industry professionals to test out “innovative unmanned capabilities,” such as Counter-Unmanned Aircraft Systems (C-UAS) and four-legged unmanned ground vehicles (UGVs) armed with an artificial intelligence-enabled rifle.

The following video was removed from the DoD database after this article was published.

While the Army did not release any details about the C-UAS system on display, from what can be seen in the released video, it appears to be a Javelin anti-tank guided missile strapped to an AI-enabled tracking system.

The Salty Soldier was not able to determine the exact model on display, but numerous defense contractors are competing to create weaponry to defeat the UAS threat.

A counter UAS solution being developed by L3Harris claims it can “effectively protect airfields, bases, facilities, and high-value assets in any location against group 1-3 unmanned aerial threats.”

Another created by EOS Defense Systems USA (EOS) successfully shot down pairs of Class 1 UAVs at ranges of more than 300m and engaged multiple ground targets with 30mm cannon from its robotic platform at Fort Irwin this spring.

The R600 Remote Weapon Station is equipped with a Northrop Grumman M230LF cannon, coaxial machine gun, and four Javelin missiles on an Army Small Multipurpose Equipment Transport (S-MET) robotic infantry support vehicle.

The weaponry that gained the most attention during “Operation Hard Kill” was the so-called robot dog armed with a rifle.

According to The War Zone, these systems are already being tested for Special Operations use.

MARSOC, the Marine Forces Special Operations Command, already has “two robot dogs fitted with gun systems based on Onyx’s SENTRY remote weapon system (RWS) — one in 7.62x39mm caliber, and another in 6.5mm Creedmoor caliber,” they reported in May.

“It’s unclear precisely how many other robotic dogs MARSOC may have at present, however, it appears likely that the two equipped with SENTRY are being tested by the command.”

In January, Onyx’s SENTRY remote weapon system was revealed at the SHOT Show convention in Las Vegas, but it was fixed to a stationary tripod.

Even though it is equipped with AI, Onyx says it currently has a man-in-the-loop fire control that lets a human decide whether to engage a target or not.

The autonomous weapon system will “scan and detect targets… [locking] on [to] drones, people, [and] vehicles,” said Eric Shell, the head of business development at Onyx Industries.

Dr. Cooke predicted that human soldiers would control the majority of military actions in the near term, but that AI would eventually recommend decisions for humans to make.

“AI will provide easy-to-understand analysis and recommendations based on huge datasets that are too large for unaided humans to comprehend,” he said.

Dr. Cooke warned there is no turning back the clock on AI technology because it has already been made public.

“Autonomous armaments that can find and kill humans will appear on the battlefield, even if not introduced by the United States or another major state, because the required technology is already available,” he said.

He also questioned how AI weaponry will be judged fail-safe enough to field. A certain amount of collateral damage occurs, and is accepted, when humans make decisions to engage targets.

“Our [humans] decision-making error rate in life-or-death situations is likely to be constant. Machine accuracy, on the other hand, is improving at an exponential rate. At some time in the future, machine accuracy at making combat-kill decisions will surpass human accuracy.”

This will raise the question: Is it ethical to keep a human in the loop for weapon systems when a machine is less error-prone?

© 2024 The Salty Soldier All rights reserved.

The content of this webpage may not be reproduced or used in any manner whatsoever without the express written consent of TheSaltySoldier.com
