
By Andreas Voniatis

Should AI Replace Humans? Lessons From the Ukraine Frontline


Being an avid reader of history and innovation, I’m always drawn to what other domains can teach us. Military history is quite consistent on this point. Radar became air traffic control. The internet began as a military communications network. GPS was built for nuclear missile guidance before landing in every smartphone. Each major conflict of the twentieth century compressed decades of civilian innovation into years. What worked under the pressure of war eventually found its way into everyday life.

Ukrainians in the loop

Right now on the Ukrainian frontline, German AI defence company Helsing’s HX-2 drones are hunting Russian targets using AI-powered object recognition. On the battlefield, the drone identifies a target and calculates its approach before closing in. But one person still makes the final call.

That single human in the loop is not a bottleneck but the entire point. War is a brutal yet honest competition of systems, accelerating innovation faster than any peacetime market and revealing which approaches actually prevail. 

What the Ukrainian frontline is showing us, at quite an enormous human cost, is that the most effective system is not full autonomy but human judgment amplified by AI.

Human Control is Non-Negotiable

As a NATO member sharing a border with Russia, Estonia’s position is unambiguous: human control over decisions involving lethal force is non-negotiable. 

Not everyone sees it the same way. The Pentagon has given Anthropic until Friday 27th February 2026 to drop the restriction on requiring human sign-off or face removal from the military supply chain entirely (Yahoo!). Anthropic CEO Dario Amodei has drawn a firm line where Claude will not be used for final targeting decisions in military operations without human involvement, arguing the model is not reliable enough to avoid potentially lethal mistakes without human judgment (CBS News). The company building the AI is insisting on human oversight. The government is pushing to remove it.

Helsing has confirmed its drones can technically operate without human oversight, even though government regulation requires that they don’t. The capability for full autonomy is there, but that single requirement reveals where AI is heading in every other industry.

Is every employee action a potential drone strike?

A drone strike is a high-stakes, irreversible action taken on imperfect information in a fast-moving environment. So is a client communication in a relationship that took months to build, or a product roadmap decision.

Getting any of these wrong has consequences that are barely visible until the damage is done. And the approval loop is shortening: AI makes it faster and cheaper to act, leaving errors invisible until they are catastrophic.

The humans being replaced are not just headcount but the collective memory that makes the next decision better than the last.

Replacing humans creates a gap that threatens succession

The headlines I’m seeing, such as tech-specific internship postings falling 30% since 2023 while applications have risen 7% (Stack Overflow), show a shift happening faster than most organisations are willing to acknowledge.

What leaves with those staff exits is not just headcount but judgment built by years of making mistakes while the stakes were still manageable. When Deloitte submitted a $290,000 report to the Australian government containing fabricated academic references and a made-up quote from a federal court judge, it was a researcher, not the AI, who caught it (Fortune). This is what happens when the human approval loop thins to the point of decoration. The outputs look credible until someone with genuine expertise checks them.

When those senior roles need filling, the talent will be scarcer and eye-wateringly expensive, making this a business longevity risk, not just a hiring problem.

Automation’s great, until you realize you need operators

Let’s face it: a million drones still require a million people to operate them.

Helsing is building toward one operator overseeing multiple drones simultaneously, Anduril is developing a system marshalling a fleet of ten or more, and in early 2026 China’s People’s Liberation Army demonstrated a single soldier controlling a swarm of over 200. The human is technically present although their capacity to genuinely assess any single decision is approaching zero.

The one operator to many drones model is partly a response to Ukraine’s recruitment crisis. Between January and June 2025 alone, Ukraine’s Human Rights Ombudsman received more than 2,000 complaints about the use of force by conscription patrols, according to Al Jazeera. Reducing the human headcount the frontline demands while maintaining operational capability is not just a technological choice but a necessity.

The same erosion is happening in software teams calling AI-assisted review “human oversight,” in legal teams treating AI contract summaries as due diligence, in marketing organisations approving content in batches of fifty. The human becomes a compliance checkbox, still in the loop but no longer required to think.

The Ukrainian frontline is not showing us a world without humans but one where humans are becoming automation managers. The question is whether your organisation is building that capability or simply cutting headcount.

Businesses see AI as Job Creators

How likely is AI to replace human staff?

We analysed 3.1 million opinions from business leaders across X, Quora, Reddit, Bluesky, TikTok and Threads over the past twelve months, sampled at a 95% confidence level with a 5% margin of error.

39% of business leaders believe AI will create new jobs and 26% are focused on innovation. The FIA, motorsport’s governing body, now uses a computer-vision-based system called ECAT (Every Car All Turns) to flag track-limit infringements in F1 races, catching corner-cutting advantages that human stewards would likely have missed. Yet only 3% are discussing human oversight more broadly.

On the Ukrainian frontline, every major European government and every defence company deploying lethal AI is treating human oversight as non-negotiable. The business conversation about oversight exists but is being drowned out thirteen to one by the job creation and innovation narrative.

The drone operators keeping humans in the kill chain are the 3%. That gap between the frontline and the boardroom is where the real risk lives. But the people actually using AI-powered services tell a different story.

Customers tolerate AI for service

What are your feelings on companies using AI for customer service?

Using the Artios data platform we analysed 4.2 million US customer opinions on AI being used for customer service over the past twelve months.

The results are more nuanced than the automation narrative suggests. 24.4% are strongly supportive of AI as a cost-cutting measure and 12.5% welcome enhanced support. But the enthusiasm is transactional. When it comes to human touch, only 1.5% are strongly supportive of replacing it, with a further 2.2% raising privacy concerns and 2.9% expressing reservations about cost-cutting measures specifically.

Customers will tolerate AI. They do not prefer it when it comes to the human moments that build lasting relationships.


Employees unconvinced by bosses replacing staff with AI

How do you feel about your employer using AI to replace human jobs in your company?

Using the Artios data platform we analysed 4.5 million US employee opinions on AI replacing human jobs over the past twelve months.

The dominant concern is ethical. 14.7% are strongly opposed on ethical AI use grounds and 7.9% are somewhat opposed, making it the single largest category of response. Innovation in the workforce divides opinion almost evenly, with 13.3% somewhat opposed and 13.8% somewhat supportive, suggesting employees can see both sides of the argument but remain fundamentally uneasy.

Only 2.1% are strongly supportive of AI replacing human jobs on any grounds. The employees watching this happen are not convinced. And unconvinced employees do not build the kind of cultures that retain customers.

What made Idiocracy a Comedy is now a Documentary

In 2006 Mike Judge made a comedy film called Idiocracy about a future where humanity had grown too dependent on technology to think for itself. It was meant to be absurd. It now reads like a forecast.

The generation entering the workforce today may be the last to develop independent judgment. Good judgment comes from making decisions, getting them wrong and understanding why. If AI is making those calls, within a generation you may end up with people who can operate the system but cannot question it.

Who benefits? The few with the resources to build and control these systems. The corporation building its operations on another company’s AI defaults is making the same strategic error as the nation that outsources its defence capability.

AI as force multiplier, not replacement

None of this means AI is the enemy. The drone operators using Helsing’s platform are more effective than those without it. AI amplifying human judgment is the model that works.

But replacing humans entirely carries real costs. Automating workflows moves businesses toward a SaaS model, yet even that is now under threat. Anthropic’s product releases triggered a selloff of nearly $1 trillion in enterprise software stocks (Bloomberg). The irony is hard to miss: the same company insisting on human oversight in its Pentagon dispute triggered that selloff (MoneyWeek). Every business leader building on someone else’s AI defaults should take note.

The customer data, the employee data, and the Ukrainian frontline are all pointing in the same direction. Humans want to be in the loop, not as a compliance checkbox but as genuine participants in decisions that matter, and the businesses that understand this are building something that is both worth working for and worth buying from.

That is the only sustainable advantage on offer.