Prepare for AI Terrorism on Horizon

By Jerry McGlothlin

Via Newsmax Platinum:

Could an AI terror attack make 9/11 look like a stroll through Central Park? In a word: Yes.

After the attacks of Sept. 11, 2001, America found itself confronting a new paradigm of terrorism that redefined modern warfare. Two decades later, strategists are sounding the alarm about another looming threat that could be even more complex, unpredictable, and catastrophic: AI terrorism.

Imagine an AI-enabled cyberattack that doesn’t just cripple bank servers or leak sensitive data — but paralyzes power grids, disables emergency services, hijacks automated vehicles, and mimics human communication with uncanny precision. An attack where no bombs fall, but entire cities grind to a halt. And, most chillingly, one in which the perpetrator may never be known.

This isn’t science fiction; it’s the new frontier of national security.

“Artificial intelligence has moved from the margins to the forefront — it’s now the arena where America’s strength, economic success, and liberty will be won or lost,” retired Lt. Col. Robert Maginnis told Newsmax.

An attack without a return address

Today’s cyberwarfare is already difficult to trace. State and nonstate actors — from Russia and China to rogue hacker collectives — launch complex cyberattacks with plausible deniability. But when these tools are amplified by AI, the risk grows exponentially.

“Unlike a nuclear blast, artificial general intelligence won’t reveal itself with a dramatic signal,” Maginnis said. “Instead, it could silently infiltrate our networks, economy, and defense systems — emerging without obvious warning.”

Such an AI attack could simulate legitimate commands, override human control, or spread disinformation at machine speed.

The scenario haunting many analysts is the AI equivalent of 9/11: a coordinated, multivector assault on digital and physical systems that collapses critical infrastructure. From autonomous cars turned into “slaughterbots,” as the U.N. recently cautioned, to deepfakes impersonating public officials and triggering geopolitical escalations, the range of threats is staggering.

The Trump administration recently released “Winning the AI Race: America’s AI Action Plan,” a 90-point policy roadmap that Maginnis hailed as “a bold strategy combining vision with action.” It emphasizes three core pillars: accelerating innovation, building domestic AI infrastructure, and leading on global AI security standards.

“From fast-tracking permits for chip plants to exporting liberty-based AI to allies, this plan recognizes that we must lead — not follow — in AI,” Maginnis said. “And crucially, it defends free speech, requiring that federally contracted language models be free from ideological censorship. That’s a win for the Constitution.”

But he also cautions: “The optimism is incomplete. America must not only pursue victory in AI, but prepare for the worst.”

Break-glass protocols for AI emergencies

What happens if a company suddenly claims to have developed AGI and demands national security protections — access to classified data, regulatory exemptions, even military deployment? What if an adversary, like China, gets there first?

Maginnis argues that America needs “break-glass protocols — clear, tested plans to respond to AI emergencies, whether that’s cyberattacks, misinformation campaigns, or autonomous systems going rogue.”

These contingency plans must span the Pentagon, DHS, the intelligence community, and private tech firms, he said.

“We don’t get a second chance,” Maginnis said, recalling how the COVID-19 pandemic blindsided global systems. “With AI, the margin for error is even slimmer.”

This sentiment echoes concerns from the Foreign Affairs article “America Should Assume the Worst About AI,” which urges policymakers to approach AI with Cold War-era seriousness, not Silicon Valley optimism.

A central challenge in any AI attack is attribution.

“Advanced AI attacks may not come with a digital return address,” Maginnis said.

It won’t matter whether they originate from Beijing, Tehran, a terrorist cell in hiding, or a rogue machine learning model trained by a teenager in a basement. The damage will be done.

Thus, American defenses must be “attribution-agnostic” — capable of detection, containment, and recovery before knowing where to place blame:

  • Hardening critical infrastructure from remote AI penetration
  • Physically isolating sensitive data centers
  • Creating “air-gapped” military continuity plans in the event of digital system failure

These are no longer speculative goals. As the White House’s action plan makes clear, “The U.S. must build analytical muscle to separate hype from real breakthroughs — and act fast when a threat emerges.”

According to a report by the West Point Combating Terrorism Center, generative AI poses special risks:

  • Hyper-realistic propaganda videos
  • Custom malware through AI-generated code
  • Voice-cloned threats and recruitment messages
  • Deepfake leaders calling for violence or surrender

The report notes, “AI lowers the barrier to entry. Groups with limited technical skill can now harness powerful tools to amplify chaos.”

In a world already overwhelmed with misinformation, the weaponization of synthetic media could blur truth to the point of paralysis.

Maginnis and other national security experts agree that America’s strength lies not just in exporting technology, but exporting values: liberty, transparency, restraint, and accountability.

“Our allies want alternatives to China’s surveillance-state AI,” he said. “We must speak as clearly about ethics as we do about engineering.”

White House AI adviser David Sacks echoed this, saying, “Victory in AI is not just about lines of code — it’s about preserving what it means to be human.”

A future not yet written

There’s a sobering line in the report on AI exploitation by terrorist groups: “The coming wave of AI terrorism will not resemble past threats. It will be asymmetric, fast-moving, and potentially anonymous.”

That future could be days or decades away — but the urgency to prepare is now.

Maginnis closed his exclusive Newsmax Platinum interview with a caution and a call to action.

“Securing victory in the AI race marks a pivotal beginning — but bold goals don’t guarantee safety,” he said. “The U.S. must pursue a twofold approach: push technological progress with determination, while matching that pace with serious preparation for potential crises.”

Because in the age of AI, leadership means more than innovation. It means vigilance, vision, and the moral courage to secure not just our systems — but our civilization.
