Yotam Gutman

09/04/2026

Project Glasswing: reframing AI cyber risk

AI is rapidly transforming cybersecurity—often in favor of attackers. Project Glasswing highlights a new reality where AI-driven threats outpace traditional defenses.

Anthropic’s announcement of Project Glasswing should not be read as a normal AI product launch. It is better understood as an admission that frontier AI has crossed a meaningful threshold in cyber capability. Anthropic says the initiative brings together AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, NVIDIA, Palo Alto Networks, the Linux Foundation, and others because Claude Mythos Preview has shown it can find and exploit software vulnerabilities at a level beyond all but the most skilled human researchers. The company says the unreleased model has already found thousands of high-severity bugs, including in major operating systems and web browsers, which is why it is restricting access instead of releasing the model broadly.

That decision matters because it reframes AI risk in cyber from an abstract future concern into an operational present-tense problem. Anthropic and outside reporting make clear that Glasswing is meant to give critical-software vendors and defenders ample time to patch and adapt before models with similar abilities become more widely accessible. In practical terms, this is coordinated vulnerability disclosure at model scale: hold back the offensive capability, let defenders learn from it first, and use the window before proliferation to harden the ecosystem.

The deeper point is that AI is no longer sitting on the edge of the cyber arena as a productivity layer. It is becoming an active participant in the offense-defense balance. That is what makes Project Glasswing important. It is not merely a technology initiative. It is a signal that the assumptions security teams have lived with for years, including the assumption that sophisticated offensive work is constrained by scarce expert labor, are beginning to break.

Attackers are already using AI

The urgency behind Glasswing is not hypothetical. Anthropic’s own threat intelligence team disclosed what it called the first reported AI-orchestrated cyber espionage campaign. According to the company, a Chinese state-sponsored group manipulated Claude Code to attempt infiltration of roughly 30 global targets and succeeded in a small number of cases. Anthropic says the AI handled 80% to 90% of the campaign, from reconnaissance to exploit writing, credential harvesting, exfiltration, and documentation, while humans intervened only at a handful of critical decision points.

That matters because it shows the shift from AI as advisor to AI as operator. Anthropic says the model inspected target systems, identified valuable databases, researched and wrote exploit code, harvested credentials, and produced documentation for the next phase of operations. The company’s conclusion was blunt: the barriers to sophisticated cyberattacks have dropped substantially, and less experienced groups can now potentially perform larger-scale attacks of this kind.

Criminal actors are showing the same pattern at the commodity end of the market. On April 6, Microsoft described a widespread device-code phishing campaign tied to EvilToken, saying the operation moved away from static manual scripts and toward AI-driven infrastructure and end-to-end automation. The attackers used thousands of short-lived backend polling nodes, real-time device-code generation, and automation designed to bypass the normal 15-minute expiry window for device codes. In other words, AI did not invent a brand-new attack path here. It industrialized a familiar one.
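The mechanics here hinge on the OAuth 2.0 device authorization grant, where a short-lived user code must be redeemed before a fixed expiry window closes. The sketch below is a minimal, hypothetical illustration of that expiry-and-refresh logic, not EvilToken's actual tooling; the 15-minute lifetime matches the window described above, and all names are invented for illustration.

```python
import time
from dataclasses import dataclass, field
from typing import Callable, Optional

DEVICE_CODE_TTL = 15 * 60  # typical device-code lifetime, in seconds

@dataclass
class DeviceCode:
    """Illustrative model of an OAuth 2.0 device authorization grant code."""
    user_code: str
    issued_at: float = field(default_factory=time.time)

    def is_expired(self, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        return now - self.issued_at > DEVICE_CODE_TTL

def refresh_if_expired(code: DeviceCode, now: float,
                       mint: Callable[[], DeviceCode]) -> DeviceCode:
    """Re-mint a fresh code once the old one lapses, which is the step
    the campaign automated so phishing pages always served a live code."""
    return mint() if code.is_expired(now) else code
```

What used to bound the damage of device-code phishing was exactly this expiry window: a manually generated code went stale before most victims acted. Automating the refresh step removes that bound, which is what "industrialized a familiar attack path" means in practice.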

The same dynamic is showing up around AI hype itself. After Anthropic accidentally exposed Claude Code source through a published npm package, security researchers observed attackers standing up fake GitHub repositories that promised a “leaked” or “unlocked” version of the tool. Researchers found those lures delivering Vidar and GhostSocks, with one malicious repository surfacing near the top of Google results for users searching for “leaked Claude Code.” Curiosity about AI tools has become a malware delivery channel.

And the AI developer ecosystem is not only being impersonated. It is being attacked directly. The actors behind TeamPCP claimed AI helped them launch a spree of attacks on AI developers. Microsoft later detailed the broader TeamPCP supply-chain campaign, saying Trivy was compromised on March 19 and that the activity expanded to Checkmarx KICS and LiteLLM, with broad credential harvesting and exfiltration from CI/CD and cloud environments. Datadog and Trend Micro described LiteLLM as an especially consequential victim because AI proxy layers centralize API keys, cloud credentials, and Kubernetes secrets, making them extremely valuable once upstream developer tooling is compromised.

Even low-code AI workflow platforms are now part of the attack surface. Attackers were already exploiting a max-severity flaw in Flowise’s Custom MCP node that allowed arbitrary JavaScript execution through unvalidated configurations, with roughly 12,000 to 15,000 instances exposed on the public internet. As organizations rush to wire agents into external tools through MCP, every connector, node, and orchestration layer becomes part of the cyber perimeter.
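The Flowise flaw is an instance of a broader pattern: a configuration field that flows into an interpreter without validation. The schematic Python below is not Flowise's code; it is a hypothetical contrast between a node that evaluates config directly and one that only lets config select from a fixed allowlist of behaviors.

```python
ALLOWED_COMMANDS = {"fetch", "transform", "store"}  # hypothetical allowlist

def run_node_unsafe(config: dict) -> str:
    # Anti-pattern: a config value reaches eval() unvalidated, so anyone
    # who can edit the node's configuration can execute arbitrary code.
    return str(eval(config["expression"]))

def run_node_validated(config: dict) -> str:
    # Safer pattern: config selects from a fixed set of behaviors, and
    # nothing user-controlled is ever handed to an interpreter.
    command = config.get("command")
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"unsupported command: {command!r}")
    return f"running {command}"
```

The design point is that the validated version makes the configuration declarative: it can only name behaviors the platform already implements, which is the property the vulnerable Custom MCP node lacked.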

VPN and remote access are becoming AI multipliers

Perhaps the clearest lesson for defenders is that AI does not need zero-days to be dangerous. AWS said a Russian-speaking, financially motivated actor used multiple commercial LLMs to compromise more than 600 FortiGate devices in over 55 countries between January 11 and February 18, 2026. The campaign did not rely on new FortiGate vulnerabilities. It succeeded through exposed management ports, weak credentials, and single-factor authentication. AWS’s conclusion is the uncomfortable one: AI helped an unsophisticated actor exploit old security mistakes at global scale.

That is what makes the FortiGate case so important. The attacker was not especially advanced in the traditional sense. AWS assessed the operator as low-to-medium skill, but significantly augmented by AI across planning, tool development, command generation, and reporting. The actor used AI-assisted scripts to parse and decrypt stolen configurations, automate VPN access, map internal networks, identify domain controllers, run Nuclei scans, and prioritize targets for lateral movement. The volume and variety of tooling, AWS said, would normally suggest a well-resourced team. Instead, AI let a small actor operate like one.

The exposed infrastructure reportedly contained stolen FortiGate configurations, Active Directory data, credential dumps, exploit code, and AI-generated attack plans. Reporting on the campaign also notes that FortiGate devices are especially attractive because they often function as VPN gateways and store credentials, topology data, and routing information. In the AI era, a remote-access appliance is no longer just an entry point. It is a high-value input to an automated post-compromise workflow.
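The defensive corollary is that the same config parsing attackers automate can be turned inward: audit your own appliance configurations for the exposures this campaign monetized. The sketch below is a hedged illustration; the field names (`interfaces`, `role`, `allow`, `mfa`) are invented for the example and do not correspond to any real vendor's config schema.

```python
def audit_gateway_config(config: dict) -> list:
    """Flag the weaknesses the FortiGate campaign exploited: WAN-exposed
    management interfaces and admin accounts without a second factor.
    Field names are illustrative, not a real vendor schema."""
    findings = []
    for iface in config.get("interfaces", []):
        if iface.get("role") == "wan" and "https-admin" in iface.get("allow", []):
            findings.append(f"management UI exposed on WAN interface {iface['name']}")
    for admin in config.get("admins", []):
        if not admin.get("mfa", False):
            findings.append(f"admin account {admin['name']} has no second factor")
    return findings
```

A check like this finds nothing an attacker's tooling would not also find; the difference is who runs it first.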

This should force a rethink in how security teams view remote administration and VPN exposure. For years, many organizations treated these systems as boring infrastructure hygiene. AI changes the economics. Once a management interface is exposed, credentials are weak, or MFA is absent, attackers can chain discovery, access, recon, prioritization, and follow-on action much faster and much more cheaply than before. The bottleneck is no longer just exploit development. It is access plus automation.

What organizations should do now

The practical takeaway is uncomfortable but clear: organizations should assume that AI will be used by attackers first, faster, and more aggressively than most defenders are prepared for. Defensive AI will improve. It will help with detection, triage, and response. But in most real-world environments it will still be reacting to attacker innovation, not preventing it in advance. In cyber, the side that gets to automate first usually sets the pace. Everyone else is playing catch-up.

That is especially dangerous in remote access. VPNs, exposed management interfaces, shared credentials, and software-based trust models were already high-risk before AI entered the picture. Now they are becoming ideal targets for automation. AI lets attackers scan faster, test credentials faster, adapt their tradecraft faster, and move from initial access to internal reconnaissance with dramatically less effort. If your security strategy depends on noticing and stopping that activity in time, you are betting that your defensive automation will consistently outrun offensive automation. That is not a bet most organizations should make.

The answer is not to keep layering more detection on top of the same exposed access model. The answer is to eliminate as much of the attack surface as possible. In the age of AI-enabled attackers, remote access should not be broadly reachable, shareable, or dynamically exposed to anyone who can find it. It should be tightly bound, deterministic, and enforced at the hardware level.

That is why we believe the right model for securing remote access is hardware-enforced, one-to-one connectivity. A direct, non-IP, dedicated connection between an authorized user and a specific asset is fundamentally different from a traditional remote-access architecture that relies on shared infrastructure and software controls. It reduces the opportunity for scanning, credential abuse, lateral movement, and session hijacking because the connection itself is constrained by design, not merely monitored after the fact.

This is the approach Zeroport was built around. In a world where attackers increasingly use AI to compress the time between discovery and compromise, organizations need more than smarter alerts. They need remote access that does not present a broad, reusable target in the first place. Defensive AI may eventually get better at chasing attackers. But the stronger strategy is to deny attackers the opening. That starts with remote access built on hardware-enforced, one-to-one connections. To learn more about Fantom technology, book a demo with our team.
