CrowdStrike and NVIDIA open-source AI gives enterprises an edge against machine-speed attacks

Every SOC leader knows the feeling: drowning in alerts, blind to the real threat, stuck on defense in a war fought at the speed of artificial intelligence.

Now CrowdStrike and NVIDIA are flipping the script. Armed with autonomous agents powered by Charlotte AI and NVIDIA Nemotron models, security teams don't just respond; they counter attackers before their next move. Welcome to the latest cybersecurity arms race, where combining the benefits of open-source software with agentic AI promises to shift the balance of power against hostile AI.

The CrowdStrike and NVIDIA agentic ecosystem brings together Charlotte AI AgentWorks, NVIDIA Nemotron open models, NVIDIA NeMo Data Designer synthetic data, the NVIDIA NeMo Agent Toolkit, and NVIDIA NIM microservices.


“This collaboration is redefining security operations by enabling analysts to build and deploy specialized AI agents at scale, backed by reliable, enterprise-grade Nemotron models,” wrote Bryan Catanzaro, vice president of applied deep learning research at NVIDIA.

The partnership aims to enable autonomous agents to learn quickly, reducing risks, threats, and false alarms. Achieving this relieves the burden on SOC leaders and their teams, who struggle almost daily with alert fatigue driven by inaccurate data.

The announcement, made at NVIDIA GTC in Washington, D.C., signals the emergence of machine-speed defenses that can finally match machine-speed attacks.

Transforming analyst expertise into machine-scale data sets

The partnership is distinguished by the way its AI agents are designed to continuously aggregate telemetry data, including insights from CrowdStrike Falcon Complete Managed Detection and Response (MDR) analysts.

“We can take the intelligence, the data, the experience of our Falcon Complete analysts and turn those experts into datasets. Turn the datasets into AI models, and then be able to create agents based on all the knowledge and experience that we’ve built in-house, so that our customers can use these agents at scale,” said Daniel Bernard, CrowdStrike’s chief business officer, during a recent briefing.

By leveraging the strengths of NVIDIA Nemotron’s open models, organizations will be able to provide their autonomous agents with continuous learning by training on datasets from Falcon Complete, the world’s largest MDR service that makes millions of triage decisions each month.
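The pipeline Bernard describes — analyst decisions become datasets, datasets become models, models become agents — can be illustrated with a deliberately simplified sketch. Everything below is hypothetical: the field names, verdicts, and frequency-counting "model" are illustrative stand-ins, not CrowdStrike's actual schema or training method.

```python
# Hypothetical sketch of the "experts into datasets, datasets into models"
# pipeline. The alert records, verdict labels, and scoring logic are all
# illustrative assumptions, not CrowdStrike's real data or algorithms.
from collections import Counter

# Step 1: analyst triage decisions become labeled records.
analyst_decisions = [
    {"alert": "powershell encoded command from temp dir", "verdict": "true_positive"},
    {"alert": "scheduled scan touched quarantine folder", "verdict": "false_positive"},
    {"alert": "credential dump via lsass memory read", "verdict": "true_positive"},
    {"alert": "admin ran signed deployment script", "verdict": "false_positive"},
]

def train(records):
    """Step 2: build a toy model by counting how often each token
    appears under each verdict (a naive frequency model)."""
    counts = {"true_positive": Counter(), "false_positive": Counter()}
    for record in records:
        counts[record["verdict"]].update(record["alert"].split())
    return counts

def triage(model, alert_text):
    """Step 3: an 'agent' scores a new alert against both verdicts
    and returns the likelier one."""
    scores = {
        verdict: sum(tokens[t] for t in alert_text.split())
        for verdict, tokens in model.items()
    }
    return max(scores, key=scores.get)

model = train(analyst_decisions)
print(triage(model, "encoded powershell command spawned from temp"))
# → true_positive
```

A production system would of course use large annotated corpora and foundation models rather than token counts, but the shape of the loop is the same: human judgments are captured as labels, a model is fit to those labels, and the model is then applied at a scale no analyst team could match.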

CrowdStrike has prior experience operationalizing AI-driven detection triage, having already launched a service that scales the capability across its customer base. Charlotte AI Detection Triage, designed to integrate with existing security workflows and continually adapt to changing threats, automates alert assessment with over 98% accuracy and cuts manual triage by more than 40 hours per week.

Elia Zaitsev, chief technology officer at CrowdStrike, explained how Charlotte AI Detection Triage delivers this level of performance. He told VentureBeat: “We couldn’t have achieved this without the support of our Falcon Complete team. They triage through their workflow, manually resolving millions of detections. The high-quality, human-annotated dataset they provide has allowed us to achieve over 98% accuracy.”

The lessons learned from Charlotte AI Detection Triage apply directly to the partnership with NVIDIA, further increasing the value it can deliver to SOCs that need help dealing with the deluge of alerts.

Why open source is table stakes for this partnership

NVIDIA’s Nemotron open models address what many security leaders consider the biggest barrier to deploying AI in regulated environments: a lack of clarity about how a model works, what it was trained on, and how secure it is.

Justin Boitano, vice president of enterprise and edge solutions at NVIDIA, explained at a recent press conference: “Open models are where people start to build their own domain expertise. Ultimately, you want to own the IP. Not everyone wants to export their data and then import or pay for the intelligence they use. Many sovereign nations, many companies in regulated industries want to maintain all data privacy and security.”

John Morello, CTO and co-founder of Gutsy (now Minimus), told VentureBeat that “the open nature of Google’s BERT open-source language model allows Gutsy to customize and train its model for specific security use cases while preserving privacy and performance.” Morello emphasized that practitioners cite “greater transparency and a better guarantee of data privacy, along with high availability of specialized knowledge and greater integration options in their architectures” as the most important reasons for using open source.

Keeping adversarial AI in check

DJ Sampath, Cisco’s senior vice president of AI software and platforms, laid out the industry-wide imperative for open-source security models during a recent interview with VentureBeat: “The reality is that attackers also have access to open source models. The goal is to equip as many defenders as possible with robust models to strengthen security.”

Sampath explained that when Cisco released its Foundation-Sec-8B open source security model at RSAC 2025, it was driven by a sense of responsibility: “Funding for open source projects has stalled and there is a growing need for sustainable funding sources within the community. It is the corporation’s responsibility to deliver these models while enabling the community to use AI from a defensive standpoint.”

The commitment to transparency extends to the most sensitive facets of AI development. When concerns were raised about the potential compromise of DeepSeek R1’s training data, NVIDIA responded decisively.

As Boitano explained to VentureBeat: “Government agencies were very concerned. They wanted DeepSeek’s reasoning capabilities, but of course they were a little concerned about what could be trained on the DeepSeek model, which is what really inspired us to completely open source everything in Nemotron models, including the reasoning datasets.”

For practitioners managing open source security at scale, this transparency is critical to their organizations. Itamar Sher, CEO of Seal Security, emphasized to VentureBeat that “open source models provide transparency,” although he noted that “managing their cycles and compliance remains a major issue.” Seal uses generative AI to automate the remediation of open-source software vulnerabilities, and as a recognized CVE Numbering Authority (CNA), it can discover, document, and assign vulnerabilities, improving security across the ecosystem.

A key goal of the partnership: bringing intelligence to the edge

“Bringing intelligence closer to where data lives and where decisions are made will be a major advance for security operations teams across the industry,” Boitano emphasized. This edge deployment capability is particularly essential for government agencies with fragmented and often legacy IT environments.

VentureBeat asked Boitano how the initial discussions went with the government agencies that were briefed on the partnership and its goals before work began. “All the agencies we talked to feel like they’re unfortunately always falling behind when it comes to adopting technology,” Boitano explained. “The response was: anything you can do to help us secure the endpoints. It’s been a tedious and long process to create open models for these, you know, higher-end networks.”

NVIDIA and CrowdStrike did the foundational work, including STIG hardening, FIPS-validated encryption, and air-gap compatibility, removing the barriers that had delayed open-model adoption in these high-security networks. The NVIDIA AI Factory for Government reference design provides comprehensive guidance for deploying AI agents in federal and other high-assurance organizations while meeting the strictest security requirements.

As Boitano explained, the urgency is existential: “Having working AI-powered protection on your property that can search for and detect these anomalies and then warn and respond much faster is just a natural consequence. It’s the only way to protect against the speed of AI at this point.”
