You probably have one running somewhere in your organization right now. A Langflow instance for the ML team’s RAG pipelines. A Flowise deployment for the customer chatbot. An n8n server automating internal workflows between GitHub, Slack, and your incident management system. A ComfyUI box someone spun up in the cloud for the design team and forgot to lock down.
These tools hold credentials for everything. They execute code as a first-class feature. And in Q1 2026, they’ve been consistently shipping CVSS 9.3 to 10.0 unauthenticated remote code execution vulnerabilities that attackers weaponize in under 24 hours.
This isn’t a string of bad luck. It’s a structural failure in how an entire category of developer tooling was built — and if your security posture hasn’t accounted for it, you have work to do.
The Pattern Nobody Is Naming
In the past six weeks, four major self-hosted AI workflow platforms disclosed critical unauthenticated RCE vulnerabilities:
- Langflow CVE-2026-33017 (CVSS 9.3) — arbitrary Python exec() via the public flow build endpoint. Exploited within 20 hours of advisory publication. The “patched” version (1.8.2) remained fully exploitable; the real fix didn’t land until 1.9.0.
- Flowise CVE-2025-59528 (CVSS 10.0) — JavaScript Function() constructor injection via the CustomMCP node. Over 12,000 internet-exposed instances at the time of disclosure, with active exploitation confirmed from Starlink infrastructure.
- n8n CVE-2026-21858 (CVSS 10.0) — content-type confusion in the webhook handler allows arbitrary file read, leading to credential theft, session token forgery, and full RCE through the admin interface. Public PoC released.
- ComfyUI — no single CVE, but an active campaign compromised over 1,000 exposed instances by abusing ComfyUI-Manager to silently install malicious custom nodes, then triggering RCE through crafted workflow submissions. Payloads included XMRig Monero cryptominers and Hysteria2 proxy botnet clients.
Add Nginx UI CVE-2026-33032 (CVSS 9.8) — an authentication bypass in the platform’s MCP endpoint that defaults to allow-all — and you have five critical vulnerabilities across the AI tooling stack in roughly a month.
These aren’t isolated incidents from careless developers at obscure projects. Flowise has been downloaded millions of times. n8n has 60,000+ GitHub stars. Langflow is backed by DataStax and actively promoted for enterprise LLM deployment. These are mainstream tools used by real engineering teams at real organizations.
Why Every One of These Tools Has the Same Bug
The root cause isn’t a bad line of code. It’s a design philosophy.
Every tool in this category was built with the same priority stack: get developers to a working prototype as fast as possible, defer security configuration to the operator. Authentication is optional (or absent entirely). Network binding defaults to all interfaces. Code execution is a first-class feature — these tools are built to run user-supplied code, that’s the point.
Look at the vulnerability mechanics:
Langflow: The public flow build endpoint (/api/v1/build_public_tmp/{flow_id}/flow) was designed to let unauthenticated users build public flows. Reasonable feature. The flaw is that when an attacker supplies their own data parameter, their flow data — containing arbitrary Python code — gets passed directly to Python’s exec() without sandboxing. The dangerous primitive (user-supplied code execution) and the unauthenticated endpoint were two separate engineering decisions. Neither seemed wrong in isolation.
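The shape of the flaw is easy to state in code. The sketch below is illustrative, not Langflow’s actual implementation — the function and field names are invented — but it captures the primitive: request-supplied “flow data” handed to Python’s exec() with no sandboxing.

```python
# Hypothetical sketch of the Langflow-style flaw: user-controlled "flow data"
# reaching exec() unchanged. Names are illustrative, not Langflow's real code.

def build_flow_unsafe(flow_data: dict) -> dict:
    # The dangerous primitive: whatever "code" the request supplied runs
    # with the full privileges of the server process.
    namespace: dict = {}
    exec(flow_data.get("code", ""), namespace)
    return namespace

# An attacker-shaped payload. A real one would read files or open a shell;
# this one just proves arbitrary execution.
payload = {"code": "import os; result = os.getcwd()"}
ns = build_flow_unsafe(payload)
print("attacker code ran, result:", ns["result"])
```

Nothing about exec() itself is a bug — the bug is the unauthenticated path between the network and this call.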
Flowise: The CustomMCP node’s convertToValidJSONString function used Function('return ' + inputString)() to parse JSON. This is a textbook JavaScript anti-pattern — functionally identical to eval(). The endpoint accepting this input (/api/v1/node-load-method/customMCP) had no authentication requirement. The MCP integration, designed to make Flowise more extensible, created the attack surface.
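Translated into Python terms (an illustrative analogue, not Flowise’s code), the anti-pattern is parsing with an interpreter instead of a parser — eval() where json.loads() belongs:

```python
import json

def parse_config_unsafe(input_string: str):
    # Python analogue of Flowise's Function('return ' + input)(): the
    # "parser" is an interpreter, so expressions in the input execute.
    return eval(input_string)  # deliberately unsafe demo

def parse_config_safe(input_string: str):
    # A real JSON parser only builds data; code in the input is a parse error.
    return json.loads(input_string)

# Benign input parses either way...
print(parse_config_safe('{"url": "http://localhost"}'))
# ...but only the unsafe version executes an injected expression:
print(parse_config_unsafe('__import__("os").getcwd()'))  # runs code, not JSON
```

The fix in any language is the same: if the input is supposed to be JSON, parse it with a JSON parser and treat anything that fails to parse as hostile.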
n8n: The webhook handler for file uploads called req.body.files without validating that the request was actually a multipart form submission. Sending Content-Type: application/json with a manually crafted files object tricks n8n into reading arbitrary filesystem paths. From there, the attack chain is pure logic: read the SQLite database, extract the encryption key, forge a JWT admin token, gain authenticated RCE through native Code nodes.
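A minimal sketch of the first link in that chain — again with invented names, not n8n’s real handler. The vulnerable version trusts a files structure no matter how the body was parsed; the fix refuses file metadata unless the request really was multipart:

```python
# Illustrative sketch of content-type confusion in a webhook file handler.
# Names are made up; the logic mirrors the class of bug, not n8n's code.

def read_upload_unsafe(request: dict) -> str:
    # Trusts body["files"] even when the body arrived as JSON, so the
    # attacker controls the filesystem path directly.
    path = request["body"]["files"]["upload"]["path"]
    with open(path) as f:  # arbitrary file read
        return f.read()

def read_upload_safe(request: dict) -> str:
    # Fix: only honor file metadata for genuine multipart submissions.
    ctype = request["headers"].get("content-type", "")
    if not ctype.startswith("multipart/form-data"):
        raise ValueError("file uploads must be multipart/form-data")
    path = request["body"]["files"]["upload"]["path"]
    with open(path) as f:
        return f.read()
```

A crafted Content-Type: application/json request with a hand-built files object walks straight through the unsafe version and dies at the first check in the safe one.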
ComfyUI: Ships with no authentication. Default bind address: 0.0.0.0. ComfyUI-Manager, the package manager, will install arbitrary custom nodes from URLs without user approval if the right workflow structure is submitted. Custom nodes execute with the full privileges of the ComfyUI process, which is often root on quick cloud deployments.
Each of these bugs is unique in implementation. The systemic failure is identical: user-supplied input reaches a code execution primitive via an unauthenticated path.
The Credential Vault Problem
These tools aren’t just a foothold — they’re a pre-loaded credential vault.
An n8n server automating your internal workflows holds OAuth tokens for every service it integrates with. Slack tokens. GitHub API keys. PagerDuty credentials. Production database connection strings. The data warehouse password. AWS IAM credentials if it’s running automation against your cloud infrastructure.
A Langflow deployment powering your RAG pipeline has connection credentials for your vector database (Pinecone, Weaviate, Chroma), your LLM API keys (OpenAI, Anthropic, Bedrock), and potentially your document storage backends. If you’ve wired it into internal systems, it has whatever credentials those systems require.
Flowise, n8n, and Langflow all store these credentials encrypted in their databases. The encryption key is stored in a config file on the same server. Reading one file gives you the key; with the key, every stored credential is plaintext.
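A toy illustration of why that model collapses under arbitrary file read. XOR stands in for the real AES-based encryption these tools use, and the file names are invented — the point is only that the key and the ciphertext share a disk, so one file read defeats the other:

```python
# Toy model of the stored-credential layout: key in a config file, encrypted
# credentials in the database, both on the same server. XOR is a stand-in for
# the real cipher; the architecture, not the algorithm, is the weakness.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# What the server writes at setup time:
key = b"config-file-encryption-key"
with open("settings.cfg", "wb") as f:      # the key, in a config file...
    f.write(key)
with open("credentials.db", "wb") as f:    # ...the ciphertext, in the database
    f.write(xor_cipher(b"AKIA_PROD_AWS_SECRET", key))

# What an attacker with arbitrary file read does:
stolen_key = open("settings.cfg", "rb").read()
ciphertext = open("credentials.db", "rb").read()
print(xor_cipher(ciphertext, stolen_key).decode())  # plaintext credential
```

Encryption at rest only helps against an attacker who can steal the database but not the key file — which is almost never the attacker these vulnerabilities produce.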
This is what attackers observed in the wild during CVE-2026-33017 exploitation. Sysdig’s threat research team documented compromised Langflow instances being used to exfiltrate cloud access keys, database credentials, and API tokens. The attacker doesn’t need to be patient — the credentials are there, organized neatly, waiting.
The Pre-Auth Window Is Measured in Hours
The speed of exploitation has collapsed.
CVE-2026-33017 advisory published: March 17, 2026. First exploitation in the wild: March 17-18, 2026. No public PoC existed at that point — attackers reverse-engineered working exploits from the advisory description alone and began scanning immediately.
CVE-2025-59528 (Flowise) disclosed with active exploitation already confirmed, sourced from Starlink infrastructure. By the time most security teams had read the advisory, the campaign was running.
n8n CVE-2026-21858 had a public proof-of-concept released by the researcher. Time from PoC to first exploitation attempt: measured in hours, not days.
This pattern isn’t specific to AI tooling — it’s the current state of vulnerability exploitation across the board. But it has a specific implication for AI workflow tools: the assumption that “we’ll patch it this sprint” is no longer a risk tolerance that organizations can carry. By the time the patch goes through change management, the window for exploitation has been open for days.
This Has Happened Before
The pattern of unauthenticated AI infrastructure getting compromised at scale isn’t new. It’s just accelerating.
In March 2024, Oligo Security disclosed ShadowRay — the first documented attack campaign targeting AI workloads. Attackers exploited CVE-2023-48022 in Ray, the distributed computing framework used for ML training and inference at scale. Ray’s Jobs API allowed unauthenticated remote code execution by design — the documentation explicitly stated the API was for trusted environments and should not be exposed to the internet.
Thousands of Ray clusters were compromised. Attackers ran cryptominers, stole secrets, and exfiltrated data from live AI workloads. ShadowRay 2.0, disclosed in November 2025, evolved the same vulnerability into a self-propagating GPU cryptomining botnet with over 230,000 exposed Ray servers targeted.
The architectural advisory from the Ray maintainers: “Security and isolation must be enforced outside of the Ray Cluster.”
The same philosophy is baked into every tool we’ve discussed. Authentication, network isolation, and access control are treated as deployment concerns rather than software features. The implicit contract is: “we’ll give you powerful capabilities; you figure out how to secure them.”
That contract broke the moment these tools escaped the laptop and landed in cloud VMs.
Jupyter Notebook has been running the same attack pattern for years — Qubitstrike, cryptomining campaigns, ransomware — all targeting unauthenticated Jupyter instances exposed to the internet. The attack surface never went away; organizations just started running more of these tools, and attackers followed.
MCP Is Expanding the Attack Surface
AI workflow tools were already a high-value target; now the Model Context Protocol is proliferating, and it’s making the problem significantly worse.
MCP servers are the new plugin architecture for AI agents. They expose tools and data sources to LLM-powered systems, and they’re being deployed at a pace that far outstrips security review. Trend Micro found 492 MCP servers exposed to the internet with zero authentication. Between January and February 2026, researchers filed over 30 CVEs against MCP servers — missing input validation, absent authentication, and blind trust in tool descriptions among the recurring flaws.
The Nginx UI CVE-2026-33032 vulnerability is a direct consequence of MCP integration bolted onto existing infrastructure tooling. The /mcp_message endpoint was added as a feature, the authentication model was borrowed from an existing IP allowlist mechanism, and the detail that an empty allowlist means “allow all” was a buried behavioral assumption. No patch exists at time of writing. Default configuration: exploitable.
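The buried assumption is easy to reproduce. The sketch below is illustrative logic, not the project’s actual code: the buggy check treats an unconfigured allowlist as “no restriction,” where the safe version defaults to deny until the operator makes an explicit decision.

```python
# Sketch of the empty-allowlist pitfall (illustrative, not Nginx UI's code).

def is_allowed_buggy(client_ip: str, allowlist: list[str]) -> bool:
    if not allowlist:           # nothing configured -> no check performed
        return True             # i.e., allow-all by default
    return client_ip in allowlist

def is_allowed_safe(client_ip: str, allowlist: list[str]) -> bool:
    # Default-deny: an empty allowlist admits nobody.
    return client_ip in allowlist
```

The two functions differ by one branch, and the buggy one passes every test that only exercises a populated allowlist — which is exactly why this class of bug ships.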
Azure MCP Server CVE-2026-32211 (CVSS 9.1): authentication entirely absent from the server, allowing unauthorized access to Azure DevOps data. Microsoft’s MCP server. This isn’t third-party tooling from an unknown developer — this is a major vendor shipping missing authentication as a feature.
The OWASP MCP Top 10, published in early 2026, lists missing authorization controls, tool poisoning, and insufficient sandboxing among the most critical risks. The spec itself makes authentication optional, with the documentation noting that “the MCP SDK does not include built-in authentication mechanisms.” Every MCP server implementation makes a choice about whether to implement authentication, and an alarming number are choosing not to.
What You Should Do Right Now
Assume you have at least one of these tools running somewhere in your environment. The audit starts there.
Enumerate your exposure. Run Shodan queries for Langflow, Flowise, n8n, ComfyUI, and any MCP-enabled tooling your organization uses. If you find them internet-exposed, treat it as an incident, not a remediation task. Check your cloud VMs, your Kubernetes clusters, your internal developer tooling. These tools frequently get deployed by application teams outside of the normal infrastructure provisioning process.
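As a starting point for an internal sweep, the sketch below builds candidate probe URLs from the common default ports and health-style endpoints of these tools. The ports and paths are assumptions based on typical defaults at the time of writing — verify them against your own deployments before relying on the results.

```python
# Candidate fingerprints for an internal exposure sweep. Ports and paths are
# assumed defaults, not authoritative -- confirm against your deployments.

FINGERPRINTS = {
    "langflow": {"port": 7860, "path": "/api/v1/version"},
    "flowise":  {"port": 3000, "path": "/api/v1/ping"},
    "n8n":      {"port": 5678, "path": "/rest/settings"},
    "comfyui":  {"port": 8188, "path": "/system_stats"},
}

def probe_urls(host: str) -> list[str]:
    """Build the candidate URLs to check for one internal host."""
    return [
        f"http://{host}:{fp['port']}{fp['path']}"
        for fp in FINGERPRINTS.values()
    ]

for url in probe_urls("10.0.0.12"):
    print(url)  # feed into curl/httpx across your internal address ranges
```

Any host that answers on one of these is a lead worth chasing; any that answers from a public IP is, per the above, an incident.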
Patch, and verify the patch. The Langflow situation — where 1.8.2 was widely reported as fixed but JFrog confirmed it remained exploitable — should be a forcing function for validation. Don’t check the CVE’s fixed-in version against your installed version and call it done. Verify the actual fix landed.
Enforce authentication at the ingress layer, not the application. Every one of these tools should be behind a reverse proxy (nginx, Caddy, Traefik) with an authentication layer in front of it. Basic auth for internal tooling is better than nothing. OAuth via your SSO is better. The application-level authentication is a backup, not the primary control — and given the track record, you cannot rely on it.
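A minimal sketch of the pattern, assuming an n8n instance bound to localhost on its default port 5678 and an htpasswd file you have already generated; hostnames, certificate paths, and file locations are examples to adapt:

```nginx
# Hypothetical nginx front end: TLS plus basic auth in front of an n8n
# instance bound to 127.0.0.1 only. All names and paths are placeholders.
server {
    listen 443 ssl;
    server_name n8n.internal.example.com;

    ssl_certificate     /etc/nginx/certs/n8n.crt;
    ssl_certificate_key /etc/nginx/certs/n8n.key;

    auth_basic           "restricted";
    auth_basic_user_file /etc/nginx/htpasswd;  # htpasswd -c /etc/nginx/htpasswd <user>

    location / {
        proxy_pass http://127.0.0.1:5678;        # app listens on loopback only
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;  # n8n's editor uses websockets
        proxy_set_header Connection "upgrade";
    }
}
```

The critical property is that the application port is never reachable except through the authenticated proxy — which is why the app must bind to loopback, not merely sit behind the proxy.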
Bind to localhost; access via VPN or SSH tunnel. There is no legitimate reason for a ComfyUI instance, an n8n workflow server, or a Langflow deployment to be directly accessible on a public IP. Bind to 127.0.0.1 and require engineers to tunnel in. This is a single configuration change that eliminates the entire internet-exposure attack surface.
Isolate credentials stored in these tools. Create dedicated service accounts, API keys, and database users for each workflow tool. Scope them to the minimum necessary permissions. Rotate them on a schedule. If you’re using a Langflow instance that holds admin-level database credentials, that’s not a Langflow problem — that’s an IAM problem with a Langflow vector.
Monitor for the IOCs. Unexpected outbound connections from AI tool servers. New cron jobs. LD_PRELOAD entries in /etc/ld.so.preload. Child processes spawned from Node.js or Python processes that don’t match expected workflow execution. XMRig, lolMiner, or Hysteria2 binaries on disk. These are the forensic artifacts of the current campaign playbook.
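A tiny triage helper for two of those checks. The inputs are injected so the logic stays testable — in practice you would feed it psutil output or a `ps` listing and the contents of /etc/ld.so.preload; the binary-name list is a sample, not exhaustive:

```python
# Minimal IOC triage helpers. KNOWN_BAD is a sample list drawn from the
# campaign payloads discussed above; extend it from current threat intel.

KNOWN_BAD = {"xmrig", "lolminer", "hysteria2", "hysteria"}

def suspicious_processes(process_names: list[str]) -> list[str]:
    """Return process names matching known miner/proxy binaries."""
    return [p for p in process_names if p.lower() in KNOWN_BAD]

def ld_preload_tampered(preload_contents: str) -> bool:
    # A stock system has an empty (or absent) /etc/ld.so.preload;
    # any entry at all deserves investigation.
    return bool(preload_contents.strip())
```

Neither check is proof of compromise on its own, but both are cheap enough to run continuously on every host in the AI tooling fleet.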
Audit custom nodes and plugins. ComfyUI-Manager, Flowise’s node marketplace, and equivalent plugin systems are trust boundaries you’re probably not treating as trust boundaries. Enumerate what’s installed. Verify it matches what your team deliberately installed. Remove anything unknown.
The Defaults Are Broken and Vendors Know It
The ComfyUI GitHub repository has had an open discussion proposing mandatory authentication by default since late 2025. It hasn’t shipped. Langflow’s incomplete patch sat in production as the “fixed version” long enough for JFrog to test and disclose it. MCP server implementations across the ecosystem are treating authentication as optional because the spec says they can.
The argument from tool developers is usually the same: these tools are designed for developer productivity, security configuration is a deployment concern, and adding mandatory auth would break existing workflows and increase the barrier to getting started. All of that is true, and none of it matters when 12,000 instances are internet-exposed and actively being exploited.
The tooling category needs to make authentication mandatory, provide secure defaults on first run, and treat exposed attack surfaces as bugs rather than configuration issues. Until that happens, the responsibility falls entirely on the teams deploying these tools — which means your infra and security teams, who need to treat AI workflow tooling as the high-privilege attack surface it is, not as a developer convenience that lives outside the normal security perimeter.
The attackers have figured this out. The automation is running, the Shodan queries are continuous, and the time from advisory to active exploitation is now measured in hours. The question isn’t whether these tools will be targeted in your environment. It’s whether you’ve reduced the blast radius before the scan finds yours.
Further Reading
- Sysdig: CVE-2026-33017 — How Attackers Compromised Langflow AI Pipelines in 20 Hours
- JFrog: Langflow 1.8.2 Was Not Fixed
- Cyera Research: Ni8mare — n8n CVE-2026-21858
- Censys: ComfyUI Cryptomining Botnet via The Hacker News
- Oligo: ShadowRay — The First Known Attack Targeting AI Workloads
- MCP Security 2026: 30 CVEs in 60 Days
- OWASP MCP Top 10
- Microsoft Security: Threat Actor Abuse of AI Accelerates