Dynamic Worker Loader: Lightweight Sandboxing for AI Code
Cloudflare has moved its experimental Dynamic Worker Loader API into open beta for all paid Workers users. This feature addresses a critical challenge in AI agent deployment: safely executing code generated on-the-fly by language models without exposing your application to injection attacks or malicious prompts.
The core problem is straightforward: if you want AI agents to write and execute code to call APIs, that code must run in an isolated environment. Container-based solutions are the industry standard, but they're expensive—taking hundreds of milliseconds to boot and consuming hundreds of megabytes of memory. This overhead makes consumer-scale AI deployments impractical.
How It Works
Dynamic Worker Loader leverages Cloudflare's existing isolate technology (instances of the V8 JavaScript engine) to create isolated execution environments on demand. Developers pass AI-generated code to the loader, which instantiates a new Worker in its own sandbox with configurable access to RPC APIs:
let worker = env.LOADER.load({
  compatibilityDate: "2026-03-01",
  mainModule: "agent.js",
  modules: { "agent.js": agentCode },
  env: { CHAT_ROOM: chatRoomRpcStub },
  globalOutbound: null, // Block internet access
});
await worker.getEntrypoint().myAgent(param);
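The `env` binding is the sandbox's only capability: the loaded Worker can call methods on `chatRoomRpcStub` and nothing else. As a rough sketch of that pattern (names and helper are hypothetical; in a real deployment the stub would be a Workers RPC stub rather than a plain object), the host might expose a deliberately narrow surface:

```typescript
// Hypothetical sketch: wrap a full-featured service in a stub that exposes
// only the methods AI-generated code is allowed to call.
interface ChatRoomService {
  sendMessage(text: string): string;
  deleteRoom(): void; // destructive; never handed to agents
}

const chatRoom: ChatRoomService = {
  sendMessage: (text) => `sent: ${text}`,
  deleteRoom: () => {
    throw new Error("not callable from agent code");
  },
};

// Only sendMessage crosses the sandbox boundary; deleteRoom simply
// does not exist from the agent's point of view.
const chatRoomRpcStub = {
  sendMessage: (text: string) => chatRoom.sendMessage(text),
};
```

Because the sandbox starts with no bindings and no network access (`globalOutbound: null`), everything the agent can do is enumerated by the stub you construct.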
Key Advantages
- Performance: Isolates start in milliseconds and consume only a few megabytes of memory—roughly 100x faster and 10-100x more memory efficient than containers
- Scalability: No limits on concurrent sandboxes or creation rates; a million simultaneous requests can each run in their own sandbox
- Zero added latency: loaded Workers typically run on the same machine as the requesting Worker, with support across all 200+ Cloudflare global locations
- Security: Isolates are sandboxed by design, blocking all outbound access by default
JavaScript and TypeScript Integration
Generated code must be JavaScript, a language designed from the start to run untrusted code in a sandbox and well suited to small snippets. TypeScript interfaces can describe the available APIs with minimal token overhead, making them far cheaper for an LLM to consume than an OpenAPI schema.
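To illustrate the idea (interface and function names here are hypothetical, not from Cloudflare's docs): a few lines of TypeScript can state the whole contract the model needs, and the generated agent code is just a function written against it.

```typescript
// Hypothetical sketch: the compact API contract handed to the LLM.
interface ChatRoom {
  listMembers(): Promise<string[]>;
  sendMessage(text: string): Promise<void>;
}

// What generated agent code might look like against that contract:
// fetch the member list, greet the room, report the head count.
async function greetEveryone(room: ChatRoom): Promise<number> {
  const members = await room.listMembers();
  await room.sendMessage(`Hello, all ${members.length} of you!`);
  return members.length;
}
```

The same two-method interface expressed as an OpenAPI document would run to dozens of lines of YAML, so the type-declaration form leaves far more of the context window for the actual task.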
The beta is open to all paid Workers customers, and full documentation is now available.