Anysphere, maker of the vibe coding tool Cursor, has introduced Composer, its first in-house, proprietary large language model (LLM) for coding, as part of its Cursor 2.0 platform update.
Composer is designed to perform coding tasks quickly and accurately in production-scale environments, marking the latest step in AI-assisted programming. It is already used by Cursor’s own engineering team in day-to-day development, a signal of its maturity and stability.
According to Cursor, Composer completes most interactions in under 30 seconds while maintaining a high level of reasoning ability across large, complex codebases.
The model is described as four times faster than similarly intelligent systems and is trained for “agentic” workflows, in which autonomous coding agents collaboratively plan, write, test, and review code.
Previously, Cursor supported “vibe coding” – using artificial intelligence to write or complete code from natural-language instructions, even for users untrained in programming – through leading proprietary LLMs from companies such as OpenAI, Anthropic, Google, and xAI. Those options remain available to users.
Benchmark results
Composer’s capabilities were tested with “Cursor Bench,” an internal evaluation suite built from real agent requests made by developers. The benchmark measures not only correctness but also the model’s adherence to existing abstractions, stylistic conventions, and engineering practices.
In this test, Composer delivers strong coding intelligence while generating roughly 250 tokens per second – about twice as fast as leading fast-inference models and four times faster than comparably intelligent frontier systems.
Cursor’s published benchmark results group models into several classes: “Best Open” (e.g., Qwen Coder, GLM 4.6), “Fast Frontier” (Haiku 4.5, Gemini Flash 2.5), “Frontier 7/2025” (the strongest model available mid-year), and “Best Frontier” (including GPT-5 and Claude Sonnet 4.5). Composer matches the intelligence of mid-frontier systems while delivering the highest recorded generation speed of any class tested.
A model built on reinforcement learning and a mixture-of-experts architecture
Cursor researcher Sasha Rush offered insight into the model’s development in posts on the social network X, describing Composer as a mixture-of-experts (MoE) model trained with reinforcement learning (RL):
“We used RL to train a large MoE model to be really good at real-world coding and also very fast.”
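Cursor has not published Composer’s architecture details, but the routing idea behind any mixture-of-experts model can be sketched in a few lines of NumPy: a small gating network scores the experts for each token, and only the top-k experts actually run, which is what makes MoE models fast relative to their total parameter count. All names, shapes, and the top-k value below are illustrative assumptions, not Cursor’s design.

```python
import numpy as np

def moe_forward(x, gate_w, expert_ws, k=2):
    """Route each token to its top-k experts and mix their outputs.

    x         : (tokens, d) input activations
    gate_w    : (d, n_experts) router weights
    expert_ws : list of (d, d) weight matrices, one per expert
    """
    logits = x @ gate_w                          # (tokens, n_experts) router scores
    topk = np.argsort(logits, axis=-1)[:, -k:]   # indices of each token's top-k experts
    sel = np.take_along_axis(logits, topk, axis=-1)
    # softmax over only the selected experts' logits
    weights = np.exp(sel - sel.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)
    out = np.zeros_like(x)
    for token in range(x.shape[0]):              # only k experts run per token
        for slot in range(k):
            e = topk[token, slot]
            out[token] += weights[token, slot] * (x[token] @ expert_ws[e])
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 8, 4, 3
y = moe_forward(rng.normal(size=(tokens, d)),
                rng.normal(size=(d, n_experts)),
                [rng.normal(size=(d, d)) for _ in range(n_experts)])
print(y.shape)  # (3, 8)
```

A production MoE implementation batches tokens per expert instead of looping, but the routing logic is the same.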
Rush explained that the team co-created the Composer and Cursor environments to enable the model to run efficiently at production scale:
“Unlike other ML systems, you can’t abstract too much from a full-scale system. We designed this project and Cursor together to enable the agent to run at the scale necessary.”
Composer was trained on real-world software engineering tasks, not static datasets. During training, the model ran in full codebases, using a set of production tools – including file editing, semantic search, and terminal commands – to solve complex engineering problems. Each training iteration involved a concrete challenge, such as editing code, drafting a plan, or producing a focused explanation.
The reinforcement loop optimized for both correctness and efficiency. Composer learned to make effective tool choices, exploit parallelism, and avoid unnecessary or speculative responses. Over time, the model developed emergent behaviors such as running unit tests, fixing linter errors, and autonomously performing multi-step code searches.
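The tool-use pattern described here – the model emits an action, a harness executes the matching tool and feeds the result back – can be illustrated with a minimal dispatcher. The tool names and action format below are hypothetical stand-ins; Cursor’s actual agent protocol is not public.

```python
import subprocess, tempfile, os

# Hypothetical tool registry mirroring the kinds of production tools the
# article mentions (file editing, terminal access); names are illustrative.
def edit_file(path, text):
    with open(path, "w") as f:
        f.write(text)
    return f"wrote {len(text)} chars to {path}"

def terminal(cmd):
    # run a shell command and capture its output, like an agent's sandboxed shell
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return out.stdout.strip()

TOOLS = {"edit_file": edit_file, "terminal": terminal}

def run_agent_step(action):
    """Dispatch one model-chosen action dict to the matching tool."""
    return TOOLS[action["tool"]](*action["args"])

# Example: the "model" decides to write a plan file, then inspect it via the shell.
path = os.path.join(tempfile.mkdtemp(), "plan.txt")
print(run_agent_step({"tool": "edit_file", "args": [path, "fix linter errors\n"]}))
print(run_agent_step({"tool": "terminal", "args": [f"cat {path}"]}))  # fix linter errors
```

In RL training, each such tool interaction becomes part of a trajectory that is scored for correctness and efficiency, which is how behaviors like “run the unit tests before finishing” can emerge.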
This design allows Composer to work in the same runtime context as the end user, making it better suited to real-world coding environments: it supports version control, dependency management, and iterative testing.
From prototype to production
Composer grew out of an earlier internal prototype known as Cheetah, which Cursor used to explore low-latency inference for coding tasks.
“Cheetah was version 0 of this model, primarily for speed testing,” Rush wrote on X. “Our metrics say that [Composer] has the same speed, but it’s much, much smarter.”
Cheetah’s success at reducing latency helped Cursor identify speed as a key factor in developer confidence and usability.
Composer maintains this responsiveness while significantly improving task reasoning and generalization.
Developers who used Cheetah during early testing noted that its speed changed the way they worked. One user commented that it is “so fast that I can stay in the loop while working with it.”
Composer retains that speed while extending its capabilities to multi-step coding, refactoring, and testing tasks.
Integration with Cursor 2.0
Composer is fully integrated with Cursor 2.0, a major update to the company’s agent-based development environment.
The platform introduces a multi-agent interface that allows up to eight agents to run in parallel, each in an isolated workspace backed by git worktrees or remote machines.
In this setup, Composer can act as one or more of those agents, performing tasks independently or collaboratively. Developers can compare the results of concurrent agent runs and select the best one.
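The isolation mechanism mentioned above, git worktrees, gives each agent its own working directory and branch while sharing a single underlying repository. A minimal sketch of how a harness might provision such workspaces (repository setup, paths, and branch names are all illustrative):

```python
import subprocess, tempfile, os

def sh(args, cwd):
    # run a git command quietly, raising if it fails
    subprocess.run(args, cwd=cwd, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

# make a throwaway repo with one commit so branches have a starting point
repo = tempfile.mkdtemp()
sh(["git", "init", "-b", "main"], repo)
sh(["git", "-c", "user.email=agent@example.com", "-c", "user.name=agent",
    "commit", "--allow-empty", "-m", "init"], repo)

# give each "agent" its own branch and working directory
trees = tempfile.mkdtemp()
workspaces = []
for i in range(3):  # three parallel agents, for illustration
    path = os.path.join(trees, f"agent-{i}")
    sh(["git", "worktree", "add", "-b", f"agent-{i}", path], repo)
    workspaces.append(path)

print([os.path.basename(w) for w in workspaces])  # ['agent-0', 'agent-1', 'agent-2']
```

Because worktrees share one object store, agents can edit, commit, and test concurrently without copying the repository or clobbering each other’s files.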
Cursor 2.0 also includes helper functions that increase Composer’s efficiency:
- Browser in Editor (GA) – lets agents run and test code directly in the IDE, passing DOM information to the model.
- Improved code review – aggregates diffs across multiple files for faster review of model-generated changes.
- Sandboxed terminals (GA) – isolates agent-run shell commands for secure local execution.
- Voice mode – adds speech-to-text control for starting and managing agent sessions.
While these platform updates expand Cursor’s overall capabilities, Composer is positioned as the technical core enabling fast and reliable agent-based coding.
Infrastructure and training systems
To train Composer at scale, Cursor built a custom reinforcement learning infrastructure combining PyTorch and Ray for asynchronous training on hundreds of NVIDIA GPUs.
The team developed specialized MXFP8 MoE kernels and hybrid sharded data parallelism, enabling large-scale model updates with minimal communication overhead.
This setup lets Cursor train low-precision models natively, without post-training quantization, improving both inference speed and efficiency.
Composer’s training relied on hundreds of thousands of concurrent sandboxed environments – each a self-contained coding workspace – running in the cloud. The company adapted its background-agent infrastructure to schedule these virtual machines on the fly, supporting the dynamic nature of large RL runs.
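Cursor’s actual stack pairs PyTorch with Ray across hundreds of GPUs, but the key scheduling idea – a learner consuming rollout results as each sandbox finishes, rather than blocking on the slowest one – can be mimicked with Python’s standard library. Everything below is a simplified stand-in, not Cursor’s code.

```python
import concurrent.futures, random, time

def rollout(env_id):
    """Simulate one sandboxed coding task with a variable completion time."""
    time.sleep(random.uniform(0.01, 0.05))
    return env_id, random.random()  # (which env finished, episode reward)

completed = []
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(rollout, i) for i in range(16)]
    # asynchronous consumption: process results in completion order,
    # so fast rollouts feed the learner while slow ones are still running
    for fut in concurrent.futures.as_completed(futures):
        env_id, reward = fut.result()
        completed.append(env_id)  # a real learner would apply a gradient update here

print(len(completed))  # 16
```

At Cursor’s scale the same pattern would use Ray actors and remote GPUs instead of threads, but the asynchronous consume-as-completed structure is what keeps hardware utilization high when episode lengths vary widely.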
Enterprise applications
Composer performance improvements are supported by infrastructure-level changes to Cursor’s code evaluation stack.
The company optimized its Language Server Protocol (LSP) integrations for faster diagnostics and navigation, especially in Python and TypeScript projects. These changes reduce latency when Composer interacts with large repositories or generates multi-file updates.
Enterprise users gain administrative control over Composer and other agents through team rules, audit logs, and sandbox enforcement. Cursor’s Teams and Enterprise tiers also support pooled model usage, SAML/OIDC authentication, and analytics for monitoring agent performance across an organization.
Consumer pricing ranges from Free (Hobby) to Ultra ($200 per month), with extended usage limits for Pro+ and Ultra subscribers.
Enterprise pricing starts at $40 per user per month for Teams, and enterprise contracts offer custom usage and compliance options.
Composer’s role in the evolving AI coding landscape
Composer’s focus on speed, reinforcement learning, and integration with live coding workflows sets it apart from other AI development assistants such as GitHub Copilot or Replit Agent.
Rather than serving as a passive suggestion engine, Composer is designed for continuous agent-based collaboration, where multiple autonomous systems interact directly with a project’s codebase.
This model-level specialization – training the AI in the same kind of environment it will actually operate in – represents a significant step toward practical, autonomous software development. Composer was trained not just on text data or static code, but inside a dynamic IDE that reflects production conditions.
Rush described this approach as essential to real-world reliability: the model learns not only how to generate code, but also how to integrate, test, and improve it in context.
What this means for enterprise developers and vibe coding
With Composer, Cursor delivers more than a fast model – it ships an AI system optimized for real-world use, built to run inside the same tools developers already rely on.
The combination of reinforcement learning, mixture-of-experts design, and tight product integration gives Composer a practical edge in speed and responsiveness that sets it apart from general-purpose language models.
While Cursor 2.0 provides the infrastructure for multi-agent collaboration, Composer is the core innovation that makes these workflows viable.
It is the first coding model built specifically for production-grade agentic coding – and an early glimpse of what everyday programming could look like when developers and autonomous models share the same workspace.
