Cursor Launches Composer, In-House LLM with Fourfold Speed Increase

Cursor, a coding platform developed by startup Anysphere, has unveiled Composer, its first proprietary large language model (LLM), as part of its Cursor 2.0 platform update. The model is engineered to perform coding tasks both swiftly and accurately in production environments, marking a significant advance in AI-assisted programming. Cursor's engineering team already uses Composer in its daily development work, which the company cites as evidence of the model's robustness and reliability.

According to Cursor, Composer achieves most coding interactions in under 30 seconds and exhibits a strong reasoning capability across extensive and intricate codebases. The model is reported to be four times faster than comparable systems while being specifically trained for “agentic” workflows, where autonomous coding agents collaboratively plan, write, test, and review code.

Previously, Cursor facilitated “vibe coding,” which utilized AI to draft or complete code based on natural language prompts from users, including those without programming training. This feature leveraged existing proprietary LLMs from companies such as OpenAI, Anthropic, Google, and xAI, which remain accessible to users.

Benchmarking Composer's Performance

The capabilities of Composer are assessed through “Cursor Bench,” an internal evaluation framework based on actual developer agent requests. This benchmark evaluates not only accuracy but also the model's compliance with established coding standards, style guidelines, and engineering protocols. Composer is noted for achieving top-tier coding intelligence, generating output at a rate of 250 tokens per second, approximately twice as fast as leading fast-inference models and four times quicker than similar advanced systems.
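To make the throughput figures concrete, a quick back-of-the-envelope calculation shows how generation speed translates into wall-clock latency. The 250 tokens-per-second rate is Cursor's reported number; the response length and the slower comparison speed below are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope latency at different generation speeds.
# 250 tok/s is Cursor's reported figure for Composer; the response
# length and the 4x-slower comparison speed are assumptions.

def response_time(tokens: int, tokens_per_second: float) -> float:
    """Seconds needed to generate `tokens` at a given throughput."""
    return tokens / tokens_per_second

RESPONSE_TOKENS = 2_000  # hypothetical mid-sized agent response

composer = response_time(RESPONSE_TOKENS, 250)    # reported Composer speed
slower = response_time(RESPONSE_TOKENS, 250 / 4)  # a "4x slower" system

print(f"Composer: {composer:.0f} s, 4x-slower model: {slower:.0f} s")
# At 250 tok/s, a 2,000-token response takes 8 s -- comfortably within
# the "most interactions under 30 seconds" claim; the 4x-slower system
# would need 32 s for the same response.
```

This also shows why the under-30-second claim and the 250 tok/s figure are mutually consistent for typical response lengths.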

In its published comparisons, Cursor categorizes models into several groups: “Best Open” (such as Qwen Coder and GLM 4.6), “Fast Frontier” (including Haiku 4.5 and Gemini Flash 2.5), “Frontier 7/2025” (the best frontier model available as of July 2025), and “Best Frontier” (which encompasses GPT-5 and Claude Sonnet 4.5). Composer matches the intelligence of mid-frontier systems while delivering the highest generation speed recorded across all tested categories.

Innovative Development Techniques

Research scientist Sasha Rush of Cursor shared insights into the model's creation via posts on the social network X, characterizing Composer as a reinforcement-learned (RL) mixture-of-experts (MoE) model. “We used RL to train a large MoE model to excel in real-world coding, along with achieving high speed,” Rush explained.

The development team worked to ensure that both Composer and the Cursor environment were co-designed to enhance operational efficiency at production scale. “Unlike other ML systems, you cannot abstract much from the full-scale system,” Rush noted. They designed the project and environment together to facilitate the necessary wide-scale operation of the agent.

Composer was trained using genuine software engineering tasks rather than static datasets. Throughout the training, the model engaged with complete codebases, employing a variety of production tools—including file editing, semantic search, and terminal commands—to tackle complex engineering challenges.

Each training cycle required the model to address specific tasks, such as implementing a code revision, formulating a plan, or providing a targeted explanation. The reinforcement loop was optimized for both accuracy and efficiency, allowing Composer to learn effective tool selection, utilize parallel processing, and avoid unnecessary responses. Over time, the model developed the ability to autonomously run unit tests, correct linter errors, and conduct multi-step code searches.
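Cursor has not published its reward function, but a reinforcement loop that optimizes jointly for accuracy and efficiency, as described above, can be sketched in minimal form. Every name, field, and weight below is a hypothetical illustration, not Cursor's actual design.

```python
from dataclasses import dataclass

@dataclass
class EpisodeResult:
    """Outcome of one hypothetical training rollout: the agent attempts
    a task (e.g. a code revision) inside a sandboxed workspace."""
    tests_passed: int      # unit tests passing after the agent's edit
    tests_total: int
    tool_calls: int        # number of tool invocations the agent used
    tokens_generated: int  # length of the agent's output

def reward(ep: EpisodeResult,
           tool_cost: float = 0.01,
           token_cost: float = 0.0001) -> float:
    """Illustrative reward: correctness minus small penalties that push
    the policy toward efficient tool use and terse output."""
    correctness = ep.tests_passed / max(ep.tests_total, 1)
    return (correctness
            - tool_cost * ep.tool_calls
            - token_cost * ep.tokens_generated)

# An episode that passes all tests with few tool calls scores near 1.0;
# a verbose, tool-heavy episode with identical test results scores lower,
# so the policy learns to avoid unnecessary responses.
efficient = reward(EpisodeResult(10, 10, tool_calls=5, tokens_generated=800))
wasteful = reward(EpisodeResult(10, 10, tool_calls=40, tokens_generated=5000))
print(efficient > wasteful)  # True
```

Under a reward of this shape, behaviors like running unit tests or fixing linter errors are learned only insofar as they raise the correctness term by more than their tool-use cost.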

From Prototype to Advanced Production Model

The development of Composer followed an earlier internal prototype named Cheetah, which was utilized by Cursor to investigate low-latency inference for coding tasks. “Cheetah was the v0 of this model primarily to test speed,” Rush stated on X. “Our metrics indicate that Composer matches Cheetah's speed but is much more intelligent.” The successful reduction of latency with Cheetah underscored the importance of speed in fostering developer trust and usability.

Developers who used Cheetah during its testing phase reported that its rapid response transformed their workflow, with one user noting that it was “so fast that I can stay in the loop when working with it.” Composer maintains that speed while significantly enhancing reasoning and task generalization, extending its capabilities to multi-step coding, refactoring, and testing tasks.

Integration with Cursor 2.0

Composer is fully embedded within Cursor 2.0, a substantial update to the company's agentic development environment. The platform features a multi-agent interface, enabling the operation of up to eight agents simultaneously, each within its isolated workspace using git worktrees or remote machines. Within this framework, Composer can act as one or more agents, performing tasks either independently or in collaboration with others. Developers can evaluate multiple outputs from concurrent agent executions to choose the most effective results.
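The article does not describe Cursor's internal implementation, but the basic mechanism of isolating each agent in its own git worktree can be sketched with plain git commands driven from Python. The function name, branch naming, and directory layout here are hypothetical.

```python
import os
import subprocess
import tempfile

def create_agent_worktree(repo: str, agent_id: int) -> str:
    """Give one agent an isolated working copy of the repository by
    creating a dedicated branch plus worktree (hypothetical layout).
    Each agent can then edit files without touching the others' state."""
    branch = f"agent-{agent_id}"
    # Fresh parent directory so the worktree target path does not exist yet.
    path = os.path.join(tempfile.mkdtemp(prefix="agent-workspaces-"), branch)
    subprocess.run(
        ["git", "-C", repo, "worktree", "add", "-b", branch, path],
        check=True,
    )
    return path

# Up to eight agents, each editing its own checkout; their diffs can
# later be compared and the preferred result merged back:
# workspaces = [create_agent_worktree("/path/to/repo", i) for i in range(8)]
```

Because worktrees share one object store, this gives per-agent isolation without duplicating the repository's history on disk, which is presumably why the approach suits many concurrent agents.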

Cursor 2.0 also introduces enhanced features to optimize Composer's performance: an In-Editor Browser (GA) that allows agents to run and test their code directly in the IDE, an enhanced code review feature that compiles diffs across multiple files for quicker assessments of model-generated changes, sandboxed terminals (GA) that secure agent-run shell commands for local execution, and a Voice Mode that incorporates speech-to-text controls for initiating or managing agent sessions.

While these platform enhancements enrich the overall Cursor experience, Composer is positioned as the technical core that facilitates rapid, reliable agentic coding.

To train Composer on a large scale, Cursor developed a custom reinforcement learning infrastructure that integrates PyTorch and Ray for asynchronous training across thousands of NVIDIA GPUs. The team created specialized MXFP8 MoE kernels and hybrid sharded data parallelism, which enable extensive model updates with minimal communication overhead. This architecture allows Cursor to train models at low precision without the need for post-training quantization, enhancing both inference speed and efficiency.
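Cursor has not released Composer's architecture, but the core idea of any mixture-of-experts layer, which the passage above references, is that a gate routes each token to a small subset of expert networks, so only a fraction of the parameters do work per token. A minimal NumPy sketch (dimensions, expert count, and the linear "experts" are all illustrative):

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Minimal top-k mixture-of-experts layer (illustrative, not
    Composer's): a gate scores every expert per token, only the k
    best-scoring experts run, and their outputs are combined with
    softmax weights. Running k of n experts per token is what gives
    MoE models their speed at a given parameter count."""
    scores = x @ gate_w                        # (tokens, n_experts)
    topk = np.argsort(scores, axis=1)[:, -k:]  # indices of k best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = scores[t, topk[t]]
        weights = np.exp(sel - sel.max())
        weights /= weights.sum()               # softmax over selected experts
        for w, e in zip(weights, topk[t]):
            out[t] += w * experts[e](x[t])
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 8, 4, 3
gate_w = rng.normal(size=(d, n_experts))
# Each "expert" here is just a fixed linear map, for illustration.
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda v, W=W: v @ W for W in expert_ws]

y = moe_forward(rng.normal(size=(tokens, d)), gate_w, experts, k=2)
print(y.shape)  # (3, 8)
```

A production MoE runs this routing as batched sparse matrix work on GPUs (hence the custom MXFP8 kernels mentioned above), but the routing logic is the same.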

Composer's training utilized hundreds of thousands of concurrent sandboxed environments, each functioning as a self-contained coding workspace in the cloud. The company adapted its Background Agents infrastructure to dynamically schedule these virtual machines, accommodating the bursty nature of large RL operations.

Enterprise Applications and Future Outlook

The performance enhancements of Composer are supported by infrastructure-level changes across Cursor's code intelligence stack. The company has optimized its language server protocol (LSP) integrations to deliver faster diagnostics and navigation, particularly for Python and TypeScript projects. These changes significantly reduce latency when Composer works with extensive repositories or generates updates across multiple files.

Enterprise users will have administrative control over Composer and other agents through team rules, audit logs, and sandbox enforcement. Cursor's Teams and Enterprise tiers further facilitate pooled model usage, SAML/OIDC authentication, and analytics for tracking agent performance across organizations. Individual user pricing ranges from a free Hobby tier to an Ultra tier costing $200 per month, with expanded usage limits available for Pro+ and Ultra subscribers. Business pricing begins at $40 per user per month for Teams, while enterprise contracts can offer customized usage and compliance options.

Composer's emphasis on speed, reinforcement learning, and seamless integration with active coding workflows distinguishes it from other AI programming assistants such as GitHub Copilot or Replit's Agent. Unlike traditional tools that function as passive suggestion engines, Composer is engineered for ongoing, agent-driven collaboration, where multiple autonomous entities interact directly with a project's codebase.

This specialized approach—training AI to operate within the actual environment it will serve—signifies a notable advancement toward practical, autonomous software development. Composer is not solely trained on text data or static code but within a dynamic integrated development environment that accurately reflects production conditions. Rush emphasized that this methodology is crucial for achieving real-world reliability: the model acquires not only the ability to generate code but also the skills to integrate, test, and enhance it contextually.

With Composer, Cursor is not merely introducing a rapid model; it is implementing an AI system tailored for real-world applications, designed to function within the existing tools that developers depend on. The combination of reinforcement learning, mixture-of-experts design, and tight product integration provides Composer with a practical advantage in speed and responsiveness, distinguishing it from general-purpose language models. While Cursor 2.0 lays the groundwork for multi-agent collaboration, Composer stands as the core innovation that makes these workflows feasible. It represents the first coding model specifically designed for agentic, production-level coding and offers an early insight into the potential future of programming as human developers and autonomous models coexist within the same workspace.