Introduction
Welcome to Magma - The One You’ve Been Waiting For
Another Agent Framework? Really?
We at Pompeii Labs have built enough agents to know the proper “framework-to-flexibility” ratio. The biggest complaint we hear, and have experienced ourselves, is that existing frameworks are too opinionated, too complex, and layered with too many abstractions.
It doesn’t have to be that hard.
Magma was born out of necessity: we were tasked with bringing agents into production at a rapid pace. Not only did other frameworks slow development, but once an agent was built we still had to tackle the biggest hurdle of all: infrastructure.
We are huge fans of how Vercel made it insanely easy to both develop and host web applications. Our mission is to do the same for AI and agent development by eliminating the infrastructure burden that traditionally consumes developer resources. Magma provides a zero-friction path from concept to production-ready AI agents.
Quickstart
Get your first Magma agent running in under 5 minutes
Magma Basics
Learn more about the different features and capabilities of Magma
Why Magma?
With Magma, developers can:
- Focus entirely on agent logic while our infrastructure handles deployment, scaling, and real-time communication
- Deploy agents instantly with unique endpoints and built-in WebSocket support
- Leverage TypeScript-first development with robust state management and debugging tools
- Access a growing ecosystem of plugins and integrations
- Scale seamlessly from prototype to production with enterprise-grade security
The Magma framework combines these capabilities with an intuitive development experience, allowing teams to build sophisticated AI agents without wrestling with infrastructure complexity.
This means faster development cycles, reduced technical debt, and the ability to innovate at the speed of imagination rather than the speed of DevOps.
Additional key features include:
- Type Safety: Built with TypeScript for robust development
- Easy Integration: Simple to integrate with existing systems
- Flexible Architecture: Support for multiple LLM providers including OpenAI, Anthropic, Groq, and Google
- Multi-Tenancy: Each user has their own instance of your agent. You build the logic once, and it’s ready for everyone to use
- Production Ready: Built for scalable, production deployments
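To make the flexible-architecture and multi-tenancy points concrete, here is a minimal, self-contained sketch of the underlying idea: agent logic written once against a provider interface, so the model behind it can be swapped freely. This is not Magma's actual API; the `LLMProvider` interface, `EchoProvider` stub, and `SimpleAgent` class below are hypothetical stand-ins for illustration only.

```typescript
// Hypothetical provider interface: each LLM vendor (OpenAI, Anthropic,
// Groq, Google) would sit behind an adapter implementing this shape.
interface LLMProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Stub provider that echoes the prompt, standing in for a real adapter.
class EchoProvider implements LLMProvider {
  constructor(public name: string) {}
  async complete(prompt: string): Promise<string> {
    return `[${this.name}] ${prompt}`;
  }
}

// Agent logic is written once against the interface, so switching
// providers (or spinning up one instance per user) is a one-line change.
class SimpleAgent {
  constructor(private provider: LLMProvider) {}
  async run(input: string): Promise<string> {
    return this.provider.complete(input);
  }
}

async function main() {
  const agent = new SimpleAgent(new EchoProvider("openai"));
  console.log(await agent.run("hello"));
}
main();
```

The design choice this sketches is the one the bullets above describe: because the agent depends only on an abstraction, the same logic serves every user and every supported provider without modification.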
Getting Started
For a quick start, check out the Quickstart guide. When you’re ready to dive deeper, explore the Development Guide to learn how to develop and test your agents locally.