What's the API built with?
The core API is FastAPI (Python) with a background worker on Fly.io. Database is PostgreSQL on Supabase with SQLAlchemy ORM and Alembic migrations.
What about the frontend?
The marketing site and dashboard are Next.js 15 with React 19, deployed on Vercel. The conversational interface is a separate Next.js app using the Vercel AI SDK.
How does the simulation engine work?
The simulation engine takes a population model (built from real survey data), a question, and a set of answer options. It uses large language models to predict how the population would distribute its answers across the options. The engine is designed for consistency: the same question returns similar distributions across runs.

We run our own inference infrastructure alongside the major model providers. This gives us control over latency and cost, and the ability to serve fine-tuned models that commercial APIs don't support.
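One way to picture the consistency property: repeated samples are aggregated into a distribution, and seeding the process makes the same inputs reproduce the same output. This is a stdlib-only sketch under assumed mechanics, with `sample_answer` standing in for the actual model call.

```python
# Illustrative sketch: aggregate repeated samples into an answer
# distribution. `sample_answer` is a stub for a language-model call.
import random
from collections import Counter

def sample_answer(question: str, options: list[str], rng: random.Random) -> str:
    # Stub: a real engine would condition a language model on the
    # population model; here we draw from fixed weights.
    weights = [0.6, 0.3, 0.1][: len(options)]
    return rng.choices(options, weights=weights, k=1)[0]

def predict_distribution(question, options, n_samples=1000, seed=0):
    rng = random.Random(seed)  # fixed seed -> repeatable distributions
    counts = Counter(sample_answer(question, options, rng) for _ in range(n_samples))
    return {opt: counts[opt] / n_samples for opt in options}
```

Running `predict_distribution` twice with the same seed yields identical distributions, which is the kind of run-to-run stability described above.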
What's the MCP server?
A FastMCP (Python) server that exposes Semilattice predictions as tools for AI assistants. Runs on Fly.io. Any MCP-compatible client (Claude, ChatGPT, Cursor, custom agents) can make predictions mid-conversation. Learn more →
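To show the shape of the contract, here is a stdlib-only sketch of the kind of tool an MCP server advertises to clients: a name, a JSON schema for inputs, and a handler. The tool name, schema fields, and handler are hypothetical; the real server is built with FastMCP and calls the Semilattice API.

```python
# Illustrative, stdlib-only sketch of an MCP-style tool contract.
# Names and schema are hypothetical, not the actual Semilattice tools.
import json

TOOL_SPEC = {
    "name": "predict_answers",
    "description": "Predict how a population would answer a question.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "population_id": {"type": "string"},
            "question": {"type": "string"},
            "answer_options": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["population_id", "question", "answer_options"],
    },
}

def handle_tool_call(arguments: dict) -> str:
    # A real handler would call the prediction API; this stub returns
    # a uniform placeholder distribution as JSON text.
    opts = arguments["answer_options"]
    return json.dumps({opt: 1 / len(opts) for opt in opts})
```

An MCP client discovers the tool from its spec, then sends arguments matching the schema and receives the prediction as the tool result.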
How are population models built?
You upload a CSV of survey responses (questions, answer options, response distributions). Semilattice processes this into a population model that can generalise to new questions. The model is automatically tested using leave-one-out cross-validation to estimate accuracy. See requirements →
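The leave-one-out step can be sketched as follows: each survey question is held out in turn, predicted from the remaining questions, and scored against its real distribution. `fit_and_predict` is a stand-in for the actual modelling step, and total variation distance is one assumed choice of error metric.

```python
# Hedged sketch of leave-one-out cross-validation over survey questions.
# `fit_and_predict` is a stub; the real modelling step is more involved.

def fit_and_predict(train, held_out_question):
    # Stub: predict the average distribution of the training questions
    # (assumes all questions share the same answer options).
    options = sorted(train[0]["distribution"])
    return {o: sum(q["distribution"][o] for q in train) / len(train) for o in options}

def total_variation(p, q):
    # Distance between two distributions over the same options, in [0, 1].
    return 0.5 * sum(abs(p[k] - q[k]) for k in p)

def leave_one_out_error(survey):
    errors = []
    for i, held in enumerate(survey):
        train = survey[:i] + survey[i + 1:]
        pred = fit_and_predict(train, held["question"])
        errors.append(total_variation(pred, held["distribution"]))
    return sum(errors) / len(errors)
```

The mean held-out error gives an accuracy estimate without needing any survey data beyond what was uploaded.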
What infrastructure do you use?
AWS (eu-west-2) for core infrastructure, Vercel for web deployment, Fly.io for workers and the MCP server, Supabase for database and auth. All EU-hosted.