The core API is FastAPI (Python) with a background worker on Fly.io. Database is PostgreSQL on Supabase with SQLAlchemy ORM and Alembic migrations.
The marketing site and dashboard are Next.js 15 with React 19, deployed on Vercel. The conversational interface is a separate Next.js app using the Vercel AI SDK.
The simulation engine takes a population model (built from real survey data), a question, and answer options. It uses large language models to predict how the population would distribute their answers across the options. The engine is designed for consistency: the same question returns similar distributions across runs.

We run our own inference infrastructure alongside the major model providers. This gives us control over latency and cost, and the ability to serve fine-tuned models that commercial APIs don't support.
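The engine's output shape can be illustrated with a minimal sketch. The function name and the idea of aggregating per-respondent answers are assumptions for illustration, not Semilattice's actual API; in practice the per-respondent answers come from LLM inference over the population model.

```python
from collections import Counter

def predict_distribution(simulated_answers):
    """Aggregate simulated respondent answers into an answer distribution.

    `simulated_answers` stands in for the per-persona answers the LLMs
    would produce; the name and signature are illustrative only.
    """
    counts = Counter(simulated_answers)
    total = sum(counts.values())
    return {option: n / total for option, n in counts.items()}

dist = predict_distribution(["Yes", "Yes", "No", "Yes"])
# dist == {"Yes": 0.75, "No": 0.25}
```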
A FastMCP (Python) server that exposes Semilattice predictions as tools for AI assistants. Runs on Fly.io. Any MCP-compatible client (Claude, ChatGPT, Cursor, custom agents) can make predictions mid-conversation. Learn more →
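For a sense of what "mid-conversation" means at the wire level, here is the shape of the JSON-RPC `tools/call` request an MCP-compatible client sends to a tool server. The `tools/call` method is part of the MCP specification; the tool name `predict_answers` and its arguments are hypothetical, not Semilattice's actual tool schema.

```python
import json

# Illustrative MCP tool-call request; tool name and arguments are
# hypothetical stand-ins for the server's real tool schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "predict_answers",
        "arguments": {
            "question": "Would you switch to an electric vehicle this year?",
            "options": ["Yes", "No", "Unsure"],
        },
    },
}
payload = json.dumps(request)
```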
You upload a CSV of survey responses (questions, answer options, response distributions). Semilattice processes this into a population model that can generalise to new questions. The model is automatically tested using leave-one-out cross-validation to estimate accuracy. See requirements →
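The leave-one-out test described above can be sketched as follows: hold out each question in turn, build the model on the rest, predict the held-out distribution, and score the prediction. The function names and the error metric (mean absolute error per option) are illustrative assumptions, not the exact accuracy metric Semilattice reports.

```python
def leave_one_out_error(questions, predict):
    """Estimate model error via leave-one-out cross-validation.

    `questions` is a list of dicts with "question", "options", and the
    known "distribution"; `predict` is a stand-in for building a
    population model on the training split and predicting the held-out
    question. Both names are illustrative.
    """
    errors = []
    for i, held_out in enumerate(questions):
        training = questions[:i] + questions[i + 1:]
        predicted = predict(training, held_out["question"], held_out["options"])
        actual = held_out["distribution"]
        # Mean absolute error between predicted and actual option shares.
        mae = sum(
            abs(predicted[o] - actual[o]) for o in held_out["options"]
        ) / len(held_out["options"])
        errors.append(mae)
    return sum(errors) / len(errors)
```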
AWS (eu-west-2) for core infrastructure, Vercel for web deployment, Fly.io for workers and the MCP server, Supabase for database and auth. All EU-hosted.