The core API is FastAPI (Python) with a background worker on Fly.io. Database is PostgreSQL on Supabase with SQLAlchemy ORM and Alembic migrations.
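The API-plus-background-worker split can be sketched with a stdlib queue. Everything here is a hypothetical stand-in (`PredictionJob`, `enqueue`, `worker_loop` are illustrative names, not Semilattice's actual FastAPI routes or Fly.io worker code): the API process validates and enqueues work, and a separate worker drains the queue to run the slow step.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class PredictionJob:
    question: str
    options: list[str]
    status: str = "queued"

# In the real system the queue would be durable (e.g. a database table),
# not an in-process Queue; this is only the control flow.
jobs: Queue = Queue()

def enqueue(question: str, options: list[str]) -> PredictionJob:
    """What an API endpoint would do: validate input, hand off, return fast."""
    job = PredictionJob(question, options)
    jobs.put(job)
    return job

def worker_loop() -> None:
    """What the background worker would do: drain the queue, run the slow step."""
    while not jobs.empty():
        job = jobs.get()
        job.status = "done"

job = enqueue("Would you use this feature?", ["Yes", "No"])
worker_loop()
```

The point of the split is that prediction latency (an LLM call) never blocks the request path; the API only records the job and reports its status.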
The marketing site and dashboard are Next.js 15 with React 19, deployed on Vercel. The conversational interface is a separate Next.js app using the Vercel AI SDK.
The simulation engine takes a user model (built from real human data), a question, and answer options. It uses large language models to predict how the user model would distribute their answers across the options. The engine produces stable results: the same question returns similar distributions across runs.

We run our own inference infrastructure alongside major model providers. This gives us control over latency and cost, and the ability to serve fine-tuned models.
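One way to get stable distributions is to aggregate many sampled answers rather than rely on a single model call. The sketch below stubs out the LLM with a weighted random choice; Semilattice's actual engine, prompts, and aggregation are not public, so every name and the aggregation strategy here are assumptions.

```python
import random
from collections import Counter

def sample_answer(user_model: dict, question: str,
                  options: list[str], rng: random.Random) -> str:
    # Stand-in for an LLM call conditioned on the user model.
    weights = [user_model.get(o, 1.0) for o in options]
    return rng.choices(options, weights=weights)[0]

def predict_distribution(user_model: dict, question: str,
                         options: list[str],
                         n_samples: int = 1000, seed: int = 0) -> dict:
    """Aggregate many sampled answers into one distribution.

    Averaging over many samples is one way the same question can
    return similar distributions across runs.
    """
    rng = random.Random(seed)
    counts = Counter(
        sample_answer(user_model, question, options, rng)
        for _ in range(n_samples)
    )
    return {o: counts[o] / n_samples for o in options}

dist = predict_distribution({"Yes": 3.0, "No": 1.0},
                            "Would you upgrade?", ["Yes", "No"])
```

The returned dict maps each option to its predicted share of answers, summing to 1.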
A FastMCP (Python) server that exposes Semilattice predictions as tools for AI assistants. Runs on Fly.io. Any MCP-compatible client (Claude, ChatGPT, Cursor, custom agents) can make predictions mid-conversation.
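Under the Model Context Protocol, a client invokes a server tool with a JSON-RPC `tools/call` request. The sketch below builds that request shape with the stdlib; the tool name `predict` and its arguments are assumptions for illustration, since the actual tool names the Semilattice server exposes may differ.

```python
import json

def build_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize an MCP tools/call request (JSON-RPC 2.0)."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical tool name and argument schema.
payload = build_tool_call(
    "predict",
    {"question": "Would you pay for this?", "options": ["Yes", "No"]},
)
```

In practice the MCP client library handles this framing; the takeaway is that from the assistant's side a prediction is just one more tool call in the conversation.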
We work with customers to build user models from their data. This can include survey responses, qualitative interviews, product analytics, and other internal data sources. Each model is tested for accuracy using cross-validation.
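Cross-validation means holding out part of the customer's data, fitting on the rest, and scoring the model on the held-out slice, rotating so every record is held out once. The sketch below uses a stand-in scoring metric (majority-answer agreement); how Semilattice actually scores a user model against held-out responses is an assumption here.

```python
def k_fold_splits(data: list, k: int):
    """Yield (train, held_out) pairs; each record is held out exactly once."""
    folds = [data[i::k] for i in range(k)]
    for i in range(k):
        held_out = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, held_out

def score(train: list[int], held_out: list[int]) -> float:
    # Stand-in metric: fraction of held-out answers matching the
    # majority answer in the training split.
    majority = max(set(train), key=train.count)
    return sum(1 for x in held_out if x == majority) / len(held_out)

# Toy survey responses (1 = yes, 0 = no).
responses = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
scores = [score(tr, ho) for tr, ho in k_fold_splits(responses, k=5)]
mean_accuracy = sum(scores) / len(scores)  # → 0.7
```

Averaging the per-fold scores gives a single accuracy estimate that isn't biased by any one lucky or unlucky split.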
AWS (eu-west-2) for core infrastructure, Vercel for web deployment, Fly.io for workers and the MCP server, Supabase for database and auth. All EU-hosted.