AutoResearch-style experimentation,
built for production.
Give Prisma an objective. It runs the ML research loop on top of LUML and registers a deployable model when the search completes.
Two layers of search
Most teams pour their compute into hyperparameter tuning: learning rate, depth, regularization. The bigger gains usually come from the choices around that tuning: which architecture, which features, which loss, whether to ensemble at all. Prisma searches both layers and labels every trial, so you can see which kind of change actually moved the metric.
The best hyperparameters for one architecture are only half the answer.
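To make the two layers concrete, here is a minimal sketch using Optuna, shown purely as an illustration (it is not Prisma's planner or API, and the dataset and model families are placeholders): a single study searches a structural layer, which model family to use at all, and a numeric layer, that family's hyperparameters, with the structural choice recorded on each trial so results can be grouped by what actually changed.

```python
# Illustration only -- not Prisma's API. One study spans both layers:
# a structural choice (model family) and numeric tuning conditional on it.
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)

def objective(trial):
    # Structural layer: which architecture to try at all.
    arch = trial.suggest_categorical("arch", ["logreg", "random_forest", "gbdt"])
    trial.set_user_attr("structural_choice", arch)  # tag the trial for later grouping

    # Numeric layer: hyperparameters conditional on that structural choice.
    if arch == "logreg":
        model = LogisticRegression(
            C=trial.suggest_float("C", 1e-3, 1e2, log=True), max_iter=2000
        )
    elif arch == "random_forest":
        model = RandomForestClassifier(
            n_estimators=trial.suggest_int("n_estimators", 100, 600),
            max_depth=trial.suggest_int("rf_max_depth", 3, 16),
            random_state=0,
        )
    else:
        model = GradientBoostingClassifier(
            learning_rate=trial.suggest_float("gbdt_lr", 1e-3, 0.3, log=True),
            max_depth=trial.suggest_int("gbdt_max_depth", 2, 8),
            random_state=0,
        )

    return cross_val_score(model, X, y, cv=3, scoring="roc_auc").mean()

study = optuna.create_study(
    direction="maximize", sampler=optuna.samplers.TPESampler(seed=0)
)
study.optimize(objective, n_trials=30)
print(study.best_trial.params)
```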
Why Prisma is different
The coding agents you already use are good at writing code. They are not built to run a research project across days, runs, and artifacts. Prisma is the layer that does, orchestrating the loop while LUML holds the memory.
Runs, metrics, artifacts, lineage, and candidates live in LUML, not in a context window.
Small agents inspect, fork, evaluate, and stop. No single context grows forever.
Codex, Claude Code, Cursor, Aider, or your own CLI execute the coding work.
How Prisma works
From an objective to a registered model.
Five small steps, each tracked, each starting from the evidence the last one produced.
Define the goal, constraints, and success metric.
Create a strong starting point to measure against.
Read metrics, logs, and artifacts from previous steps.
Spin up a focused branch to test each promising idea.
Promote the winner to your model registry and roll it out.
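As a rough sketch of the shape of that loop in code — every import, class, and method name below is hypothetical and illustrative, not Prisma's published API — the flow is: define the objective, set a baseline, iterate over evidence-driven branches, and promote the winner.

```python
# Hypothetical sketch only: names are illustrative, not Prisma's actual API.
from prisma import Objective, Search  # hypothetical imports

objective = Objective(
    goal="maximize validation AUC on the churn dataset",   # goal
    metric="val_auc",                                       # success metric
    constraints={"max_latency_ms": 50, "max_train_hours": 24},  # constraints
)

# A strong starting point to measure against.
search = Search(objective, baseline="models/churn-gbdt-v3")

# Each step reads metrics, logs, and artifacts from LUML,
# then spins up a focused branch to test one promising idea.
for step in search.run():
    print(step.branch, step.metrics)

# Promote the winner to the model registry.
search.best().promote(registry="churn-model")
```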
Use cases
If “better” is something you can measure, Prisma can search for it.
The four below are common starting points.
Search across model families, not just inside one.
Prisma proposes the architecture, runs the sweep, and ranks the lineage. The leaderboard updates itself as evidence comes in.
Numeric refinement that learns from prior trials.
Hand the inner loop to TPE, CMA-ES, or Prisma’s own planner. Trials are tagged so you can tell tuning from structural change later.
Move the metric by changing what the model sees.
Lags, encodings, embeddings, and target transforms are proposed, validated on the same eval, and kept only when they hold up.
Prompt, tool, and model search on a held-out eval.
Run prompt × tool × model matrices and surface Pareto-best stacks. Same loop, applied to a prompt stack instead of a learner.
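A minimal sketch of that last idea, assuming a placeholder evaluator and illustrative option lists (none of this is Prisma's API): enumerate the prompt × tool × model matrix on a held-out eval and keep only the stacks that are Pareto-best on quality versus cost.

```python
# Sketch of the idea, not Prisma's API. evaluate() and the option
# lists are placeholders -- swap in your real stacks and held-out eval.
from itertools import product

prompts = ["baseline_v1", "cot_v2"]
tools = ["none", "retrieval", "retrieval+calculator"]
models = ["small-fast", "large-accurate"]

def evaluate(prompt, tool, model):
    # Placeholder scorer: returns (quality to maximize, cost to minimize).
    quality = 0.6 + 0.1 * (model == "large-accurate") + 0.05 * (tool != "none")
    cost = 1.0 + 4.0 * (model == "large-accurate") + 0.5 * (tool != "none")
    return quality, cost

results = {stack: evaluate(*stack) for stack in product(prompts, tools, models)}

def pareto_best(results):
    # Keep stacks not dominated on both axes (higher quality, lower cost).
    best = []
    for stack, (q, c) in results.items():
        dominated = any(
            q2 >= q and c2 <= c and (q2 > q or c2 < c)
            for other, (q2, c2) in results.items()
            if other != stack
        )
        if not dominated:
            best.append((stack, q, c))
    return best

for stack, quality, cost in pareto_best(results):
    print(stack, quality, cost)
```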
FAQ
Prisma is LUML's AutoResearch-style module for production teams. You describe an objective and configure the search tree; Prisma runs the autonomous research loop on top of LUML until the tree is exhausted.
Prisma plugs into the harness you already use. Experiments populate the leaderboard, and models go to the registry.