Prisma

AutoResearch-style experimentation,
built for production.

Give Prisma an objective. It runs the ML research loop on top of LUML and registers a deployable model when the search completes.

Two layers of search

The biggest gains aren't in the hyperparameters.

Most teams pour their compute into hyperparameter tuning: learning rate, depth, regularization. The bigger gains usually come from the choices around that tuning: which architecture, which features, which loss, whether to ensemble at all. Prisma searches both layers and labels every trial, so you can see which kind of change actually moved the metric.

Structural
Choices about the model itself
  • architecture
  • features
  • loss / objective
  • ensembles

Numeric
Tuning inside that model
  • learning rate
  • depth / width
  • regularization
  • schedules
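
To make the split concrete, here is a minimal sketch of a two-layer, labeled search in plain Python. The space, the sampler, and the evaluate stub are illustrative stand-ins, not Prisma's actual API.

```python
import random

# Hypothetical two-layer space; names are illustrative, not Prisma's schema.
STRUCTURAL = {
    "architecture": ["gbdt", "mlp", "transformer"],
    "loss": ["mse", "huber", "quantile"],
    "ensemble": [False, True],
}
NUMERIC = {"learning_rate": (1e-4, 1e-1), "depth": (2, 12)}

def sample_trial():
    """Draw one candidate and label every choice with its layer."""
    trial = {k: random.choice(v) for k, v in STRUCTURAL.items()}
    lo, hi = NUMERIC["learning_rate"]
    trial["learning_rate"] = lo * (hi / lo) ** random.random()  # log-uniform
    trial["depth"] = random.randint(*NUMERIC["depth"])
    labels = {k: ("structural" if k in STRUCTURAL else "numeric") for k in trial}
    return trial, labels

def evaluate(trial):
    """Stand-in for a tracked training run; returns a fake validation score."""
    return random.random()

results = [(evaluate(t), t, l) for t, l in (sample_trial() for _ in range(20))]
best_score, best_trial, best_labels = max(results, key=lambda r: r[0])
```

Because every choice carries a layer label, later analysis can attribute metric movement to structural versus numeric changes.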

The best hyperparameters for one architecture are only half the answer.

PRISMA • Refraction view · gain over baseline · 5 axes
  • Hyperparameters (lr, depth, regularization): +0.008
  • Architecture (family, depth, blocks): +0.022
  • Loss & objective (surrogate, reweighting): +0.029
  • Features (lags, encodings, embeds): +0.038
  • Ensembles (stacked + meta-learner): +0.052 · best

Why Prisma is different

Not just a coding agent.
A control plane for autonomous research.

The coding agents you already use are good at writing code. They are not built to run a research project across days, runs, and artifacts. Prisma is the layer that does, orchestrating the loop while LUML holds the memory.

  • 01
    Persistent research memory

    Runs, metrics, artifacts, lineage and candidates live in LUML, not in a context window.

  • 02
    Externally orchestrated steps

    Small agents inspect, fork, evaluate, and stop. No single context grows forever.

  • 03
    Harness-agnostic execution

    Codex, Claude Code, Cursor, Aider, or your own CLI execute the coding work.

Prisma · control plane (live)

Decide · Prisma orchestrator
Given the objective (build the best model), the orchestrator picks the next focused step from tracked memory: inspect, fork, execute, or evaluate, and dispatches one step at a time.

Execute · any coding harness
Codex, Claude, Cursor, Gemini, or Copilot runs the step and returns the result.

Remember · LUML platform memory
Experiments, artifacts, registry, deploy.
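
One way to picture that loop in code: persistent memory lives in a file standing in for LUML, and each step is a small, short-lived call rather than one ever-growing context. All names here, including the run_harness stub, are hypothetical.

```python
import json
import pathlib

MEMORY = pathlib.Path("luml_memory.json")  # stand-in for LUML's tracked store

def remember(record):
    """Persist a step's result outside any agent's context window."""
    history = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
    history.append(record)
    MEMORY.write_text(json.dumps(history, indent=2))

def decide(history):
    """Pick the next focused step from tracked evidence (trivial policy here)."""
    steps = ["inspect", "fork", "execute", "evaluate"]
    return steps[len(history) % len(steps)]

def run_harness(step):
    """Stand-in for dispatching one step to Codex, Claude Code, Cursor, etc."""
    return {"step": step, "status": "ok"}

history = []
for _ in range(4):
    step = decide(history)       # decide: the orchestrator picks one step
    result = run_harness(step)   # execute: any coding harness does the work
    remember(result)             # remember: the result lands in the platform
    history.append(result)
```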

How Prisma works

One loop. Five focused steps.

From an objective to a registered model.

Five small steps, each tracked, each starting from the evidence the last one produced.

01

Start with an objective

Define the goal, constraints, and success metric.

02

Build a baseline

Create a strong starting point to measure against.

03

Inspect tracked runs

Read metrics, logs, and artifacts from previous steps.

04

Fork experiments

Spin up a focused branch to test each promising idea.

05

Register & deploy

Promote the winner to your model registry and roll it out.
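
A compact sketch of the whole loop under the same hedge: every function below is a stub standing in for a tracked Prisma step, not a real SDK call.

```python
import random

def build_baseline(objective):
    """02 · a strong starting point to measure against (stubbed)."""
    return {"name": "baseline", "metric": 0.70}

def inspect_runs(candidates):
    """03 · read metrics from previous steps; here, just the current best."""
    return max(candidates, key=lambda c: c["metric"])

def fork_experiment(parent):
    """04 · a focused branch testing one idea off the best run (stubbed)."""
    metric = parent["metric"] + random.uniform(-0.01, 0.02)
    return {"name": f"fork-of-{parent['name']}", "metric": metric}

def register_and_deploy(winner):
    """05 · promote the winner to the registry (stubbed)."""
    print(f"registered {winner['name']} at {winner['metric']:.3f}")

objective = "maximize validation AUC"   # 01 · goal, constraints, metric
candidates = [build_baseline(objective)]
for _ in range(10):
    best = inspect_runs(candidates)
    candidates.append(fork_experiment(best))
register_and_deploy(inspect_runs(candidates))
```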

Use cases

Where Prisma fits.

If “better” is something you can measure, Prisma can search for it.
The four below are common starting points.

Models · leaderboard
  • Ensemble: 0.763
  • CatBoost: 0.752
  • LightGBM: 0.741
  • XGBoost: 0.720
  • NN-MLP: 0.706

Predictive modeling

Search across model families, not just inside one.

Prisma proposes the architecture, runs the sweep, and ranks the lineage. The leaderboard updates itself as evidence comes in.
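
As a grounded stand-in for cross-family search, the same idea with plain scikit-learn: several model families scored on one held-out split and ranked into a leaderboard.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Structural axis: the model family itself, not its hyperparameters.
families = {
    "gbdt": GradientBoostingClassifier(random_state=0),
    "forest": RandomForestClassifier(random_state=0),
    "linear": LogisticRegression(max_iter=1000),
}
leaderboard = sorted(
    ((m.fit(X_tr, y_tr).score(X_te, y_te), name) for name, m in families.items()),
    reverse=True,
)
for score, name in leaderboard:
    print(f"{name:8s} {score:.3f}")
```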

Hyperparameters · loss vs. step

Hyperparameter search

Numeric refinement that learns from prior trials.

Hand the inner loop to TPE, CMA-ES, or Prisma’s own planner. Trials are tagged so you can tell tuning from structural change later.
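
Here is what the inner numeric loop can look like with Optuna's TPE sampler standing in for the planner; the objective below is a synthetic stand-in for a real training run, and the layer tag mirrors the trial labeling described above.

```python
import optuna

def objective(trial):
    trial.set_user_attr("layer", "numeric")  # tag so tuning is separable later
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)
    depth = trial.suggest_int("depth", 2, 12)
    reg = trial.suggest_float("l2", 1e-6, 1.0, log=True)
    # Stand-in for training + evaluation; return the validation metric instead.
    return -(lr - 0.01) ** 2 - (depth - 6) ** 2 * 1e-3 - reg * 1e-3

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=50)
print(study.best_params)
```

Tagging trials this way is what lets a refraction-style view separate tuning gains from structural ones.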

Features · gain per candidate: lag_7, lag_28, ratio_a, x_emb, tgt_enc

Feature engineering

Move the metric by changing what the model sees.

Lags, encodings, embeddings, and target transforms are proposed, validated on the same eval, and kept only when they hold up.
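
A minimal pandas sketch of the keep-only-when-it-holds-up rule: each candidate lag feature is scored on the same chronological split and kept only if it beats the running best. The data and the lag choices are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({"y": rng.normal(size=500).cumsum()})
df["y_next"] = df["y"].shift(-1)            # predict the next value

# Candidate features, each judged on the same held-out eval.
candidates = {f"lag_{k}": df["y"].shift(k).rename(f"lag_{k}") for k in (1, 7, 28)}

kept, best = [], -np.inf
for name, col in candidates.items():
    cols = [df[["y", "y_next"]]] + [candidates[n] for n in kept] + [col]
    trial = pd.concat(cols, axis=1).dropna()
    X, y = trial.drop(columns="y_next"), trial["y_next"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)
    score = Ridge().fit(X_tr, y_tr).score(X_te, y_te)
    if score > best:        # kept only when it holds up on the same eval
        kept, best = kept + [name], score
print(kept, round(best, 3))
```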

LLM evals · pass rate across versions v1–v4 for claude, gpt-5, mistral

LLM workflows

Prompt, tool, and model search on a held-out eval.

Run prompt × tool × model matrices and surface Pareto-best stacks. Same loop, applied to a prompt stack instead of a learner.
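
A sketch of the matrix idea: enumerate prompt × model combinations, score each on a held-out eval, and keep the Pareto-best stacks on (pass rate, cost). The pass-rate stub and the cost table are made up.

```python
from itertools import product

PROMPTS = ["v1", "v2", "v3"]
MODELS = {"small": 1.0, "medium": 3.0, "large": 10.0}  # hypothetical cost units

def pass_rate(prompt, model):
    """Stand-in for running the held-out eval suite on one stack."""
    return ({"v1": 0.62, "v2": 0.71, "v3": 0.74}[prompt]
            + {"small": 0.0, "medium": 0.05, "large": 0.08}[model])

stacks = [(pass_rate(p, m), MODELS[m], p, m) for p, m in product(PROMPTS, MODELS)]

# Pareto-best: no other stack is both cheaper and at least as accurate.
pareto = [s for s in stacks
          if not any(o[0] >= s[0] and o[1] <= s[1] and o != s for o in stacks)]
for score, cost, p, m in sorted(pareto):
    print(f"{p} × {m}: pass={score:.2f} cost={cost}")
```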

FAQ

Frequently asked questions

What is Prisma?
Prisma is LUML's AutoResearch-style module for production teams. You describe an objective and configure the search tree; Prisma runs the autonomous research loop on top of LUML until the tree is exhausted.

Give Prisma an objective.
Receive a deployable model.

Prisma plugs into the harness you already use. Experiments populate the leaderboard, and models go to the registry.