Flow

Live experiment tracking,
for ML and GenAI.

Local by default. Natively integrated with the LUML platform when you want team collaboration and the rest of the platform's capabilities.

pip install lumlflow
lumlflow ui
[Live dashboard preview: fraud_v3 experiment with Metrics, Params, and Model tabs; val_acc 0.510; val_acc and train_loss charts; params logged via log_static (7 keys), including model lightgbm, learning_rate 0.05, max_depth 7, num_leaves 63, early_stopping 20, seed 1337.]

Showing the unified view — ML and GenAI side by side.

One SDK

Classical ML and GenAI,
tracked the same way.

log_static, log_dynamic, and log_model cover training experiments. enable_tracing() plus an OTel instrumentor adds spans and eval samples for GenAI experiments. Both kinds of experiment live in the same groups; a sketch of the ML side follows the list below.

ML
Experiments
  • Parameters · log_static
  • Metrics · log_dynamic
  • Models · log_model

LLM
Traces & evals
  • Spans · auto-captured
  • Eval samples · scorers
  • Human annotations
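
To make the ML column concrete, here is a minimal sketch of a tracked training script. The tracker.experiment(...) context manager and the names log_static, log_dynamic, and log_model come from this page; the exact call signatures (a dict of parameters, metric values with a step, a model object) are assumptions for illustration, not the documented API.

# sketch of a tracked training script; the call shapes below are assumptions
from luml import tracker

with tracker.experiment(name="fraud_v3") as exp_id:
    # parameters once, as a static key/value set
    tracker.log_static({
        "model": "lightgbm", "learning_rate": 0.05, "max_depth": 7,
        "num_leaves": 63, "early_stopping": 20, "seed": 1337,
    })

    model = {"note": "stand-in for your fitted model object"}
    for step in range(200):
        loss = 1.0 / (step + 1)             # stand-ins for real training output
        acc = min(0.5 + step * 0.002, 0.9)
        # step metrics; the live charts pick these up as they arrive
        tracker.log_dynamic({"train_loss": loss, "val_acc": acc}, step=step)

    # capture the trained model with the experiment
    tracker.log_model(model)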

Charts update step by step as the script runs.

Spans appear in the dashboard as the agent executes.

One tracker, both experiment types.

[Tracker preview: train.py (ML, lightgbm, 200 steps) logging via log_static, log_dynamic, and log_model; agent.py (GenAI RAG, 312 spans) logging via trace, log_eval, and annotate; both feed one live unified feed (fraud_v3, agent_eval, fraud_v2, rag_qa, churn_xgb; 5 of 142 experiments in the same registry). Two streams, one tracker, one feed.]

How Flow works

From script to dashboard in three steps.

Local by default.

Upload to LUML when an experiment is worth sharing.

01
train.py
# import once
from luml import tracker

with tracker.experiment(name="fraud_v3") as exp_id:
    model.fit(X, y)

params · metrics · model · traces

Wrap your script

Open a tracker.experiment(...) block around your training or eval call. Inside, log_static records parameters, log_dynamic records step metrics, and log_model captures the model. For GenAI, enable_tracing() plus an OTel instrumentor adds spans.
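
And a matching sketch for the GenAI side. enable_tracing() is the name used on this page, but where it is called from and which OpenTelemetry instrumentor you pair it with depend on your stack, so both are assumptions here; the commented-out LangChain instrumentor is shown purely as one possible option.

# agent.py sketch; calling enable_tracing() on the tracker and the instrumentor
# mentioned below are assumptions for illustration, not the documented setup
from luml import tracker

def run_agent(question: str) -> str:
    # stand-in for your real agent; with an OTel instrumentor installed,
    # its LLM and retriever calls would appear as spans in the dashboard
    return "stub answer to: " + question

with tracker.experiment(name="agent_eval") as exp_id:
    tracker.enable_tracing()  # start capturing OpenTelemetry spans for this experiment

    # pair with the instrumentor for your framework, for example (illustrative only):
    #   from openinference.instrumentation.langchain import LangChainInstrumentor
    #   LangChainInstrumentor().instrument()

    answer = run_agent("Which invoices are overdue?")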

02
[Terminal preview: lumlflow ui serving the local dashboard at 127.0.0.1:5000.]

Run the local UI

lumlflow ui starts a dashboard at localhost:5000 that reads the local SQLite store. Compare experiments, drill into traces, and annotate eval samples without leaving your machine.

03
[Registry preview: luml/registry team view listing fraud_v3 (uploaded by you), churn_xgb (2h ago), and agent_eval (1d ago).]

Share with the team

When an experiment is worth keeping, paste your API key into the UI and click Upload. Pick organization, orbit, and collection — the model and its experiment context land in the LUML registry as a versioned artifact.

Flow UI · ML

ML experiments in Flow.

Live metrics, parameter tables, and the model artifact for every experiment.

Flow UI · Unified

ML and GenAI experiments in Flow.

Same UI, with dedicated panels for each experiment type.

Flow UI · GenAI

GenAI experiments in Flow.

Span trees, scorer breakdowns, and human annotations on every eval and trace.

Storage

Save anything
alongside an experiment.

Datasets, plots, prompt files, training logs, eval reports — they live with the experiment in the local store and are uploaded with it. Preview without downloading.
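
No attachment API is named on this page, so the following is only a hypothetical sketch of saving a file alongside an experiment: log_file is a made-up name standing in for whatever the real call is.

# hypothetical sketch only; log_file is a made-up name standing in for
# whatever Flow's real attachment call is
from luml import tracker

with tracker.experiment(name="agent_eval") as exp_id:
    # write an eval report next to the run
    with open("eval_summary.md", "w") as f:
        f.write("eval summary placeholder\n")

    # attach it to the experiment so it lives in the local store
    # and is uploaded alongside the experiment later
    tracker.log_file("eval_summary.md")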

[Attachments preview: any file type, organized per experiment; tabs for Overview, Metrics, Traces, Evals, and Attachments; files including agent_graph.mmd, agent_graph.svg, eval_scores.png, eval_summary.md, per_sample_heatmap.pdf, per_sample_results.csv, report.txt, and run_config.json; inline preview of eval_scores.png (37.32 KB) charting per-scorer min/mean/max scores for answer_length, exact_match, and keyword_overlap; datasets, plots, and logs with preview and download.]

From local to team

Run locally.
Upload experiments you want to keep.

While you iterate, experiments stay in the local SQLite store. Open one in lumlflow ui, click Upload, and the model and its experiment context land in the LUML registry.

Secured

Local by default

No account or telemetry to use Flow locally. Experiments stay in the local SQLite store until you upload.
Traceable

Versioned in the registry

Each upload of a model is a new version. Browse history, compare across versions, and deploy to a Satellite from the registry.
Linked

Context travels with the model

Metrics, parameters, eval results, and traces are bundled into the .luml package on upload, so the registry shows what was trained and how it performed.

Integrations

Works with the stack
you already use.

The tracker drops into existing training scripts and agents without changes to surrounding code.

PyTorch
LangGraph
XGBoost
scikit-learn
LlamaIndex
TensorFlow
Anthropic
LightGBM
DSPy
JAX
LangChain
Keras

FAQ

Frequently asked questions

What is Flow?

Flow is LUML's live experiment tracker. Wrap your training or evaluation code with the tracker and Flow captures parameters, metrics, models, traces, and eval samples as the experiment executes. The lumlflow ui command gives you a live dashboard, and you can upload any experiment to LUML when you want to share it.

Try it locally.

No signup needed to run locally. Connect to LUML when you want team collaboration and the rest of the platform.