Lunar Studio  ·  private beta

Train. Evaluate. Deploy - out loud.

Drop in a dataset. Tell the agent what to predict. Watch the model train, then ship a deployable Rust inference server - no notebooks, no Python in production.

Engine
Rust workspace · 32 crates
Tools
38 in-browser + 83 MCP
Catalog
43 model kinds · DL + GBM
Deploy
Static Rust binary · no Python
LIVE

01 / loss · Training & validation loss · val_loss 0.241 · train / val · step 1240 / 1500

02 / boundary · Decision boundary · acc 91.4%

03 / confusion · Confusion matrix · F1 0.903

142    3    1    0
  4  138    5    2
  1    6  134    3
  0    2    4  145

agent.start_training(model="LightGBM", dataset="churn.csv")  ·  epoch 7/12  ·  loss 0.241  ·  val_loss flattening  ·  suggesting early stop
Lunar Studio mission console showing a live training run. The chat agent has called start_training; loss is decreasing, the boundary plot is sharpening, and the confusion matrix is populating.

What you get

One product. Three things wired underneath.

01
Chat is the verb

An agent that does the work.

Thirty-eight in-browser tools at the agent's disposal. It lists your datasets, inspects schemas, summarizes columns, proposes transform plans, edits the model graph, and starts and stops training. You bring your own model token; it stays in your browser and never touches our server. A sketch of one such call follows the list.

  • describe_schema · summarize_columns · peek_rows
  • propose_dataset_transform_plan with preview
  • expand_model_graph_template · graph edits
  • start_training · continue · stop · diagnose
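
What one of these calls looks like in flight - a minimal sketch, assuming a JSON envelope; the tool name is from the list above, but the argument schema is illustrative, not Lunar's actual wire format:

    // Hypothetical in-browser tool invocation (argument names illustrative).
    use serde_json::json;

    fn main() {
        let call = json!({
            "tool": "describe_schema",
            "arguments": { "dataset": "churn.csv" }
        });
        // The studio executes the call client-side and streams the result
        // back into the chat, so your provider key never leaves the browser.
        println!("{call}");
    }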
02
Console is the witness

Every tool call streams into the dashboard.

When the agent starts a run, you don't get a status text - you get a live mission console. Per-batch loss, decision boundary, confusion matrix, weight histograms, per-class accuracy. Read it like a flight panel. A sketch of the event stream follows the list.

  • Per-batch loss + validation curve
  • Live decision boundary heatmap
  • Confusion matrix populating in real time
  • Weight histograms + prediction samples
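
A minimal sketch of what one telemetry event might carry - the field names here are illustrative assumptions, not Lunar's actual schema:

    // Hypothetical shape of a single SSE telemetry event; one arrives per
    // batch, and every console panel is a projection of this stream.
    use serde::Deserialize;

    #[derive(Deserialize)]
    struct TrainingEvent {
        step: u64,
        epoch: u32,
        loss: f32,
        val_loss: Option<f32>,            // present on validation batches
        confusion: Option<Vec<Vec<u32>>>, // running confusion-matrix counts
    }

    fn parse(data_line: &str) -> serde_json::Result<TrainingEvent> {
        // SSE frames arrive as `data: {...}` lines: strip the prefix, decode.
        serde_json::from_str(data_line.trim_start_matches("data: "))
    }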
03
Output is a binary

Deploy emits a Rust inference server.

Click deploy. Lunar emits a standalone Rust server: Cargo.toml, Dockerfile, handler, OpenAPI spec. Build the container. Ship it to edge or cloud. No Python at runtime. No notebook checkpoint. No trailing dependencies. A sketch of the handler follows the list.

  • Standalone Cargo workspace per deployment
  • Dockerfile + handler.rs + OpenAPI bundled
  • Runs on local CPU/GPU, Modal, or RunPod
  • Same code path everywhere
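
For orientation, a minimal sketch of what the emitted handler can look like - this assumes axum and a stubbed model call; the actual generated code may differ:

    // Hypothetical handler.rs for the generated inference server
    // (request/response fields are illustrative).
    use axum::{routing::post, Json, Router};
    use serde::{Deserialize, Serialize};

    #[derive(Deserialize)]
    struct PredictRequest { features: Vec<f32> }

    #[derive(Serialize)]
    struct PredictResponse { score: f32 }

    async fn predict(Json(req): Json<PredictRequest>) -> Json<PredictResponse> {
        // Stand-in for the real inference call; weights ship inside the binary.
        let score = req.features.iter().sum::<f32>();
        Json(PredictResponse { score })
    }

    #[tokio::main]
    async fn main() {
        let app = Router::new().route("/predict", post(predict));
        let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await.unwrap();
        axum::serve(listener, app).await.unwrap();
    }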

How it works

From dataset to deployed inference in four steps. You drive when you want to.

  1. T-04 · STAGE

    Tell the agent

    Drop in a CSV. Type "predict churn from this." The agent calls describe_schema, summarize_columns, peek_rows. It proposes a model graph and a transform plan you can preview, approve, or reject.

  2. T-03 · STAGE

    Build the pipeline

    The agent drafts a DAG with propose_dataset_transform_plan and propose_dataset_package_plan. You see a preview diff. Approve. The form on the left hydrates from the agent's tool calls - no copy-paste.

  3. T-02 · STAGE

    Train & observe

    The agent calls start_training. SSE telemetry streams in: per-batch loss, boundary plot, confusion matrix, weight histograms. The agent narrates: "epoch 3, val loss flattening, suggesting early stop."

  4. T-01 · FINAL

    Ship

    Pick a name and a target environment. Lunar builds the Rust binary, packages the container, and deploys it for you. You get a live /predict endpoint - no Python at runtime, no infrastructure to set up.
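
Once the container is live, hitting the endpoint is one request - a sketch assuming a local deployment and an illustrative payload schema:

    // Hypothetical client call against the deployed /predict endpoint.
    use serde_json::json;

    #[tokio::main]
    async fn main() -> Result<(), reqwest::Error> {
        let resp = reqwest::Client::new()
            .post("http://localhost:8080/predict")
            .json(&json!({ "features": [0.12, 3.4, 1.0, 0.0] }))
            .send()
            .await?;
        println!("{}", resp.text().await?);
        Ok(())
    }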

Capabilities

What you get out of the box.

43
Model kinds
Deep learning, gradient boosting, audio, vision, time-series - one catalog.
38
Tools the agent can call
Datasets, transforms, model graphs, training runs - all from chat.
0
Lines of Python at runtime
Deployments are static Rust binaries. Container-ready, edge-ready.

Also included

  • 83 MCP tools for external agents
  • 18 loss functions
  • 13 optimizers
  • 20 augmentations
  • 11 export formats
  • 4 execution backends

Trust

Your data and your keys stay where you put them.

The wrong default in this category is your data on somebody else’s server. Lunar’s defaults are deliberately conservative - your dataset, your token, your binary.

Model token
Your AI provider key stays in the browser. It never reaches our servers, and we never see it.
Datasets
Your data lives on your machine. Lunar only sends it to a remote GPU when you choose to run a job there.
PII gate
A GLiNER ONNX scanner runs on every dataset before training. Block-or-redact policies are enforced in the dataset state machine, and raw spans never reach transport. A sketch of the policy gate follows below.
Deployments
Static Rust binaries. No Python at runtime. No virtualenv drift. One container, edge-ready.
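
A minimal sketch of how a fail-closed PII gate can be typed - the names are illustrative, not Lunar's internals:

    // Hypothetical policy applied before a dataset may enter training.
    enum PiiPolicy {
        Block,  // any detection fails the run outright (fail-closed)
        Redact, // detected spans are masked before any transport
    }

    struct ScanReport { spans_found: usize }

    fn gate(report: &ScanReport, policy: PiiPolicy) -> Result<(), String> {
        match (report.spans_found, policy) {
            (0, _) => Ok(()),                 // clean dataset: proceed
            (_, PiiPolicy::Redact) => Ok(()), // proceed with masked spans
            (n, PiiPolicy::Block) => Err(format!("{n} PII spans found; run blocked")),
        }
    }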

Pricing

Subscription for the studio. Pass-through plus margin for compute.

                                   Free        Pro         Cloud
Local execution                    Yes         Yes         Yes
Agent + MCP surface                Full        Full        Full
Concurrent training runs           1           Unlimited   Unlimited
Deployment server generation       -           Yes         Yes
Team workspaces                    -           Yes         Yes
Modal & RunPod GPU pass-through    -           -           Yes
Managed inference (~$0.05/run)     -           -           Yes
BYO-cloud option                   -           -           Yes
SOC 2 + audit logs                 -           -           Yes
Support                            Community   Priority    Dedicated SE

FAQ

Common questions.

  • How is this different from Weights & Biases?

    W&B is a dashboard for tracking runs launched elsewhere. Lunar is where you launch them - through chat, with an agent that has real tool access. Lunar also generates the deployment server, which W&B does not. And because the engine is Rust, nothing in the deploy path needs Python.

  • Why Rust under the hood?

    Three reasons. Inference deployments are static binaries - no Python at runtime, no dependency drift, edge-ready. Type safety means the contract between the agent's tool calls and the engine is checked at compile time. And Burn - the Rust ML framework Lunar builds on - has been benchmarked at 2.3× PyTorch on mixed-precision training.

  • Does the agent train arbitrary code, or only catalog models?

    Catalog first - 43 model kinds, including LightGBM gradient boosting, transformers, diffusion, audio, vision, and time-series. The catalog enforces a capability invariant: trainable = deployable = inferable. A custom-graph escape hatch is on the roadmap once the catalog story is fully locked.

  • Where do my tokens live?

    In your browser. Provider keys are exchanged client-side and kept in memory for the session only. They never reach our servers and are never logged. You can rotate or remove them at any time.

  • Can external agents (Claude Code, Cursor) drive Lunar?

    Yes. The same engine that powers the in-studio agent exposes 83 MCP tools over JSON-RPC 2.0. Any MCP-compatible client - Claude Code, Cursor, Windsurf, your own - can run the entire pipeline programmatically. A sketch of a request envelope appears after this FAQ.

  • What about data privacy and compliance?

    Your dataset stays on your machine unless you choose to run a job on a remote GPU. PII scanning runs on every dataset before training (GLiNER ONNX, fail-closed). For regulated workloads, the BYO-cloud option keeps everything inside your VPC.
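
For reference, a minimal sketch of an MCP tools/call envelope - the method shape follows the MCP JSON-RPC 2.0 convention, and the arguments are illustrative:

    // Hypothetical JSON-RPC 2.0 request an external MCP client would send.
    use serde_json::json;

    fn main() {
        let request = json!({
            "jsonrpc": "2.0",
            "id": 1,
            "method": "tools/call",
            "params": {
                "name": "start_training",
                "arguments": { "model": "LightGBM", "dataset": "churn.csv" }
            }
        });
        println!("{}", serde_json::to_string_pretty(&request).unwrap());
    }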

Request access

Train your next model by talking to one.

Lunar Studio is in private beta. Drop your email to get early access and the latest demo build.

No credit card  ·  unsubscribe any time  ·  emails are never sold or shared