Get started on Prime Intellect

Start with the Prime CLI and your coding agent. Add billing later, once you are ready to launch hosted jobs or rent compute.


Step 1

Install and Sign In

Use browser login to connect the CLI to your Prime account. No billing setup is required for this step.

Install Prime CLI:

uv tool install -U prime

Sign in:

prime login
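
To confirm the CLI installed correctly before signing in, you can list the tools uv manages; `prime` should appear with its version (this assumes uv is on your PATH):

```shell
# List tools installed via `uv tool install`; `prime` should be in the output.
uv tool list
```
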

Step 2

Prepare Your Workspace

Add the agent skills, starter configs, and Prime endpoints your local project needs before launching anything hosted.

Set up workspace:

prime lab setup

This will:

Install agent skills

Download SKILL.md files into your workspace so Claude, Codex, Cursor, or OpenCode understands Prime environments, evals, and training runs.

Scaffold ready-to-run configs

Add RL training configs (GSM8K, wiki-search, wordle, and more) and eval configs you can run locally or promote to hosted runs later.

Connect to Prime tooling

Configure endpoints and project defaults so your agent can use Prime workflows from your terminal.
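
To check what the setup created, you can search your workspace for the downloaded SKILL.md files (the exact directory layout depends on your agent, so the search depth here is just a guess):

```shell
# Search the current workspace for agent skill files installed by `prime lab setup`.
# Their exact location is an assumption; adjust -maxdepth as needed.
find . -maxdepth 4 -name "SKILL.md"
```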

Step 3

Hand Your First Task to an Agent

Choose an example task and paste the prompt into your coding agent. It can scaffold an environment, run a baseline eval, and decide what to refine next.

We'll use the following example task:
I want to train a model for math reasoning. Propose an initial environment scaffold including relevant tools, and come up with a good method to generate a small sample synthetic dataset. Run a quick eval baseline, inspect the results, and then decide how we should iterate on refining the implementation.

Start locally with the scaffold it creates. Move to hosted workflows once the baseline is worth scaling or sharing.

Step 4

Scale Up When You Need It

Billing stays out of the critical path. Add it only when you are ready for hosted workloads or rented compute.

Hosted workflows to use after setup:

Run Evaluations
Benchmark models locally or on hosted Prime infrastructure.
Hosted Training
Launch RL training runs with managed workflows and tracking.

Compute and community resources for larger runs:

On-demand GPUs
Deploy a single-node instance and start iterating quickly.
Environments Hub
Discover, upload, and run RL environments from the hub.
