Dream Engines

DreamDojo · GR-1 worked example

End-to-end rollout against the deployed DreamDojo 2B · GR-1 spec, using a real teleop fixture from the engine's regression suite. Wall-time budget: ~3 s on a warm container.

Prerequisites

  • pip install dream-engine (≥0.1.0)
  • An API key in DREAM_API_KEY

Get a real (start_frame, actions) pair

The SDK ships a one-call helper that downloads a real GR-1 episode from the engine repo and caches it locally (~500 KB):

PYTHON
import dream
img, actions = dream.examples.dreamdojo_grasp_real()
# img: (480, 640, 3) uint8 — real start frame from a teleop session
# actions: (48, 384) float32 — real GR00T teleop actions
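Before sending anything to the engine, it's worth asserting your fixture matches the wire shape documented above. A minimal numpy check (the `validate_fixture` helper here is ours for illustration, not part of the SDK):

```python
import numpy as np

def validate_fixture(img: np.ndarray, actions: np.ndarray) -> None:
    """Assert a (start_frame, actions) pair matches the GR-1 wire shape."""
    assert img.shape == (480, 640, 3) and img.dtype == np.uint8, (img.shape, img.dtype)
    assert actions.shape == (48, 384) and actions.dtype == np.float32, (actions.shape, actions.dtype)

# Stand-in arrays with the documented shapes and dtypes:
validate_fixture(
    np.zeros((480, 640, 3), dtype=np.uint8),
    np.zeros((48, 384), dtype=np.float32),
)
```

Running this on your own fixture catches shape/dtype mismatches locally instead of as a server-side error.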

If you want your own fixtures instead, you have two options:

  • NVIDIA's public dataset. Download one episode from nvidia/PhysicalAI-Robotics-GR00T-Teleop-GR1 and convert its parquet to a PNG + .npy pair matching the SDK's wire shapes above.
  • The synthetic helper. dream.examples.dreamdojo_grasp() returns deterministic bytes with the right shape but out-of-distribution values — useful for CI smoke tests, not for visual comparison.
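If you go the build-your-own route, the actions half of the fixture is just a .npy file with the wire shape. A minimal numpy-only sketch (the random array stands in for actions you'd extract from the dataset's parquet; the exact column layout there is dataset-specific, and the PNG half needs an image library, so both are left as comments):

```python
import numpy as np

# Hypothetical stand-in: in practice, extract one episode's actions from the
# downloaded parquet (column layout is dataset-specific).
actions = np.random.default_rng(0).standard_normal((48, 384)).astype(np.float32)

np.save("my_episode_actions.npy", actions)  # the .npy half of the fixture
# The PNG half: save the episode's first camera frame as a 480x640 RGB image
# with your image library of choice (e.g. Pillow's Image.fromarray(...).save(...)).

reloaded = np.load("my_episode_actions.npy")
assert reloaded.shape == (48, 384) and reloaded.dtype == np.float32
```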

Run the rollout

PYTHON
import dream
client = dream.Client()
model = client.models.get("dreamdojo-2b-gr1")
img, actions = dream.examples.dreamdojo_grasp_real()
rollout = model.predict(start_frame=img, actions=actions, seed=0)
print(f"request_id: {rollout.request_id}")
print(f"engine_wall_s: {rollout.wall_s:.2f}")
print(f"cost_usd: ${rollout.cost_usd}")
print(f"mp4 size: {len(rollout.mp4_bytes):,} bytes")
rollout.save("rollout.mp4")

Expected:

request_id:    994aff25-40fd-4169-a915-9ef738562308
engine_wall_s: 2.44
cost_usd:      $0.0245
mp4 size:      322,882 bytes

engine_wall_s measures only the Engine.predict() wall-clock time on the H100; network transit adds another ~3-4 s on a warm container, and ~75 s if Modal has to cold-start one.
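To put the expected numbers in perspective, the per-frame figures fall straight out of the output above (values copied from that output, not measured here):

```python
frames = 48        # frames per rollout
wall_s = 2.44      # engine_wall_s from the expected output
cost_usd = 0.0245  # cost_usd from the expected output

gen_fps = frames / wall_s           # generation throughput on the H100
cost_per_frame = cost_usd / frames  # dollars per generated frame

print(f"{gen_fps:.1f} frames/s generated")  # ~19.7 frames/s
print(f"${cost_per_frame:.5f} per frame")   # ~$0.00051 per frame
```

So the engine generates at roughly 2x the rollout's 10 fps playback rate.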

What the rollout looks like

The mp4 is 48 frames at 480×640, 10 fps — about 5 seconds of video. Visually: the GR-1 humanoid arm executes the sequence of actions starting from the conditioning frame. Compare against the regression suite's reference rollout for the same (start_frame, actions) to verify quality.

Score quality if you want metrics

The v0.1.0 API doesn't return per-rollout quality scores. For real metrics, run our dreamengine.bench.regression suite, which computes PSNR / SSIM / LPIPS against ground-truth rollouts. Per-rollout scoring via the API (passing a ground-truth video as a quality reference) is on the roadmap, not in v0.1.0.
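For a quick local proxy before running the full suite, PSNR between two decoded frame stacks is a few lines of numpy. This `psnr` helper is illustrative, not the suite's implementation, and decoding the mp4s into arrays is left to your video library (e.g. imageio/ffmpeg):

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """PSNR in dB between two uint8 frame stacks of identical shape."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

# Toy check on stand-in frame stacks shaped like a decoded rollout:
ref = np.full((48, 480, 640, 3), 128, dtype=np.uint8)
test = ref.copy()
test[0] += 4  # perturb a single frame slightly
print(f"{psnr(ref, test):.1f} dB")  # ~52.9 dB
```

Identical stacks score infinite PSNR; typical "visually close" rollouts land in the 25-40 dB range, but the regression suite's thresholds are the real reference.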

Next

  • predict_batch — same fixture, K candidates in one fused forward pass.
  • Errors & retries — what dream.RateLimitError, ModelNotFoundError, etc. mean in practice.