Dream Engines

Quickstart

Five lines from pip install to a saved rollout. Walltime budget on a warm container: ~3 seconds.

1. Install

BASH
pip install dream-engine

Requires Python ≥3.10. Pulls httpx, numpy, pillow. To decode the returned mp4 into numpy frames (done lazily, on first rollout.frames access), add the [decode] extra:

BASH
pip install "dream-engine[decode]"
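Under the [decode] extra, decoding is lazy: the mp4 bytes are only parsed on the first rollout.frames access, and later accesses reuse the cached array. A minimal sketch of that pattern, with a placeholder decoder returning the documented frame shape (this is the general technique, not the SDK's actual code):

```python
from functools import cached_property

import numpy as np


class Rollout:
    """Sketch of lazy frame decoding; the decoder body is a stand-in."""

    decode_calls = 0  # instrumentation for this sketch only

    def __init__(self, mp4_bytes: bytes):
        self.mp4_bytes = mp4_bytes

    @cached_property
    def frames(self) -> np.ndarray:
        # Runs once, on first .frames access; later accesses hit the cache.
        type(self).decode_calls += 1
        # A real decoder (what the [decode] extra provides) would parse
        # self.mp4_bytes; here we return zeros with the documented shape.
        return np.zeros((48, 480, 640, 3), dtype=np.uint8)
```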

2. Get an API key

During the v0.1 beta, email hello@dreamengines.run for a key. Self-service signup at https://dreamengines.run/dashboard lands later. Then:

BASH
export DREAM_API_KEY="dre_..."

The SDK reads DREAM_API_KEY automatically; pass api_key= to Client(...) if you'd rather inject it explicitly.

If your account is on a private Modal deployment (typical during beta), also set the base URL:

BASH
export DREAM_BASE_URL="https://<your-deployment>.modal.run"

The SDK's default https://api.dreamengines.run is reserved for the public production endpoint, which lands at GA.

3. Run a rollout

PYTHON
import dream
client = dream.Client()
model = client.models.get("dreamdojo-2b-gr1")
rollout = model.predict(
    start_frame="start.png",
    actions="actions.npy",  # shape (48, 384) float32
)
print(f"cost: ${rollout.cost_usd}, wall: {rollout.wall_s:.2f}s")
rollout.save("rollout.mp4")

That's it. The mp4 contains 48 frames at 480×640, 10 fps.

What happens under the hood

  1. models.get(slug) hits GET /v1/models/{slug} to confirm the model spec exists and is the active model on the server.
  2. model.predict(...) POSTs the frame (PNG) and actions (npy) as multipart to /v1/predict. The server runs DreamDojo's chunked rectified-flow rollout on an H100.
  3. The response is the raw mp4. The SDK parses the X-DreamEngine-Estimated-Charge-USD and X-DreamEngine-Engine-Wall-Ms response headers into rollout.cost_usd / rollout.wall_s.
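The steps above can be reproduced directly with httpx (already an SDK dependency) if you ever need to bypass the SDK, say for debugging. The endpoint path and billing header names come from the steps above; the multipart field names and the Bearer auth scheme are assumptions:

```python
import io

import numpy as np


def parse_billing(headers) -> tuple[float, float]:
    """Turn the two billing response headers into (cost_usd, wall_s)."""
    cost = float(headers["X-DreamEngine-Estimated-Charge-USD"])
    wall_s = float(headers["X-DreamEngine-Engine-Wall-Ms"]) / 1000.0
    return cost, wall_s


def predict_raw(api_key: str, base_url: str,
                png_bytes: bytes, actions: np.ndarray) -> bytes:
    import httpx  # SDK dependency; imported here so parse_billing stands alone

    buf = io.BytesIO()
    np.save(buf, actions.astype(np.float32))  # serialize as .npy on the wire
    resp = httpx.post(
        f"{base_url}/v1/predict",
        headers={"Authorization": f"Bearer {api_key}"},  # auth scheme assumed
        files={
            "start_frame": ("start.png", png_bytes, "image/png"),
            "actions": ("actions.npy", buf.getvalue(),
                        "application/octet-stream"),
        },
        timeout=60.0,
    )
    resp.raise_for_status()
    cost, wall_s = parse_billing(resp.headers)
    print(f"cost: ${cost}, wall: {wall_s:.2f}s")
    return resp.content  # the raw mp4 bytes
```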

Don't have a frame and actions handy?

The SDK ships a deterministic synthetic example for smoke tests:

PYTHON
import dream

client = dream.Client()
img, actions = dream.examples.dreamdojo_grasp()
# img: (480, 640, 3) uint8 — gradient + crosshair
# actions: (48, 384) float32 — low-amplitude sinusoid
rollout = client.models.get("dreamdojo-2b-gr1").predict(
    start_frame=img, actions=actions
)

The synthetic actions are out-of-distribution — the rollout will look nonsensical, but the wire shape is exactly what the engine expects. For a realistic example with real teleop data, see the DreamDojo worked example.
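If you want to vary the synthetic inputs (different amplitude, different pattern) rather than use the canned example, they are just two numpy arrays with the wire shapes above. The gradient and sinusoid below are illustrative stand-ins, not a reimplementation of dream.examples.dreamdojo_grasp():

```python
import numpy as np


def synthetic_inputs() -> tuple[np.ndarray, np.ndarray]:
    """Build wire-shaped stand-ins: (480, 640, 3) uint8, (48, 384) float32."""
    # Horizontal gradient image (the crosshair from the SDK example is omitted).
    ramp = np.linspace(0, 255, 640, dtype=np.uint8)
    img = np.stack([np.broadcast_to(ramp, (480, 640))] * 3, axis=-1)

    # Low-amplitude sinusoid over time steps (rows) and action dims (columns).
    t = np.arange(48, dtype=np.float32)[:, None]    # (48, 1)
    k = np.arange(384, dtype=np.float32)[None, :]   # (1, 384)
    actions = (0.05 * np.sin(0.1 * t + 0.01 * k)).astype(np.float32)
    return img, actions
```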

Next

  • Authentication — how API keys + rate limits actually work.
  • predict reference — every argument, including passing start_frame as a file path, np.ndarray, or PIL.Image.
  • Visual MPC — score K candidate rollouts in one server roundtrip.

Got more than one row?

For bulk inference over a whole dataset (HF, S3, or local), see the bulk inference quickstart — one client.predict_many(...) call covers it.