mjlab.sim#

Simulation core.

Classes:

MujocoCfg

Configuration for MuJoCo simulation parameters.

Simulation

GPU-accelerated MuJoCo simulation powered by MJWarp.

SimulationCfg

SimulationCfg(*, nconmax: int | None = None, njmax: int | None = None, ls_parallel: bool = True, contact_sensor_maxmatch: int = 64, mujoco: mjlab.sim.sim.MujocoCfg = <factory>, nan_guard: mjlab.utils.nan_guard.NanGuardCfg = <factory>)

TorchArray

Warp array that behaves like a torch.Tensor with shared memory.

WarpBridge

Wraps mjwarp objects to expose Warp arrays as PyTorch tensors.

class mjlab.sim.MujocoCfg[source]#

Bases: object

Configuration for MuJoCo simulation parameters.

Attributes:

Methods:

apply(model)

Apply configuration settings to a compiled MjModel.

__init__([timestep, integrator, impratio, ...])

timestep: float = 0.002#
integrator: Literal['euler', 'implicitfast'] = 'implicitfast'#
impratio: float = 1.0#
cone: Literal['pyramidal', 'elliptic'] = 'pyramidal'#
jacobian: Literal['auto', 'dense', 'sparse'] = 'auto'#
solver: Literal['newton', 'cg', 'pgs'] = 'newton'#
iterations: int = 100#
tolerance: float = 1e-08#
ls_iterations: int = 50#
ls_tolerance: float = 0.01#
ccd_iterations: int = 50#
gravity: tuple[float, float, float] = (0, 0, -9.81)#
apply(model: MjModel) → None[source]#

Apply configuration settings to a compiled MjModel.

__init__(timestep: float = 0.002, integrator: Literal['euler', 'implicitfast'] = 'implicitfast', impratio: float = 1.0, cone: Literal['pyramidal', 'elliptic'] = 'pyramidal', jacobian: Literal['auto', 'dense', 'sparse'] = 'auto', solver: Literal['newton', 'cg', 'pgs'] = 'newton', iterations: int = 100, tolerance: float = 1e-08, ls_iterations: int = 50, ls_tolerance: float = 0.01, ccd_iterations: int = 50, gravity: tuple[float, float, float] = (0, 0, -9.81)) → None#
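For illustration, a MujocoCfg can be constructed with non-default solver settings and applied to a compiled model. The values below are arbitrary examples, not tuned recommendations:

```python
import mujoco

from mjlab.sim import MujocoCfg

# Arbitrary example values: an elliptic friction cone with a higher
# friction-to-normal impedance ratio and a smaller timestep.
cfg = MujocoCfg(
    timestep=0.001,
    cone="elliptic",
    impratio=10.0,
)

model = mujoco.MjModel.from_xml_string("<mujoco/>")
cfg.apply(model)  # writes these settings onto the compiled MjModel
```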
class mjlab.sim.Simulation[source]#

Bases: object

GPU-accelerated MuJoCo simulation powered by MJWarp.

CUDA Graph Capture#

On CUDA devices with memory pools enabled, the simulation captures CUDA graphs for step(), forward(), and reset() operations. Graph capture records a sequence of GPU kernels and their memory addresses, then replays the entire sequence with a single kernel launch, eliminating CPU overhead from repeated kernel dispatches.

Important: A captured graph holds pointers to the GPU arrays that existed at capture time. If those arrays are later replaced (e.g., via expand_model_fields()), the graph will still read from the old arrays, silently ignoring any new values. The expand_model_fields() method handles this automatically by calling create_graph() after replacing arrays.

If you write code that replaces model or data arrays after simulation initialization, you must call create_graph() afterward to re-capture the graphs with the new memory addresses.
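The pitfall can be sketched with ordinary Python references standing in for GPU arrays: a function that closes over a buffer keeps reading the object recorded at "capture" time, just as a captured graph keeps reading the arrays recorded when create_graph() ran.

```python
# Plain-Python analogy for CUDA graph capture: `capture()` records a
# reference to the current buffer, and `step()` always reads through
# that recorded reference.
buf = [0.0]

def capture():
    captured = buf          # records the buffer's identity at capture time
    def step():
        return captured[0]  # always reads the recorded buffer
    return step

step = capture()
buf[0] = 1.0                # in-place update: visible to the "graph"
assert step() == 1.0

buf = [2.0]                 # replacement: the "graph" still reads the old buffer
assert step() == 1.0        # silently stale

step = capture()            # re-capture, analogous to create_graph()
assert step() == 2.0
```

In-place writes (like `tensor[:] = value`) stay visible because the memory address is unchanged; only replacing the array object requires re-capture.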

Methods:

__init__(num_envs, cfg, model, device)

create_graph()

Capture CUDA graphs for step, forward, and reset operations.

expand_model_fields(fields)

Expand model fields to support per-environment parameters.

get_default_field(field)

Get the default value for a model field, caching for reuse.

forward()

step()

reset([env_ids])

Attributes:

mj_model

mj_data

wp_model

wp_data

data

model

default_model_fields

Default values for expanded model fields, used in domain randomization.

__init__(num_envs: int, cfg: SimulationCfg, model: MjModel, device: str)[source]#
create_graph() → None[source]#

Capture CUDA graphs for step, forward, and reset operations.

This method must be called whenever GPU arrays in the model or data are replaced after initialization. The captured graphs hold pointers to the arrays that existed at capture time. If those arrays are replaced, the graphs will silently read from the old arrays, ignoring any new values.

Called automatically by:

- __init__() during simulation initialization
- expand_model_fields() after replacing model arrays

On CPU devices or when memory pools are disabled, this is a no-op.

property mj_model: MjModel#
property mj_data: MjData#
property wp_model: mujoco_warp.Model#
property wp_data: mujoco_warp.Data#
property data: WarpBridge#
property model: WarpBridge#
property default_model_fields: dict[str, Tensor]#

Default values for expanded model fields, used in domain randomization.

expand_model_fields(fields: tuple[str, ...]) → None[source]#

Expand model fields to support per-environment parameters.

get_default_field(field: str) → Tensor[source]#

Get the default value for a model field, caching for reuse.

Returns the original values from the C MuJoCo model (mj_model), obtained from the final compiled scene spec before any randomization is applied. Not to be confused with the GPU Warp model (wp_model), which may hold randomized values.
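The caching behavior can be sketched in plain Python. The class name and the dict-backed stand-in for the compiled model are illustrative, not the real implementation:

```python
import copy

class DefaultFieldCache:
    """Sketch of the get_default_field() caching pattern (illustrative only)."""

    def __init__(self, model_fields):
        self._mj_model = model_fields  # stands in for the compiled C model
        self._cache = {}

    def get_default_field(self, field):
        # Cache the pristine value on first access so later domain
        # randomization is always centered on the original model values,
        # never on the (possibly already randomized) GPU wp_model.
        if field not in self._cache:
            self._cache[field] = copy.deepcopy(self._mj_model[field])
        return self._cache[field]

cache = DefaultFieldCache({"geom_friction": [1.0, 0.005, 0.0001]})
default = cache.get_default_field("geom_friction")
assert default == [1.0, 0.005, 0.0001]

# Repeated lookups reuse the cached copy.
assert cache.get_default_field("geom_friction") is default
```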

forward() → None[source]#
step() → None[source]#
reset(env_ids: Tensor | None = None) → None[source]#
class mjlab.sim.SimulationCfg[source]#

Bases: object

SimulationCfg(*, nconmax: int | None = None, njmax: int | None = None, ls_parallel: bool = True, contact_sensor_maxmatch: int = 64, mujoco: mjlab.sim.sim.MujocoCfg = <factory>, nan_guard: mjlab.utils.nan_guard.NanGuardCfg = <factory>)

Attributes:

nconmax

Number of contacts to allocate per world.

njmax

Number of constraints to allocate per world.

ls_parallel

contact_sensor_maxmatch

mujoco

nan_guard

Methods:

__init__(*[, nconmax, njmax, ls_parallel, ...])

nconmax: int | None = None#

Number of contacts to allocate per world.

Contacts are stored in large heterogeneous arrays shared across worlds: a single world may have more than nconmax contacts. If None, a heuristic value is used.

njmax: int | None = None#

Number of constraints to allocate per world.

Constraint arrays are batched by world: no world may have more than njmax constraints. If None, a heuristic value is used.

ls_parallel: bool = True#
contact_sensor_maxmatch: int = 64#
mujoco: MujocoCfg#
nan_guard: NanGuardCfg#
__init__(*, nconmax: int | None = None, njmax: int | None = None, ls_parallel: bool = True, contact_sensor_maxmatch: int = 64, mujoco: ~mjlab.sim.sim.MujocoCfg = <factory>, nan_guard: ~mjlab.utils.nan_guard.NanGuardCfg = <factory>) → None#
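As an illustration (the numbers are arbitrary, not recommendations), a configuration for a contact-heavy scene might raise the contact and constraint budgets explicitly rather than relying on the heuristics:

```python
from mjlab.sim import MujocoCfg, SimulationCfg

# Arbitrary example values for a contact-heavy scene.
cfg = SimulationCfg(
    nconmax=256,  # contact budget (see nconmax docs above)
    njmax=512,    # per-world constraint budget
    mujoco=MujocoCfg(timestep=0.005, solver="newton"),
)
```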
class mjlab.sim.TorchArray[source]#

Bases: object

Warp array that behaves like a torch.Tensor with shared memory.

Methods:

__init__(wp_array[, nworld])

Initialize the tensor proxy with a Warp array.

Attributes:

__init__(wp_array: warp.array, nworld: int | None = None) → None[source]#

Initialize the tensor proxy with a Warp array.

property wp_array: warp.array#
class mjlab.sim.WarpBridge[source]#

Bases: Generic[T]

Wraps mjwarp objects to expose Warp arrays as PyTorch tensors.

Automatically converts Warp array attributes to TorchArray objects on access, enabling direct PyTorch operations on simulation data. Recursively wraps nested structures that contain Warp arrays.

IMPORTANT: This wrapper is read-only. To modify array data, use in-place operations like obj.field[:] = value. Direct assignment like obj.field = new_array will raise an AttributeError to prevent accidental memory address changes that break CUDA graphs.
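The read-only rule can be sketched with a minimal stand-in wrapper. This is not the real WarpBridge, just an illustration of the attribute-assignment guard and why in-place writes are still allowed:

```python
class ReadOnlyBridge:
    """Illustrative stand-in for WarpBridge's read-only behavior."""

    def __init__(self, struct):
        object.__setattr__(self, "_struct", struct)

    def __getattr__(self, name):
        # Reads pass through to the wrapped struct.
        return getattr(self._struct, name)

    def __setattr__(self, name, value):
        # Rebinding would change the memory address that a captured
        # CUDA graph points at, so direct assignment is forbidden.
        raise AttributeError(
            f"read-only: use in-place ops like obj.{name}[:] = value"
        )

class Data:
    pass

d = Data()
d.qpos = [0.0, 0.0]
bridge = ReadOnlyBridge(d)

bridge.qpos[:] = [1.0, 2.0]  # OK: in-place write, address unchanged
assert d.qpos == [1.0, 2.0]

try:
    bridge.qpos = [3.0, 4.0]  # rebinding: raises AttributeError
except AttributeError:
    pass
else:
    raise AssertionError("expected AttributeError")
```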

Methods:

__init__(struct[, nworld])

clear_cache()

Clear the wrapped cache to force re-wrapping of arrays.

Attributes:

struct

Access the underlying wrapped struct.

__init__(struct: T, nworld: int | None = None) → None[source]#
property struct: T#

Access the underlying wrapped struct.

clear_cache() → None[source]#

Clear the wrapped cache to force re-wrapping of arrays.

This should be called after operations that modify the underlying warp arrays, such as expand_model_fields(), to ensure the cache reflects the updated arrays.