mjlab.sim#
Simulation core.
Classes:
- MujocoCfg: Configuration for MuJoCo simulation parameters.
- Simulation: GPU-accelerated MuJoCo simulation powered by MJWarp.
- SimulationCfg(*, nconmax: int | None = None, njmax: int | None = None, ls_parallel: bool = True, contact_sensor_maxmatch: int = 64, mujoco: mjlab.sim.sim.MujocoCfg = <factory>, nan_guard: mjlab.utils.nan_guard.NanGuardCfg = <factory>)
- TorchArray: Warp array that behaves like a torch.Tensor with shared memory.
- WarpBridge: Wraps mjwarp objects to expose Warp arrays as PyTorch tensors.
- class mjlab.sim.MujocoCfg[source]#
Bases: object
Configuration for MuJoCo simulation parameters.
Methods:
apply(model): Apply configuration settings to a compiled MjModel.
__init__([timestep, integrator, impratio, ...])
- __init__(timestep: float = 0.002, integrator: Literal['euler', 'implicitfast'] = 'implicitfast', impratio: float = 1.0, cone: Literal['pyramidal', 'elliptic'] = 'pyramidal', jacobian: Literal['auto', 'dense', 'sparse'] = 'auto', solver: Literal['newton', 'cg', 'pgs'] = 'newton', iterations: int = 100, tolerance: float = 1e-08, ls_iterations: int = 50, ls_tolerance: float = 0.01, ccd_iterations: int = 50, gravity: tuple[float, float, float] = (0, 0, -9.81)) → None#
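The fields above mirror MuJoCo solver options. As a rough, self-contained sketch of the pattern (using a hypothetical `Opt` dataclass as a stand-in; the real apply() targets a compiled mujoco.MjModel), the configuration simply stamps its values onto the model's option struct:

```python
from dataclasses import dataclass

# Hypothetical stand-in for the option struct on a compiled MjModel.
@dataclass
class Opt:
    timestep: float = 0.002
    gravity: tuple = (0.0, 0.0, -9.81)

@dataclass
class Cfg:
    timestep: float = 0.002
    gravity: tuple = (0.0, 0.0, -9.81)

    def apply(self, opt: Opt) -> None:
        # Copy every configured value onto the compiled model's options.
        opt.timestep = self.timestep
        opt.gravity = self.gravity

opt = Opt()
Cfg(timestep=0.004).apply(opt)
print(opt.timestep)  # 0.004
```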
- class mjlab.sim.Simulation[source]#
Bases: object
GPU-accelerated MuJoCo simulation powered by MJWarp.
CUDA Graph Capture#
On CUDA devices with memory pools enabled, the simulation captures CUDA graphs for step(), forward(), and reset() operations. Graph capture records a sequence of GPU kernels and their memory addresses, then replays the entire sequence with a single kernel launch, eliminating CPU overhead from repeated kernel dispatches.
Important: A captured graph holds pointers to the GPU arrays that existed at capture time. If those arrays are later replaced (e.g., via expand_model_fields()), the graph will still read from the old arrays, silently ignoring any new values. The expand_model_fields() method handles this automatically by calling create_graph() after replacing arrays.
If you write code that replaces model or data arrays after simulation initialization, you must call create_graph() afterward to re-capture the graphs with the new memory addresses.
Methods:
__init__(num_envs, cfg, model, device)
create_graph(): Capture CUDA graphs for step, forward, and reset operations.
expand_model_fields(fields): Expand model fields to support per-environment parameters.
get_default_field(field): Get the default value for a model field, caching for reuse.
forward()
step()
reset([env_ids])
Attributes:
default_model_fields: Default values for expanded model fields, used in domain randomization.
- __init__(num_envs: int, cfg: SimulationCfg, model: MjModel, device: str)[source]#
- create_graph() → None[source]#
Capture CUDA graphs for step, forward, and reset operations.
This method must be called whenever GPU arrays in the model or data are replaced after initialization. The captured graphs hold pointers to the arrays that existed at capture time. If those arrays are replaced, the graphs will silently read from the old arrays, ignoring any new values.
Called automatically by:
- __init__() during simulation initialization
- expand_model_fields() after replacing model arrays
On CPU devices or when memory pools are disabled, this is a no-op.
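The staleness hazard is easiest to see in a plain-Python analogy (no CUDA required): a captured graph behaves like a closure that recorded a reference to the buffer that existed at capture time, so replacing the buffer afterward leaves the replayed work pointing at the old storage:

```python
# Plain-Python analogy for graph capture: `replay` records a reference
# to the buffer that existed at "capture" time.
buffers = {"qpos": [0.0, 0.0]}

def capture(buffers):
    qpos = buffers["qpos"]        # address recorded at capture time
    def replay():
        qpos[0] += 1.0            # always writes through the recorded reference
    return replay

replay = capture(buffers)
replay()                          # updates the live buffer, as expected

buffers["qpos"] = [100.0, 0.0]    # buffer replaced after capture
replay()                          # silently updates the *old* buffer
print(buffers["qpos"])            # [100.0, 0.0]: new buffer never written
# Fix: run capture() again after replacing buffers, just as create_graph()
# must be re-run after replacing model or data arrays.
```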
- property mj_model: MjModel#
- property mj_data: MjData#
- property wp_model: mujoco_warp.Model#
- property wp_data: mujoco_warp.Data#
- property data: WarpBridge#
- property model: WarpBridge#
- property default_model_fields: dict[str, Tensor]#
Default values for expanded model fields, used in domain randomization.
- expand_model_fields(fields: tuple[str, ...]) → None[source]#
Expand model fields to support per-environment parameters.
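Conceptually, expansion turns a model field shared by all worlds into a batched copy with a leading environment dimension, so each world can hold its own (e.g. domain-randomized) values. A minimal sketch of that shape change, using NumPy arrays rather than Warp arrays:

```python
import numpy as np

# Sketch of per-environment expansion: a field of shape (n,) shared by
# all worlds becomes shape (num_envs, n), one independent row per world.
num_envs = 4
body_mass = np.array([1.0, 0.5, 2.0])         # shared default values
expanded = np.tile(body_mass, (num_envs, 1))  # (4, 3): per-env copies
expanded[2, 0] *= 1.5                         # randomize world 2 only
```

Because this replaces the underlying arrays, the real method re-captures the CUDA graphs afterward via create_graph(), as described above.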
- get_default_field(field: str) → Tensor[source]#
Get the default value for a model field, caching for reuse.
Returns the original values from the C MuJoCo model (mj_model), obtained from the final compiled scene spec before any randomization is applied. Not to be confused with the GPU Warp model (wp_model) which may have randomized values.
- class mjlab.sim.SimulationCfg[source]#
Bases: object
SimulationCfg(*, nconmax: int | None = None, njmax: int | None = None, ls_parallel: bool = True, contact_sensor_maxmatch: int = 64, mujoco: mjlab.sim.sim.MujocoCfg = <factory>, nan_guard: mjlab.utils.nan_guard.NanGuardCfg = <factory>)
Attributes:
nconmax: Number of contacts to allocate per world.
njmax: Number of constraints to allocate per world.
Methods:
__init__(*[, nconmax, njmax, ls_parallel, ...])
- nconmax: int | None = None#
Number of contacts to allocate per world.
Contacts exist in large heterogeneous arrays: one world may have more than nconmax contacts. If None, a heuristic value is used.
- njmax: int | None = None#
Number of constraints to allocate per world.
Constraint arrays are batched by world: no world may have more than njmax constraints. If None, a heuristic value is used.
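The two limits therefore behave differently: nconmax sizes a shared heterogeneous pool, while njmax is a hard per-world cap. An illustrative contrast of the two layouts (shapes only; not mjwarp's actual array definitions):

```python
import numpy as np

num_envs, nconmax, njmax = 3, 4, 2

# Contacts: one flat heterogeneous pool sized for all worlds together;
# a single world may exceed nconmax as long as the pool has free slots.
contact_pool = np.zeros(num_envs * nconmax)

# Constraints: batched per world, so njmax bounds every world strictly.
efc_rows = np.zeros((num_envs, njmax))
```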
- nan_guard: NanGuardCfg#
- class mjlab.sim.TorchArray[source]#
Bases: object
Warp array that behaves like a torch.Tensor with shared memory.
Methods:
__init__(wp_array[, nworld]): Initialize the tensor proxy with a Warp array.
Attributes:
- __init__(wp_array: warp.array, nworld: int | None = None) → None[source]#
Initialize the tensor proxy with a Warp array.
- property wp_array: warp.array#
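"Shared memory" here means the tensor view and the underlying Warp array alias the same storage, so in-place writes through either side are visible to the other. A self-contained NumPy analogy of that aliasing (no Warp or torch required):

```python
import numpy as np

# `view` aliases `backing`'s buffer without copying, just as a TorchArray
# exposes a Warp array's storage to PyTorch.
backing = np.zeros(3)
view = backing[:]        # basic slicing returns a view, not a copy
view[0] = 7.0            # write through the tensor-like view...
print(backing[0])        # 7.0: ...is visible in the original array
```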
- class mjlab.sim.WarpBridge[source]#
Bases: Generic[T]
Wraps mjwarp objects to expose Warp arrays as PyTorch tensors.
Automatically converts Warp array attributes to TorchArray objects on access, enabling direct PyTorch operations on simulation data. Recursively wraps nested structures that contain Warp arrays.
IMPORTANT: This wrapper is read-only. To modify array data, use in-place operations like obj.field[:] = value. Direct assignment like obj.field = new_array will raise an AttributeError to prevent accidental memory address changes that break CUDA graphs.
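The read-only rule can be illustrated with a small stand-alone wrapper (a simplification, not the real WarpBridge): attribute reads pass through to the wrapped struct, while attribute writes raise, so a field's memory address can never change behind a captured CUDA graph:

```python
class Bridge:
    """Illustrative read-only attribute wrapper (not the real WarpBridge)."""

    def __init__(self, struct):
        object.__setattr__(self, "_struct", struct)  # bypass the guard once

    def __getattr__(self, name):
        # Reads fall through to the wrapped struct.
        return getattr(self._struct, name)

    def __setattr__(self, name, value):
        # Writes are rejected to keep memory addresses stable.
        raise AttributeError(
            f"cannot rebind '{name}'; use in-place ops like "
            f"bridge.{name}[:] = value"
        )

class Data:
    def __init__(self):
        self.qpos = [0.0, 0.0]

bridge = Bridge(Data())
bridge.qpos[:] = [1.0, 2.0]       # OK: in-place write, address unchanged
try:
    bridge.qpos = [9.0, 9.0]      # rebinding would break captured graphs
except AttributeError as err:
    print(err)
```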
Methods:
__init__(struct[, nworld])
Clear the wrapped cache to force re-wrapping of arrays.
Attributes:
struct: Access the underlying wrapped struct.
- property struct: T#
Access the underlying wrapped struct.