mjlab.sensor


Base#

class mjlab.sensor.Sensor[source]#

Bases: ABC, Generic[T]

Base sensor interface with typed data and per-step caching.

Type parameter T specifies the type of data returned by the sensor. For example:
  • Sensor[torch.Tensor] for sensors returning raw tensors

  • Sensor[ContactData] for sensors returning structured contact data

Subclasses must:
  • Call super().__init__() in their __init__ method.

  • If overriding reset() or update(), call super() FIRST to invalidate the cache.
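For illustration, a hypothetical subclass honoring this contract might sketch out as follows. The class and attribute names here are invented; only the Sensor methods being overridden come from the interface documented above (edit_spec() and initialize() would also need to be implemented):

```python
import torch

from mjlab.sensor import Sensor


class HeightSensor(Sensor[torch.Tensor]):
  """Hypothetical sensor returning a [B] tensor of base heights."""

  def __init__(self) -> None:
    super().__init__()  # required: sets up the per-step data cache
    self._state: torch.Tensor | None = None

  def reset(self, env_ids=None) -> None:
    super().reset(env_ids)  # FIRST: invalidate the cache
    # ...then reset any internal state for env_ids...

  def update(self, dt: float) -> None:
    super().update(dt)  # FIRST: invalidate the cache
    # ...then advance any per-step internal state...
```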

requires_sensor_context: bool = False#

Whether this sensor needs a SensorContext (render context).

__init__() None[source]#
abstractmethod edit_spec(scene_spec: mujoco.MjSpec, entities: dict[str, Entity]) None[source]#

Edit the scene spec to add this sensor.

This is called during scene construction to add sensor elements to the MjSpec.

Parameters:
  • scene_spec – The scene MjSpec to edit.

  • entities – Dictionary of entities in the scene, keyed by name.

abstractmethod initialize(mj_model: MjModel, model: mujoco_warp.Model, data: mujoco_warp.Data, device: str) None[source]#

Initialize the sensor after model compilation.

This is called after the MjSpec is compiled into an MjModel and the simulation is ready to run. Use this to cache sensor indices, allocate buffers, etc.

Parameters:
  • mj_model – The compiled MuJoCo model.

  • model – The mjwarp model wrapper.

  • data – The mjwarp data arrays.

  • device – Device for tensor operations (e.g., “cuda”, “cpu”).

property data: T#

Get the current sensor data, using cached value if available.

This property returns the sensor’s current data in its specific type. The data type is specified by the type parameter T. The data is cached per-step and recomputed only when the cache is invalidated (after reset() or update() is called).

Returns:

The sensor data in the format specified by type parameter T.

reset(env_ids: Tensor | slice | None = None) None[source]#

Reset sensor state for specified environments.

Invalidates the data cache. Override in subclasses that maintain internal state, but call super().reset(env_ids) FIRST.

Parameters:

env_ids – Environment indices to reset. If None, reset all environments.

update(dt: float) None[source]#

Update sensor state after a simulation step.

Invalidates the data cache. Override in subclasses that need per-step updates, but call super().update(dt) FIRST.

Parameters:

dt – Time step in seconds.

debug_vis(visualizer: DebugVisualizer) None[source]#

Visualize sensor data for debugging.

Base implementation does nothing. Override in subclasses that support debug visualization.

Parameters:

visualizer – The debug visualizer to draw to.

class mjlab.sensor.SensorCfg[source]#

Base configuration for a sensor.

name: str#
abstractmethod build() Sensor[Any][source]#

Build sensor instance from this config.
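The config/build split is a factory pattern: configs hold declarative fields, and build() performs construction. A standalone sketch of the idea (stand-in classes, not mjlab code):

```python
from dataclasses import dataclass


class FakeSensor:
  """Stand-in for a Sensor subclass."""

  def __init__(self, name: str, gain: float) -> None:
    self.name = name
    self.gain = gain


@dataclass
class FakeSensorCfg:
  """Stand-in config: declarative fields plus a build() factory."""

  name: str
  gain: float = 1.0

  def build(self) -> FakeSensor:
    # Construction logic lives here, keeping the config itself
    # a plain, serializable bag of fields.
    return FakeSensor(self.name, self.gain)


sensor = FakeSensorCfg(name="imu", gain=2.0).build()
```

This keeps scene descriptions data-only until the scene is actually built.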

class mjlab.sensor.SensorContext[source]#

Container for shared sensing resources.

Manages the RenderContext used by both camera sensors (for rendering) and raycast sensors (for BVH-accelerated ray intersection). The actual graph capture and execution is handled by Simulation.

__init__(mj_model: mujoco.MjModel, model: mjwarp.Model, data: mjwarp.Data, camera_sensors: list[CameraSensor], raycast_sensors: list[RayCastSensor], device: str)[source]#
recreate(mj_model: MjModel, expanded_fields: set[str] | None = None) None[source]#

Recreate the render context after model fields are expanded.

Called by Simulation.expand_model_fields() for domain randomization.

prepare() None[source]#

Pre-graph: transform rays to world frame.

finalize() None[source]#

Post-graph: compute raycast hit positions.

get_rgb(cam_idx: int) Tensor[source]#

Get unpacked RGB data for a camera.

Parameters:

cam_idx – MuJoCo camera ID.

Returns:

Tensor of shape [num_envs, height, width, 3] (uint8).

get_depth(cam_idx: int) Tensor[source]#

Get depth data for a camera.

Parameters:

cam_idx – MuJoCo camera ID.

Returns:

Tensor of shape [num_envs, height, width, 1] (float32).

unpack_rgb() None[source]#

Unpack packed uint32 RGB data into separate channels.

Called from Simulation._sense_kernel() so it gets captured in the CUDA graph. No-op if no cameras need RGB.

Builtin Sensor#

class mjlab.sensor.BuiltinSensor[source]#

Bases: Sensor[Tensor]

Wrapper over MuJoCo builtin sensors.

Can add a new sensor to the spec, or wrap an existing sensor from entity XML. Returns raw MuJoCo sensordata as torch.Tensor with shape depending on sensor type (e.g., accelerometer: (N, 3), framequat: (N, 4)).

Note: Caching provides minimal benefit here since data access is just a tensor slice view into MuJoCo’s sensordata buffer.

__init__(cfg: BuiltinSensorCfg | None = None, name: str | None = None) None[source]#
classmethod from_existing(name: str) BuiltinSensor[source]#

Wrap an existing sensor already defined in entity XML.

edit_spec(scene_spec: MjSpec, entities: dict[str, Entity]) None[source]#

Edit the scene spec to add this sensor.

This is called during scene construction to add sensor elements to the MjSpec.

Parameters:
  • scene_spec – The scene MjSpec to edit.

  • entities – Dictionary of entities in the scene, keyed by name.

initialize(mj_model: MjModel, model: mujoco_warp.Model, data: mujoco_warp.Data, device: str) None[source]#

Initialize the sensor after model compilation.

This is called after the MjSpec is compiled into an MjModel and the simulation is ready to run. Use this to cache sensor indices, allocate buffers, etc.

Parameters:
  • mj_model – The compiled MuJoCo model.

  • model – The mjwarp model wrapper.

  • data – The mjwarp data arrays.

  • device – Device for tensor operations (e.g., “cuda”, “cpu”).

class mjlab.sensor.BuiltinSensorCfg[source]#
sensor_type: Literal['accelerometer', 'velocimeter', 'gyro', 'force', 'torque', 'magnetometer', 'rangefinder', 'jointpos', 'jointvel', 'jointlimitpos', 'jointlimitvel', 'jointlimitfrc', 'jointactuatorfrc', 'tendonpos', 'tendonvel', 'tendonactuatorfrc', 'actuatorpos', 'actuatorvel', 'actuatorfrc', 'framepos', 'framequat', 'framexaxis', 'frameyaxis', 'framezaxis', 'framelinvel', 'frameangvel', 'framelinacc', 'frameangacc', 'subtreecom', 'subtreelinvel', 'subtreeangmom', 'e_potential', 'e_kinetic', 'clock']#

Which builtin sensor to use.

obj: ObjRef | None = None#

The type and name of the object the sensor is attached to.

ref: ObjRef | None = None#

The type and name of the object to which the frame of reference is attached.

cutoff: float = 0.0#

When this value is positive, it limits the absolute value of the sensor output.

build() BuiltinSensor[source]#

Build sensor instance from this config.
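Putting the fields together, a config for an accelerometer attached to an entity site might look like the following. Field names are taken from the documentation above; the entity and site names are hypothetical:

```python
from mjlab.sensor import BuiltinSensorCfg, ObjRef

imu_cfg = BuiltinSensorCfg(
    name="base_accel",
    sensor_type="accelerometer",
    # Attach to the site named "imu" inside the "robot" entity; with the
    # entity prefix this resolves to "robot/imu".
    obj=ObjRef(type="site", name="imu", entity="robot"),
)
sensor = imu_cfg.build()  # BuiltinSensor whose .data is an (N, 3) tensor
```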

class mjlab.sensor.ObjRef[source]#

Reference to a MuJoCo object (body, joint, site, etc.).

Used to specify which object a sensor is attached to and its frame of reference. The entity field allows scoping objects to specific entity namespaces.

type: Literal['body', 'xbody', 'joint', 'geom', 'site', 'actuator', 'tendon', 'camera']#

Type of the object.

name: str#

Name of the object.

entity: str | None = None#

Optional entity prefix for the object name.

prefixed_name() str[source]#

Get the full name with entity prefix if applicable.

Contact Sensor#

class mjlab.sensor.ContactSensor[source]#

Bases: Sensor[ContactData]

Tracks contacts with automatic pattern expansion to multiple MuJoCo sensors.

__init__(cfg: ContactSensorCfg) None[source]#
edit_spec(scene_spec: MjSpec, entities: dict[str, Entity]) None[source]#

Expand patterns and add MuJoCo sensors (one per primary × field pair).

initialize(mj_model: MjModel, model: mujoco_warp.Model, data: mujoco_warp.Data, device: str) None[source]#

Map sensors to sensordata buffer and allocate air time state.

reset(env_ids: Tensor | slice | None = None) None[source]#

Reset sensor state for specified environments.

Invalidates the data cache. Override in subclasses that maintain internal state, but call super().reset(env_ids) FIRST.

Parameters:

env_ids – Environment indices to reset. If None, reset all environments.

update(dt: float) None[source]#

Update sensor state after a simulation step.

Invalidates the data cache. Override in subclasses that need per-step updates, but call super().update(dt) FIRST.

Parameters:

dt – Time step in seconds.

compute_first_contact(dt: float, abs_tol: float = 1e-08) Tensor[source]#

Returns [B, N] bool: True for contacts established within last dt seconds.

compute_first_air(dt: float, abs_tol: float = 1e-08) Tensor[source]#

Returns [B, N] bool: True for contacts broken within last dt seconds.

class mjlab.sensor.ContactSensorCfg[source]#

Tracks contacts between primary and secondary patterns.

Output shape: [B, N * num_slots] or [B, N * num_slots, 3], where N is the number of primaries.

Fields (choose subset):
  • found: 0=no contact, >0=match count before reduction

  • force, torque: 3D vectors in contact frame (or global if reduce=”netforce”)

  • dist: penetration depth

  • pos, normal, tangent: 3D vectors in global frame (normal: primary→secondary)

Reduction modes (selects top num_slots from all matches):
  • “none”: fast, non-deterministic

  • “mindist”, “maxforce”: closest/strongest contacts

  • “netforce”: sum all forces (global frame)

Policies:
  • secondary_policy: “first”, “any”, or “error” when secondary matches multiple

  • track_air_time: enables landing/takeoff detection

  • global_frame: rotates force/torque to global (requires normal+tangent fields)

primary: ContactMatch#
secondary: ContactMatch | None = None#
fields: tuple[str, ...] = ('found', 'force')#
reduce: Literal['none', 'mindist', 'maxforce', 'netforce'] = 'maxforce'#
num_slots: int = 1#
secondary_policy: Literal['first', 'any', 'error'] = 'first'#
track_air_time: bool = False#
global_frame: bool = False#
history_length: int = 0#

Number of substeps to store in history buffer for force/torque/dist fields.

When 0 (default): No history buffer is allocated. History fields (force_history, torque_history, dist_history) are None. Use the regular fields (force, torque, dist) for the current instantaneous values.

When >0: Allocates a history buffer that stores the last N substeps of contact data. Shape is [B, N, history_length, …] where index 0 is the most recent substep. Set to your decimation value to capture all substeps within one policy step.

Note: history_length=1 is redundant with the regular fields but provides a consistent [B, N, H, …] shape if your code expects a history dimension.

debug: bool = False#
build() ContactSensor[source]#

Build sensor instance from this config.
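As a worked example, a foot-contact sensor for locomotion might combine these fields as follows. Field names match the documentation above; the entity name and geom regex are hypothetical:

```python
from mjlab.sensor import ContactMatch, ContactSensorCfg

feet_contact_cfg = ContactSensorCfg(
    name="feet_contact",
    # Match foot geoms in the "robot" entity by regex.
    primary=ContactMatch(mode="geom", pattern=r".*_foot", entity="robot"),
    # secondary=None: track contacts against anything.
    fields=("found", "force"),
    reduce="maxforce",
    track_air_time=True,  # enables compute_first_contact()/compute_first_air()
)
sensor = feet_contact_cfg.build()
# After stepping: sensor.data.found is [B, N], sensor.data.force is [B, N, 3],
# and sensor.compute_first_contact(dt) gives a [B, N] landing mask.
```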

class mjlab.sensor.ContactData[source]#

Contact sensor output (only requested fields are populated).

found: Tensor | None = None#

[B, N] 0=no contact, >0=match count

force: Tensor | None = None#

[B, N, 3] contact frame (global if reduce=”netforce” or global_frame=True)

torque: Tensor | None = None#

[B, N, 3] contact frame (global if reduce=”netforce” or global_frame=True)

dist: Tensor | None = None#

[B, N] penetration depth

pos: Tensor | None = None#

[B, N, 3] global frame

normal: Tensor | None = None#

[B, N, 3] global frame, primary→secondary

tangent: Tensor | None = None#

[B, N, 3] global frame

current_air_time: Tensor | None = None#

[B, N] time in air (if track_air_time=True)

last_air_time: Tensor | None = None#

[B, N] duration of last air phase (if track_air_time=True)

current_contact_time: Tensor | None = None#

[B, N] time in contact (if track_air_time=True)

last_contact_time: Tensor | None = None#

[B, N] duration of last contact phase (if track_air_time=True)

force_history: Tensor | None = None#

[B, N, H, 3] contact forces over last H substeps (index 0 = most recent)

torque_history: Tensor | None = None#

[B, N, H, 3] contact torques over last H substeps (index 0 = most recent)

dist_history: Tensor | None = None#

[B, N, H] penetration depth over last H substeps (index 0 = most recent)

__init__(found: Tensor | None = None, force: Tensor | None = None, torque: Tensor | None = None, dist: Tensor | None = None, pos: Tensor | None = None, normal: Tensor | None = None, tangent: Tensor | None = None, current_air_time: Tensor | None = None, last_air_time: Tensor | None = None, current_contact_time: Tensor | None = None, last_contact_time: Tensor | None = None, force_history: Tensor | None = None, torque_history: Tensor | None = None, dist_history: Tensor | None = None) None#
class mjlab.sensor.ContactMatch[source]#

Specifies what to match on one side of a contact.

Fields:
  • mode – “geom”, “body”, or “subtree”.

  • pattern – Regex or tuple of regexes (expands within entity if specified).

  • entity – Entity name to search within (None = treat pattern as a literal MuJoCo name).

  • exclude – Filter out matches using these regex patterns or exact names.

mode: Literal['geom', 'body', 'subtree']#
pattern: str | tuple[str, ...]#
entity: str | None = None#
exclude: tuple[str, ...] = ()#

Ray Cast Sensor#

class mjlab.sensor.RayCastSensor[source]#

Bases: Sensor[RayCastData]

Raycast sensor for terrain and obstacle detection.

requires_sensor_context: bool = True#

Whether this sensor needs a SensorContext (render context).

__init__(cfg: RayCastSensorCfg) None[source]#
edit_spec(scene_spec: MjSpec, entities: dict[str, Entity]) None[source]#

Edit the scene spec to add this sensor.

This is called during scene construction to add sensor elements to the MjSpec.

Parameters:
  • scene_spec – The scene MjSpec to edit.

  • entities – Dictionary of entities in the scene, keyed by name.

initialize(mj_model: MjModel, model: mujoco_warp.Model, data: mujoco_warp.Data, device: str) None[source]#

Initialize the sensor after model compilation.

This is called after the MjSpec is compiled into an MjModel and the simulation is ready to run. Use this to cache sensor indices, allocate buffers, etc.

Parameters:
  • mj_model – The compiled MuJoCo model.

  • model – The mjwarp model wrapper.

  • data – The mjwarp data arrays.

  • device – Device for tensor operations (e.g., “cuda”, “cpu”).

set_context(ctx: SensorContext) None[source]#

Wire this sensor to a SensorContext for BVH-accelerated raycasting.

debug_vis(visualizer: DebugVisualizer) None[source]#

Visualize sensor data for debugging.

Base implementation does nothing. Override in subclasses that support debug visualization.

Parameters:

visualizer – The debug visualizer to draw to.

prepare_rays() None[source]#

PRE-GRAPH: Transform local rays to world frame.

Reads body/site/geom poses via PyTorch and writes world-frame ray origins and directions into Warp arrays. Caches the frame pose and world-frame tensors for postprocess_rays().

raycast_kernel(rc: mujoco_warp.RenderContext) None[source]#

IN-GRAPH: Execute BVH-accelerated raycast kernel.

postprocess_rays() None[source]#

POST-GRAPH: Convert Warp outputs to PyTorch, compute hit positions.

class mjlab.sensor.RayCastSensorCfg[source]#

Raycast sensor configuration.

Supports multiple ray patterns (grid, pinhole camera) and alignment modes.

class VizCfg[source]#

Visualization settings for debug rendering.

hit_color: tuple[float, float, float, float] = (0.0, 1.0, 0.0, 0.8)#

RGBA color for rays that hit a surface.

miss_color: tuple[float, float, float, float] = (1.0, 0.0, 0.0, 0.4)#

RGBA color for rays that miss.

hit_sphere_color: tuple[float, float, float, float] = (0.0, 1.0, 1.0, 1.0)#

RGBA color for spheres drawn at hit points.

hit_sphere_radius: float = 0.5#

Radius of spheres drawn at hit points (multiplier of meansize).

show_rays: bool = False#

Whether to draw ray arrows.

show_normals: bool = False#

Whether to draw surface normals at hit points.

normal_color: tuple[float, float, float, float] = (1.0, 1.0, 0.0, 1.0)#

RGBA color for surface normal arrows.

normal_length: float = 5.0#

Length of surface normal arrows (multiplier of meansize).

frame: ObjRef#

Body or site to attach rays to.

pattern: PatternCfg#

Ray pattern configuration. Defaults to GridPatternCfg.

ray_alignment: RayAlignment = 'base'#

How rays align with the frame.

  • “base”: Full position + rotation (default).

  • “yaw”: Position + yaw only, ignores pitch/roll (good for height maps).

  • “world”: Fixed in world frame, position only follows body.

max_distance: float = 10.0#

Maximum ray distance. Rays beyond this report -1.

exclude_parent_body: bool = True#

Exclude parent body from ray intersection tests.

include_geom_groups: tuple[int, ...] | None = (0, 1, 2)#

Geom groups (0-5) to include in raycasting.

Defaults to (0, 1, 2). Set to None to include all groups.

debug_vis: bool = False#

Enable debug visualization.

viz: VizCfg#

Visualization settings.

build() RayCastSensor[source]#

Build sensor instance from this config.
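A typical use is a downward-facing height scanner under the robot base. The following sketch uses only fields documented above; the entity and body names are hypothetical:

```python
from mjlab.sensor import GridPatternCfg, ObjRef, RayCastSensorCfg

height_scan_cfg = RayCastSensorCfg(
    name="height_scan",
    frame=ObjRef(type="body", name="base", entity="robot"),
    pattern=GridPatternCfg(size=(1.0, 1.0), resolution=0.1),
    ray_alignment="yaw",  # position + yaw only: good for height maps
    max_distance=5.0,
)
sensor = height_scan_cfg.build()
# After stepping: sensor.data.distances is [B, N], with -1 marking misses.
```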

class mjlab.sensor.RayCastData[source]#

Raycast sensor output data.

Note

Fields are views into GPU buffers and are valid until the next sense() call.

distances: Tensor#

[B, N] Distance to hit point. -1 if no hit.

normals_w: Tensor#

[B, N, 3] Surface normal at hit point (world frame). Zero if no hit.

hit_pos_w: Tensor#

[B, N, 3] Hit position in world frame. Ray origin if no hit.

pos_w: Tensor#

[B, 3] Frame position in world coordinates.

quat_w: Tensor#

[B, 4] Frame orientation quaternion (w, x, y, z) in world coordinates.

__init__(distances: Tensor, normals_w: Tensor, hit_pos_w: Tensor, pos_w: Tensor, quat_w: Tensor) None#
class mjlab.sensor.GridPatternCfg[source]#

Grid pattern - parallel rays in a 2D grid.

size: tuple[float, float] = (1.0, 1.0)#

Grid size (length, width) in meters.

resolution: float = 0.1#

Spacing between rays in meters.

direction: tuple[float, float, float] = (0.0, 0.0, -1.0)#

Ray direction in frame-local coordinates.

generate_rays(mj_model: MjModel | None, device: str) tuple[Tensor, Tensor][source]#

Generate ray pattern.

Parameters:
  • mj_model – MuJoCo model (unused for grid pattern).

  • device – Device for tensor operations.

Returns:

Tuple of (local_offsets [N, 3], local_directions [N, 3]).
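The grid presumably spans size symmetrically about the frame origin at the given resolution. A standalone sketch of that construction (the inclusive-endpoint handling is an assumption, not taken from the mjlab source):

```python
def grid_offsets(size, resolution):
  """Generate (x, y, 0) ray offsets on a 2D grid centered on the origin."""
  length, width = size
  nx = int(round(length / resolution)) + 1  # inclusive endpoints
  ny = int(round(width / resolution)) + 1
  offsets = []
  for i in range(nx):
    for j in range(ny):
      x = -length / 2 + i * resolution
      y = -width / 2 + j * resolution
      offsets.append((x, y, 0.0))
  return offsets


rays = grid_offsets((1.0, 1.0), 0.1)  # 11 x 11 = 121 parallel rays
```

Under this construction, the default 1.0 m grid at 0.1 m resolution yields 121 rays.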

class mjlab.sensor.PinholeCameraPatternCfg[source]#

Pinhole camera pattern - rays diverging from origin like a camera.

Can be configured with explicit parameters (width, height, fovy) or created via factory methods like from_mujoco_camera() or from_intrinsic_matrix().

width: int = 16#

Image width in pixels.

height: int = 12#

Image height in pixels.

fovy: float = 45.0#

Vertical field of view in degrees (matches MuJoCo convention).

classmethod from_mujoco_camera(camera_name: str) PinholeCameraPatternCfg[source]#

Create config that references a MuJoCo camera.

Camera parameters (resolution, FOV) are resolved at runtime from the model.

Parameters:

camera_name – Name of the MuJoCo camera to reference.

Returns:

Config that will resolve parameters from the MuJoCo camera.

classmethod from_intrinsic_matrix(intrinsic_matrix: list[float], width: int, height: int) PinholeCameraPatternCfg[source]#

Create from 3x3 intrinsic matrix [fx, 0, cx, 0, fy, cy, 0, 0, 1].

Parameters:
  • intrinsic_matrix – Flattened 3x3 intrinsic matrix.

  • width – Image width in pixels.

  • height – Image height in pixels.

Returns:

Config with fovy computed from the intrinsic matrix.
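Given the documented MuJoCo convention (vertical FOV), fovy presumably follows from fy and the image height via the standard pinhole relation fovy = 2·atan(height / (2·fy)). A quick check of that relationship (an assumption about the conversion, not taken from the mjlab source):

```python
import math


def fovy_from_intrinsics(fy: float, height: int) -> float:
  """Vertical field of view in degrees from focal length fy (pixels)."""
  return math.degrees(2.0 * math.atan(height / (2.0 * fy)))


fovy = fovy_from_intrinsics(fy=480.0, height=480)  # ~53.13 degrees
```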

generate_rays(mj_model: MjModel | None, device: str) tuple[Tensor, Tensor][source]#

Generate ray pattern.

Parameters:
  • mj_model – MuJoCo model (required if using from_mujoco_camera).

  • device – Device for tensor operations.

Returns:

Tuple of (local_offsets [N, 3], local_directions [N, 3]).

Camera Sensor#

class mjlab.sensor.CameraSensor[source]#

Bases: Sensor[CameraSensorData]

Camera sensor for RGB and depth rendering.

requires_sensor_context: bool = True#

Whether this sensor needs a SensorContext (render context).

__init__(cfg: CameraSensorCfg) None[source]#
edit_spec(scene_spec: MjSpec, entities: dict[str, Entity]) None[source]#

Edit the scene spec to add this sensor.

This is called during scene construction to add sensor elements to the MjSpec.

Parameters:
  • scene_spec – The scene MjSpec to edit.

  • entities – Dictionary of entities in the scene, keyed by name.

initialize(mj_model: MjModel, model: mujoco_warp.Model, data: mujoco_warp.Data, device: str) None[source]#

Initialize the sensor after model compilation.

This is called after the MjSpec is compiled into an MjModel and the simulation is ready to run. Use this to cache sensor indices, allocate buffers, etc.

Parameters:
  • mj_model – The compiled MuJoCo model.

  • model – The mjwarp model wrapper.

  • data – The mjwarp data arrays.

  • device – Device for tensor operations (e.g., “cuda”, “cpu”).

class mjlab.sensor.CameraSensorCfg[source]#

Configuration for a camera sensor.

A camera sensor can either wrap an existing MuJoCo camera (camera_name) or create a new one at the specified pos/quat. New cameras are added to the worldbody by default, or to a specific body via parent_body.

Note

All camera sensors in a scene must share identical values for use_textures, use_shadows, and enabled_geom_groups. This is a constraint of the underlying mujoco_warp rendering system.

camera_name: str | None = None#

Name of an existing MuJoCo camera to wrap.

If None, a new camera is created using pos/quat/fovy. If set, the sensor wraps the named camera instead of creating one.

parent_body: str | None = None#

Parent body to attach a new camera to.

Only used when camera_name is None (creating a new camera). If None, the camera is added to the worldbody. Use the full prefixed name (e.g., “robot/link_6”) to attach to an entity’s body. The pos/quat are then relative to the parent body frame.

pos: tuple[float, float, float] = (0.0, 0.0, 1.0)#

Camera position (used when creating a new camera).

World-frame if parent_body is None, otherwise relative to the parent body frame.

quat: tuple[float, float, float, float] = (1.0, 0.0, 0.0, 0.0)#

Camera orientation quaternion (w, x, y, z).

fovy: float | None = None#

Vertical field of view in degrees. None uses MuJoCo default.

width: int = 160#

Image width in pixels.

height: int = 120#

Image height in pixels.

data_types: tuple[Literal['rgb', 'depth'], ...] = ('rgb',)#

Data types to capture: any combination of “rgb” and “depth”.

use_textures: bool = True#

Whether to use textures in rendering.

use_shadows: bool = False#

Whether to use shadows in rendering.

enabled_geom_groups: tuple[int, ...] = (0, 1, 2)#

Geom groups (0-5) visible to the camera.

orthographic: bool = False#

Use orthographic projection instead of perspective.

clone_data: bool = False#

If True, clone tensors on each access.

Set to True if you modify the returned data in-place.

build() CameraSensor[source]#

Build sensor instance from this config.
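Combining these fields, a wrist camera attached to an entity body might be configured as follows. Field names come from the documentation above (the “robot/link_6” prefixed-name form is the one shown for parent_body); the pose values are hypothetical:

```python
from mjlab.sensor import CameraSensorCfg

wrist_cam_cfg = CameraSensorCfg(
    name="wrist_cam",
    # Attach a new camera to an entity body via its full prefixed name.
    parent_body="robot/link_6",
    pos=(0.05, 0.0, 0.02),  # relative to the parent body frame
    width=160,
    height=120,
    data_types=("rgb", "depth"),
)
sensor = wrist_cam_cfg.build()
# After stepping: sensor.data.rgb is [num_envs, 120, 160, 3] uint8 and
# sensor.data.depth is [num_envs, 120, 160, 1] float32.
```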

class mjlab.sensor.CameraSensorData[source]#

Camera sensor output data.

Shapes:
  • rgb: [num_envs, height, width, 3] (uint8)

  • depth: [num_envs, height, width, 1] (float32)

rgb: Tensor | None = None#

RGB image [num_envs, height, width, 3] (uint8). None if not enabled.

__init__(rgb: Tensor | None = None, depth: Tensor | None = None) None#
depth: Tensor | None = None#

Depth image [num_envs, height, width, 1] (float32). None if not enabled.