mjlab.sensor

mjlab.sensor#

Sensor implementations.

Classes:

BuiltinSensor

Wrapper over MuJoCo builtin sensors.

BuiltinSensorCfg

BuiltinSensorCfg(name: 'str', sensor_type: 'SensorType', obj: 'ObjRef | None' = None, ref: 'ObjRef | None' = None, cutoff: 'float' = 0.0)

ObjRef

Reference to a MuJoCo object (body, joint, site, etc.).

CameraSensor

Camera sensor for RGB and depth rendering.

CameraSensorCfg

Configuration for a camera sensor.

CameraSensorData

Camera sensor output data.

ContactData

Contact sensor output (only requested fields are populated).

ContactMatch

Specifies what to match on one side of a contact.

ContactSensor

Tracks contacts with automatic pattern expansion to multiple MuJoCo sensors.

ContactSensorCfg

Tracks contacts between primary and secondary patterns.

GridPatternCfg

Grid pattern - parallel rays in a 2D grid.

PinholeCameraPatternCfg

Pinhole camera pattern - rays diverging from origin like a camera.

RayCastData

Raycast sensor output data.

RayCastSensor

Raycast sensor for terrain and obstacle detection.

RayCastSensorCfg

Raycast sensor configuration.

Sensor

Base sensor interface with typed data and per-step caching.

SensorCfg

Base configuration for a sensor.

SensorContext

Container for shared sensing resources.

class mjlab.sensor.BuiltinSensor[source]#

Bases: Sensor[Tensor]

Wrapper over MuJoCo builtin sensors.

Can add a new sensor to the spec, or wrap an existing sensor from entity XML. Returns raw MuJoCo sensordata as a torch.Tensor whose shape depends on the sensor type (e.g., accelerometer: (N, 3), framequat: (N, 4)).

Note: Caching provides minimal benefit here since data access is just a tensor slice view into MuJoCo’s sensordata buffer.

Methods:

__init__([cfg, name])

from_existing(name)

Wrap an existing sensor already defined in entity XML.

edit_spec(scene_spec, entities)

Edit the scene spec to add this sensor.

initialize(mj_model, model, data, device)

Initialize the sensor after model compilation.

__init__(cfg: BuiltinSensorCfg | None = None, name: str | None = None) None[source]#
classmethod from_existing(name: str) BuiltinSensor[source]#

Wrap an existing sensor already defined in entity XML.

edit_spec(scene_spec: MjSpec, entities: dict[str, Entity]) None[source]#

Edit the scene spec to add this sensor.

This is called during scene construction to add sensor elements to the MjSpec.

Parameters:
  • scene_spec – The scene MjSpec to edit.

  • entities – Dictionary of entities in the scene, keyed by name.

initialize(mj_model: MjModel, model: mujoco_warp.Model, data: mujoco_warp.Data, device: str) None[source]#

Initialize the sensor after model compilation.

This is called after the MjSpec is compiled into an MjModel and the simulation is ready to run. Use this to cache sensor indices, allocate buffers, etc.

Parameters:
  • mj_model – The compiled MuJoCo model.

  • model – The mjwarp model wrapper.

  • data – The mjwarp data arrays.

  • device – Device for tensor operations (e.g., “cuda”, “cpu”).

class mjlab.sensor.BuiltinSensorCfg[source]#

Bases: SensorCfg

BuiltinSensorCfg(name: ‘str’, sensor_type: ‘SensorType’, obj: ‘ObjRef | None’ = None, ref: ‘ObjRef | None’ = None, cutoff: ‘float’ = 0.0)

Attributes:

sensor_type

Which builtin sensor to use.

obj

The type and name of the object the sensor is attached to.

ref

The type and name of the object to which the frame of reference is attached.

cutoff

When this value is positive, it limits the absolute value of the sensor output.

Methods:

build()

Build sensor instance from this config.

__init__(name, sensor_type[, obj, ref, cutoff])

sensor_type: Literal['accelerometer', 'velocimeter', 'gyro', 'force', 'torque', 'magnetometer', 'rangefinder', 'jointpos', 'jointvel', 'jointlimitpos', 'jointlimitvel', 'jointlimitfrc', 'jointactuatorfrc', 'tendonpos', 'tendonvel', 'tendonactuatorfrc', 'actuatorpos', 'actuatorvel', 'actuatorfrc', 'framepos', 'framequat', 'framexaxis', 'frameyaxis', 'framezaxis', 'framelinvel', 'frameangvel', 'framelinacc', 'frameangacc', 'subtreecom', 'subtreelinvel', 'subtreeangmom', 'e_potential', 'e_kinetic', 'clock']#

Which builtin sensor to use.

obj: ObjRef | None = None#

The type and name of the object the sensor is attached to.

ref: ObjRef | None = None#

The type and name of the object to which the frame of reference is attached.

cutoff: float = 0.0#

When this value is positive, it limits the absolute value of the sensor output.

build() BuiltinSensor[source]#

Build sensor instance from this config.

__init__(name: str, sensor_type: Literal['accelerometer', 'velocimeter', 'gyro', 'force', 'torque', 'magnetometer', 'rangefinder', 'jointpos', 'jointvel', 'jointlimitpos', 'jointlimitvel', 'jointlimitfrc', 'jointactuatorfrc', 'tendonpos', 'tendonvel', 'tendonactuatorfrc', 'actuatorpos', 'actuatorvel', 'actuatorfrc', 'framepos', 'framequat', 'framexaxis', 'frameyaxis', 'framezaxis', 'framelinvel', 'frameangvel', 'framelinacc', 'frameangacc', 'subtreecom', 'subtreelinvel', 'subtreeangmom', 'e_potential', 'e_kinetic', 'clock'], obj: ObjRef | None = None, ref: ObjRef | None = None, cutoff: float = 0.0) None#
name: str#
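
As a sketch of the configuration above, a gyro attached to an IMU site might be declared like this (based on the signatures documented on this page; the entity name "robot" and site name "imu" are hypothetical):

```python
from mjlab.sensor import BuiltinSensorCfg, ObjRef

# Gyro attached to a hypothetical "imu" site on a "robot" entity.
# Per the BuiltinSensor docs, the resulting tensor has shape (N, 3).
gyro_cfg = BuiltinSensorCfg(
    name="base_gyro",
    sensor_type="gyro",
    obj=ObjRef(type="site", name="imu", entity="robot"),
)
sensor = gyro_cfg.build()  # BuiltinSensor instance
```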
class mjlab.sensor.ObjRef[source]#

Bases: object

Reference to a MuJoCo object (body, joint, site, etc.).

Used to specify which object a sensor is attached to and its frame of reference. The entity field allows scoping objects to specific entity namespaces.

Attributes:

type

Type of the object.

name

Name of the object.

entity

Optional entity prefix for the object name.

Methods:

prefixed_name()

Get the full name with entity prefix if applicable.

__init__(type, name[, entity])

type: Literal['body', 'xbody', 'joint', 'geom', 'site', 'actuator', 'tendon', 'camera']#

Type of the object.

name: str#

Name of the object.

entity: str | None = None#

Optional entity prefix for the object name.

prefixed_name() str[source]#

Get the full name with entity prefix if applicable.

__init__(type: Literal['body', 'xbody', 'joint', 'geom', 'site', 'actuator', 'tendon', 'camera'], name: str, entity: str | None = None) None#
class mjlab.sensor.CameraSensor[source]#

Bases: Sensor[CameraSensorData]

Camera sensor for RGB and depth rendering.

Attributes:

requires_sensor_context

Whether this sensor needs a SensorContext (render context).

camera_name

camera_idx

Methods:

__init__(cfg)

edit_spec(scene_spec, entities)

Edit the scene spec to add this sensor.

initialize(mj_model, model, data, device)

Initialize the sensor after model compilation.

set_context(ctx)

requires_sensor_context: bool = True#

Whether this sensor needs a SensorContext (render context).

__init__(cfg: CameraSensorCfg) None[source]#
property camera_name: str#
property camera_idx: int#
edit_spec(scene_spec: MjSpec, entities: dict[str, Entity]) None[source]#

Edit the scene spec to add this sensor.

This is called during scene construction to add sensor elements to the MjSpec.

Parameters:
  • scene_spec – The scene MjSpec to edit.

  • entities – Dictionary of entities in the scene, keyed by name.

initialize(mj_model: MjModel, model: mujoco_warp.Model, data: mujoco_warp.Data, device: str) None[source]#

Initialize the sensor after model compilation.

This is called after the MjSpec is compiled into an MjModel and the simulation is ready to run. Use this to cache sensor indices, allocate buffers, etc.

Parameters:
  • mj_model – The compiled MuJoCo model.

  • model – The mjwarp model wrapper.

  • data – The mjwarp data arrays.

  • device – Device for tensor operations (e.g., “cuda”, “cpu”).

set_context(ctx: SensorContext) None[source]#
class mjlab.sensor.CameraSensorCfg[source]#

Bases: SensorCfg

Configuration for a camera sensor.

A camera sensor can either wrap an existing MuJoCo camera (camera_name) or create a new one at the specified pos/quat. New cameras are added to the worldbody by default, or to a specific body via parent_body.

Note

All camera sensors in a scene must share identical values for use_textures, use_shadows, and enabled_geom_groups. This is a constraint of the underlying mujoco_warp rendering system.

Attributes:

camera_name

Name of an existing MuJoCo camera to wrap.

parent_body

Parent body to attach a new camera to.

pos

Camera position (used when creating a new camera).

quat

Camera orientation quaternion (w, x, y, z).

fovy

Vertical field of view in degrees.

width

Image width in pixels.

height

Image height in pixels.

data_types

Data types to capture: any combination of "rgb" and "depth".

use_textures

Whether to use textures in rendering.

use_shadows

Whether to use shadows in rendering.

enabled_geom_groups

Geom groups (0-5) visible to the camera.

orthographic

Use orthographic projection instead of perspective.

clone_data

If True, clone tensors on each access.

Methods:

build()

Build sensor instance from this config.

__init__(name[, camera_name, parent_body, ...])

camera_name: str | None = None#

Name of an existing MuJoCo camera to wrap.

If None, a new camera is created using pos/quat/fovy. If set, the sensor wraps the named camera instead of creating one.

parent_body: str | None = None#

Parent body to attach a new camera to.

Only used when camera_name is None (creating a new camera). If None, the camera is added to the worldbody. Use the full prefixed name (e.g., “robot/link_6”) to attach to an entity’s body. The pos/quat are then relative to the parent body frame.

pos: tuple[float, float, float] = (0.0, 0.0, 1.0)#

Camera position (used when creating a new camera).

World-frame if parent_body is None, otherwise relative to the parent body frame.

quat: tuple[float, float, float, float] = (1.0, 0.0, 0.0, 0.0)#

Camera orientation quaternion (w, x, y, z).

fovy: float | None = None#

Vertical field of view in degrees. None uses MuJoCo default.

width: int = 160#

Image width in pixels.

height: int = 120#

Image height in pixels.

data_types: tuple[Literal['rgb', 'depth'], ...] = ('rgb',)#

Data types to capture: any combination of "rgb" and "depth".

use_textures: bool = True#

Whether to use textures in rendering.

use_shadows: bool = False#

Whether to use shadows in rendering.

enabled_geom_groups: tuple[int, ...] = (0, 1, 2)#

Geom groups (0-5) visible to the camera.

orthographic: bool = False#

Use orthographic projection instead of perspective.

clone_data: bool = False#

If True, clone tensors on each access.

Set to True if you modify the returned data in-place.

build() CameraSensor[source]#

Build sensor instance from this config.

__init__(name: str, camera_name: str | None = None, parent_body: str | None = None, pos: tuple[float, float, float] = (0.0, 0.0, 1.0), quat: tuple[float, float, float, float] = (1.0, 0.0, 0.0, 0.0), fovy: float | None = None, width: int = 160, height: int = 120, data_types: tuple[Literal['rgb', 'depth'], ...] = ('rgb',), use_textures: bool = True, use_shadows: bool = False, enabled_geom_groups: tuple[int, ...] = (0, 1, 2), orthographic: bool = False, clone_data: bool = False) None#
name: str#
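
Putting the fields above together, a wrist-mounted RGB-D camera might be configured like this (a sketch; the body name is hypothetical and follows the prefixed-name convention described for parent_body):

```python
from mjlab.sensor import CameraSensorCfg

# New 64x64 RGB-D camera attached to a hypothetical gripper body,
# 10 cm in front of the body along its x-axis.
wrist_cam = CameraSensorCfg(
    name="wrist_cam",
    parent_body="robot/link_6",  # full prefixed name, per the docs above
    pos=(0.1, 0.0, 0.0),
    width=64,
    height=64,
    data_types=("rgb", "depth"),
)
```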
class mjlab.sensor.CameraSensorData[source]#

Bases: object

Camera sensor output data.

Shapes:
  • rgb: [num_envs, height, width, 3] (uint8)

  • depth: [num_envs, height, width, 1] (float32)

Attributes:

rgb

RGB image [num_envs, height, width, 3] (uint8).

depth

Depth image [num_envs, height, width, 1] (float32).

Methods:

__init__([rgb, depth])

rgb: Tensor | None = None#

RGB image [num_envs, height, width, 3] (uint8). None if not enabled.

__init__(rgb: Tensor | None = None, depth: Tensor | None = None) None#
depth: Tensor | None = None#

Depth image [num_envs, height, width, 1] (float32). None if not enabled.

class mjlab.sensor.ContactData[source]#

Bases: object

Contact sensor output (only requested fields are populated).

Attributes:

found

[B, N] 0=no contact, >0=match count

force

[B, N, 3] contact frame (global if reduce="netforce" or global_frame=True)

torque

[B, N, 3] contact frame (global if reduce="netforce" or global_frame=True)

dist

[B, N] penetration depth

pos

[B, N, 3] global frame

normal

[B, N, 3] global frame, primary→secondary

tangent

[B, N, 3] global frame

current_air_time

[B, N] time in air (if track_air_time=True)

last_air_time

[B, N] duration of last air phase (if track_air_time=True)

current_contact_time

[B, N] time in contact (if track_air_time=True)

last_contact_time

[B, N] duration of last contact phase (if track_air_time=True)

force_history

[B, N, H, 3] contact forces over last H substeps (index 0 = most recent)

torque_history

[B, N, H, 3] contact torques over last H substeps (index 0 = most recent)

dist_history

[B, N, H] penetration depth over last H substeps (index 0 = most recent)

Methods:

__init__([found, force, torque, dist, pos, ...])

found: Tensor | None = None#

[B, N] 0=no contact, >0=match count

force: Tensor | None = None#

[B, N, 3] contact frame (global if reduce=”netforce” or global_frame=True)

torque: Tensor | None = None#

[B, N, 3] contact frame (global if reduce=”netforce” or global_frame=True)

dist: Tensor | None = None#

[B, N] penetration depth

pos: Tensor | None = None#

[B, N, 3] global frame

normal: Tensor | None = None#

[B, N, 3] global frame, primary→secondary

tangent: Tensor | None = None#

[B, N, 3] global frame

current_air_time: Tensor | None = None#

[B, N] time in air (if track_air_time=True)

last_air_time: Tensor | None = None#

[B, N] duration of last air phase (if track_air_time=True)

current_contact_time: Tensor | None = None#

[B, N] time in contact (if track_air_time=True)

last_contact_time: Tensor | None = None#

[B, N] duration of last contact phase (if track_air_time=True)

force_history: Tensor | None = None#

[B, N, H, 3] contact forces over last H substeps (index 0 = most recent)

torque_history: Tensor | None = None#

[B, N, H, 3] contact torques over last H substeps (index 0 = most recent)

dist_history: Tensor | None = None#

[B, N, H] penetration depth over last H substeps (index 0 = most recent)

__init__(found: Tensor | None = None, force: Tensor | None = None, torque: Tensor | None = None, dist: Tensor | None = None, pos: Tensor | None = None, normal: Tensor | None = None, tangent: Tensor | None = None, current_air_time: Tensor | None = None, last_air_time: Tensor | None = None, current_contact_time: Tensor | None = None, last_contact_time: Tensor | None = None, force_history: Tensor | None = None, torque_history: Tensor | None = None, dist_history: Tensor | None = None) None#
class mjlab.sensor.ContactMatch[source]#

Bases: object

Specifies what to match on one side of a contact.

  • mode – "geom", "body", or "subtree".

  • pattern – Regex or tuple of regexes (expands within entity if specified).

  • entity – Entity name to search within (None = treat pattern as a literal MuJoCo name).

  • exclude – Filter out matches using these regex patterns or exact names.

Methods:

__init__(mode, pattern[, entity, exclude])

mode: Literal['geom', 'body', 'subtree']#
pattern: str | tuple[str, ...]#
entity: str | None = None#
exclude: tuple[str, ...] = ()#
__init__(mode: Literal['geom', 'body', 'subtree'], pattern: str | tuple[str, ...], entity: str | None = None, exclude: tuple[str, ...] = ()) None#
class mjlab.sensor.ContactSensor[source]#

Bases: Sensor[ContactData]

Tracks contacts with automatic pattern expansion to multiple MuJoCo sensors.

Methods:

__init__(cfg)

edit_spec(scene_spec, entities)

Expand patterns and add MuJoCo sensors (one per primary x field pair).

initialize(mj_model, model, data, device)

Map sensors to sensordata buffer and allocate air time state.

reset([env_ids])

Reset sensor state for specified environments.

update(dt)

Update sensor state after a simulation step.

compute_first_contact(dt[, abs_tol])

Returns [B, N] bool: True for contacts established within last dt seconds.

compute_first_air(dt[, abs_tol])

Returns [B, N] bool: True for contacts broken within last dt seconds.

__init__(cfg: ContactSensorCfg) None[source]#
edit_spec(scene_spec: MjSpec, entities: dict[str, Entity]) None[source]#

Expand patterns and add MuJoCo sensors (one per primary x field pair).

initialize(mj_model: MjModel, model: mujoco_warp.Model, data: mujoco_warp.Data, device: str) None[source]#

Map sensors to sensordata buffer and allocate air time state.

reset(env_ids: Tensor | slice | None = None) None[source]#

Reset sensor state for specified environments.

Invalidates the data cache. Override in subclasses that maintain internal state, but call super().reset(env_ids) FIRST.

Parameters:

env_ids – Environment indices to reset. If None, reset all environments.

update(dt: float) None[source]#

Update sensor state after a simulation step.

Invalidates the data cache. Override in subclasses that need per-step updates, but call super().update(dt) FIRST.

Parameters:

dt – Time step in seconds.

compute_first_contact(dt: float, abs_tol: float = 1e-08) Tensor[source]#

Returns [B, N] bool: True for contacts established within last dt seconds.

compute_first_air(dt: float, abs_tol: float = 1e-08) Tensor[source]#

Returns [B, N] bool: True for contacts broken within last dt seconds.
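
The tolerance logic behind compute_first_contact() can be sketched in plain Python (an approximation, not mjlab's implementation: a contact is assumed "first" when its current contact time is positive but no larger than dt plus the tolerance):

```python
def first_contact(current_contact_time, dt, abs_tol=1e-8):
    """Per-element sketch of first-contact detection.

    A contact established within the last dt seconds has a positive
    current_contact_time that does not exceed dt (plus tolerance).
    """
    return [0.0 < t <= dt + abs_tol for t in current_contact_time]

# Airborne (t=0), just landed (t=0.004 <= dt), long-standing (t=0.2).
assert first_contact([0.0, 0.004, 0.2], dt=0.005) == [False, True, False]
```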

class mjlab.sensor.ContactSensorCfg[source]#

Bases: SensorCfg

Tracks contacts between primary and secondary patterns.

Output shape: [B, N * num_slots] or [B, N * num_slots, 3] where N = # of primaries

Fields (choose subset):
  • found: 0=no contact, >0=match count before reduction

  • force, torque: 3D vectors in contact frame (or global if reduce=”netforce”)

  • dist: penetration depth

  • pos, normal, tangent: 3D vectors in global frame (normal: primary→secondary)

Reduction modes (selects top num_slots from all matches):
  • “none”: fast, non-deterministic

  • “mindist”, “maxforce”: closest/strongest contacts

  • “netforce”: sum all forces (global frame)

Policies:
  • secondary_policy: “first”, “any”, or “error” when secondary matches multiple

  • track_air_time: enables landing/takeoff detection

  • global_frame: rotates force/torque to global (requires normal+tangent fields)

Attributes:

primary

secondary

fields

reduce

num_slots

secondary_policy

track_air_time

global_frame

history_length

Number of substeps to store in history buffer for force/torque/dist fields.

debug

Methods:

build()

Build sensor instance from this config.

__init__(name, primary[, secondary, fields, ...])

primary: ContactMatch#
secondary: ContactMatch | None = None#
fields: tuple[str, ...] = ('found', 'force')#
reduce: Literal['none', 'mindist', 'maxforce', 'netforce'] = 'maxforce'#
num_slots: int = 1#
secondary_policy: Literal['first', 'any', 'error'] = 'first'#
track_air_time: bool = False#
global_frame: bool = False#
history_length: int = 0#

Number of substeps to store in history buffer for force/torque/dist fields.

When 0 (default): No history buffer is allocated. History fields (force_history, torque_history, dist_history) are None. Use the regular fields (force, torque, dist) for the current instantaneous values.

When >0: Allocates a history buffer that stores the last N substeps of contact data. Shape is [B, N, history_length, …] where index 0 is the most recent substep. Set to your decimation value to capture all substeps within one policy step.

Note: history_length=1 is redundant with the regular fields but provides a consistent [B, N, H, …] shape if your code expects a history dimension.

debug: bool = False#
build() ContactSensor[source]#

Build sensor instance from this config.

__init__(name: str, primary: ContactMatch, secondary: ContactMatch | None = None, fields: tuple[str, ...] = ('found', 'force'), reduce: Literal['none', 'mindist', 'maxforce', 'netforce'] = 'maxforce', num_slots: int = 1, secondary_policy: Literal['first', 'any', 'error'] = 'first', track_air_time: bool = False, global_frame: bool = False, history_length: int = 0, debug: bool = False) None#
name: str#
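
For example, foot-ground contact tracking for a legged robot might be configured like this (a sketch using the signatures above; the foot geom naming pattern is hypothetical):

```python
from mjlab.sensor import ContactMatch, ContactSensorCfg

# Track contacts on all geoms of a hypothetical "robot" entity whose
# names end in "_foot" (e.g. "FL_foot"). No secondary pattern: match
# contacts against anything.
feet_contact = ContactSensorCfg(
    name="feet_contact",
    primary=ContactMatch(mode="geom", pattern=r".*_foot", entity="robot"),
    fields=("found", "force"),
    track_air_time=True,  # enables compute_first_contact()/compute_first_air()
)
```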
class mjlab.sensor.GridPatternCfg[source]#

Bases: object

Grid pattern - parallel rays in a 2D grid.

Attributes:

size

Grid size (length, width) in meters.

resolution

Spacing between rays in meters.

direction

Ray direction in frame-local coordinates.

Methods:

generate_rays(mj_model, device)

Generate ray pattern.

__init__([size, resolution, direction])

size: tuple[float, float] = (1.0, 1.0)#

Grid size (length, width) in meters.

resolution: float = 0.1#

Spacing between rays in meters.

direction: tuple[float, float, float] = (0.0, 0.0, -1.0)#

Ray direction in frame-local coordinates.

generate_rays(mj_model: MjModel | None, device: str) tuple[Tensor, Tensor][source]#

Generate ray pattern.

Parameters:
  • mj_model – MuJoCo model (unused for grid pattern).

  • device – Device for tensor operations.

Returns:

Tuple of (local_offsets [N, 3], local_directions [N, 3]).

__init__(size: tuple[float, float] = (1.0, 1.0), resolution: float = 0.1, direction: tuple[float, float, float] = (0.0, 0.0, -1.0)) None#
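
A self-contained approximation of the grid construction (an assumption, not mjlab source: rays are spaced resolution apart across size, centered on the frame origin, all sharing one direction):

```python
def grid_rays(size=(1.0, 1.0), resolution=0.5, direction=(0.0, 0.0, -1.0)):
    """Sketch of GridPatternCfg ray generation: parallel rays in a 2D grid."""
    def axis(extent):
        # Evenly spaced coordinates spanning [-extent/2, extent/2].
        n = int(round(extent / resolution)) + 1
        start = -extent / 2.0
        return [start + i * resolution for i in range(n)]

    offsets = [(x, y, 0.0) for x in axis(size[0]) for y in axis(size[1])]
    directions = [direction] * len(offsets)  # all rays are parallel
    return offsets, directions

offsets, dirs = grid_rays()  # 1 m x 1 m grid, 0.5 m spacing -> 3x3 rays
assert len(offsets) == 9
```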
class mjlab.sensor.PinholeCameraPatternCfg[source]#

Bases: object

Pinhole camera pattern - rays diverging from origin like a camera.

Can be configured with explicit parameters (width, height, fovy) or created via factory methods like from_mujoco_camera() or from_intrinsic_matrix().

Attributes:

width

Image width in pixels.

height

Image height in pixels.

fovy

Vertical field of view in degrees (matches MuJoCo convention).

Methods:

from_mujoco_camera(camera_name)

Create config that references a MuJoCo camera.

from_intrinsic_matrix(intrinsic_matrix, ...)

Create from 3x3 intrinsic matrix [fx, 0, cx, 0, fy, cy, 0, 0, 1].

generate_rays(mj_model, device)

Generate ray pattern.

__init__([width, height, fovy, _camera_name])

width: int = 16#

Image width in pixels.

height: int = 12#

Image height in pixels.

fovy: float = 45.0#

Vertical field of view in degrees (matches MuJoCo convention).

classmethod from_mujoco_camera(camera_name: str) PinholeCameraPatternCfg[source]#

Create config that references a MuJoCo camera.

Camera parameters (resolution, FOV) are resolved at runtime from the model.

Parameters:

camera_name – Name of the MuJoCo camera to reference.

Returns:

Config that will resolve parameters from the MuJoCo camera.

classmethod from_intrinsic_matrix(intrinsic_matrix: list[float], width: int, height: int) PinholeCameraPatternCfg[source]#

Create from 3x3 intrinsic matrix [fx, 0, cx, 0, fy, cy, 0, 0, 1].

Parameters:
  • intrinsic_matrix – Flattened 3x3 intrinsic matrix.

  • width – Image width in pixels.

  • height – Image height in pixels.

Returns:

Config with fovy computed from the intrinsic matrix.

generate_rays(mj_model: MjModel | None, device: str) tuple[Tensor, Tensor][source]#

Generate ray pattern.

Parameters:
  • mj_model – MuJoCo model (required if using from_mujoco_camera).

  • device – Device for tensor operations.

Returns:

Tuple of (local_offsets [N, 3], local_directions [N, 3]).

__init__(width: int = 16, height: int = 12, fovy: float = 45.0, _camera_name: str | None = None) None#
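
The fovy recovered by from_intrinsic_matrix() plausibly follows the standard pinhole relation fovy = 2 * atan(height / (2 * fy)); the formula below is an assumption based on that convention, not taken from mjlab source:

```python
import math

def fovy_from_intrinsics(intrinsic_matrix, height):
    """Vertical FOV in degrees from a flattened row-major 3x3 intrinsic
    matrix [fx, 0, cx, 0, fy, cy, 0, 0, 1] (standard pinhole relation)."""
    fy = intrinsic_matrix[4]
    return math.degrees(2.0 * math.atan2(height / 2.0, fy))
```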
class mjlab.sensor.RayCastData[source]#

Bases: object

Raycast sensor output data.

Note

Fields are views into GPU buffers and are valid until the next sense() call.

Attributes:

distances

[B, N] Distance to hit point.

normals_w

[B, N, 3] Surface normal at hit point (world frame).

hit_pos_w

[B, N, 3] Hit position in world frame.

pos_w

[B, 3] Frame position in world coordinates.

quat_w

[B, 4] Frame orientation quaternion (w, x, y, z) in world coordinates.

Methods:

__init__(distances, normals_w, hit_pos_w, ...)

distances: Tensor#

[B, N] Distance to hit point. -1 if no hit.

normals_w: Tensor#

[B, N, 3] Surface normal at hit point (world frame). Zero if no hit.

hit_pos_w: Tensor#

[B, N, 3] Hit position in world frame. Ray origin if no hit.

pos_w: Tensor#

[B, 3] Frame position in world coordinates.

quat_w: Tensor#

[B, 4] Frame orientation quaternion (w, x, y, z) in world coordinates.

__init__(distances: Tensor, normals_w: Tensor, hit_pos_w: Tensor, pos_w: Tensor, quat_w: Tensor) None#
class mjlab.sensor.RayCastSensor[source]#

Bases: Sensor[RayCastData]

Raycast sensor for terrain and obstacle detection.

Attributes:

requires_sensor_context

Whether this sensor needs a SensorContext (render context).

num_rays

Methods:

__init__(cfg)

edit_spec(scene_spec, entities)

Edit the scene spec to add this sensor.

initialize(mj_model, model, data, device)

Initialize the sensor after model compilation.

set_context(ctx)

Wire this sensor to a SensorContext for BVH-accelerated raycasting.

debug_vis(visualizer)

Visualize sensor data for debugging.

prepare_rays()

PRE-GRAPH: Transform local rays to world frame.

raycast_kernel(rc)

IN-GRAPH: Execute BVH-accelerated raycast kernel.

postprocess_rays()

POST-GRAPH: Convert Warp outputs to PyTorch, compute hit positions.

requires_sensor_context: bool = True#

Whether this sensor needs a SensorContext (render context).

__init__(cfg: RayCastSensorCfg) None[source]#
edit_spec(scene_spec: MjSpec, entities: dict[str, Entity]) None[source]#

Edit the scene spec to add this sensor.

This is called during scene construction to add sensor elements to the MjSpec.

Parameters:
  • scene_spec – The scene MjSpec to edit.

  • entities – Dictionary of entities in the scene, keyed by name.

initialize(mj_model: MjModel, model: mujoco_warp.Model, data: mujoco_warp.Data, device: str) None[source]#

Initialize the sensor after model compilation.

This is called after the MjSpec is compiled into an MjModel and the simulation is ready to run. Use this to cache sensor indices, allocate buffers, etc.

Parameters:
  • mj_model – The compiled MuJoCo model.

  • model – The mjwarp model wrapper.

  • data – The mjwarp data arrays.

  • device – Device for tensor operations (e.g., “cuda”, “cpu”).

set_context(ctx: SensorContext) None[source]#

Wire this sensor to a SensorContext for BVH-accelerated raycasting.

property num_rays: int#
debug_vis(visualizer: DebugVisualizer) None[source]#

Visualize sensor data for debugging.

Base implementation does nothing. Override in subclasses that support debug visualization.

Parameters:

visualizer – The debug visualizer to draw to.

prepare_rays() None[source]#

PRE-GRAPH: Transform local rays to world frame.

Reads body/site/geom poses via PyTorch and writes world-frame ray origins and directions into Warp arrays. Caches the frame pose and world-frame tensors for postprocess_rays().

raycast_kernel(rc: mujoco_warp.RenderContext) None[source]#

IN-GRAPH: Execute BVH-accelerated raycast kernel.

postprocess_rays() None[source]#

POST-GRAPH: Convert Warp outputs to PyTorch, compute hit positions.

class mjlab.sensor.RayCastSensorCfg[source]#

Bases: SensorCfg

Raycast sensor configuration.

Supports multiple ray patterns (grid, pinhole camera) and alignment modes.

Classes:

VizCfg

Visualization settings for debug rendering.

Attributes:

frame

Body or site to attach rays to.

pattern

Ray pattern configuration.

ray_alignment

How rays align with the frame.

max_distance

Maximum ray distance.

exclude_parent_body

Exclude parent body from ray intersection tests.

include_geom_groups

Geom groups (0-5) to include in raycasting.

debug_vis

Enable debug visualization.

viz

Visualization settings.

Methods:

build()

Build sensor instance from this config.

__init__(name, frame[, pattern, ...])

class VizCfg[source]#

Bases: object

Visualization settings for debug rendering.

Attributes:

hit_color

RGBA color for rays that hit a surface.

miss_color

RGBA color for rays that miss.

hit_sphere_color

RGBA color for spheres drawn at hit points.

hit_sphere_radius

Radius of spheres drawn at hit points (multiplier of meansize).

show_rays

Whether to draw ray arrows.

show_normals

Whether to draw surface normals at hit points.

normal_color

RGBA color for surface normal arrows.

normal_length

Length of surface normal arrows (multiplier of meansize).

Methods:

__init__([hit_color, miss_color, ...])

hit_color: tuple[float, float, float, float] = (0.0, 1.0, 0.0, 0.8)#

RGBA color for rays that hit a surface.

miss_color: tuple[float, float, float, float] = (1.0, 0.0, 0.0, 0.4)#

RGBA color for rays that miss.

hit_sphere_color: tuple[float, float, float, float] = (0.0, 1.0, 1.0, 1.0)#

RGBA color for spheres drawn at hit points.

hit_sphere_radius: float = 0.5#

Radius of spheres drawn at hit points (multiplier of meansize).

show_rays: bool = False#

Whether to draw ray arrows.

show_normals: bool = False#

Whether to draw surface normals at hit points.

normal_color: tuple[float, float, float, float] = (1.0, 1.0, 0.0, 1.0)#

RGBA color for surface normal arrows.

normal_length: float = 5.0#

Length of surface normal arrows (multiplier of meansize).

__init__(hit_color: tuple[float, float, float, float] = (0.0, 1.0, 0.0, 0.8), miss_color: tuple[float, float, float, float] = (1.0, 0.0, 0.0, 0.4), hit_sphere_color: tuple[float, float, float, float] = (0.0, 1.0, 1.0, 1.0), hit_sphere_radius: float = 0.5, show_rays: bool = False, show_normals: bool = False, normal_color: tuple[float, float, float, float] = (1.0, 1.0, 0.0, 1.0), normal_length: float = 5.0) None#
frame: ObjRef#

Body or site to attach rays to.

pattern: PatternCfg#

Ray pattern configuration. Defaults to GridPatternCfg.

ray_alignment: RayAlignment = 'base'#

How rays align with the frame.

  • “base”: Full position + rotation (default).

  • “yaw”: Position + yaw only, ignores pitch/roll (good for height maps).

  • “world”: Fixed in world frame, position only follows body.

max_distance: float = 10.0#

Maximum ray distance. Rays beyond this report -1.

exclude_parent_body: bool = True#

Exclude parent body from ray intersection tests.

include_geom_groups: tuple[int, ...] | None = (0, 1, 2)#

Geom groups (0-5) to include in raycasting.

Defaults to (0, 1, 2). Set to None to include all groups.

debug_vis: bool = False#

Enable debug visualization.

viz: VizCfg#

Visualization settings.

build() RayCastSensor[source]#

Build sensor instance from this config.

__init__(name: str, frame: ObjRef, pattern: PatternCfg = <factory>, ray_alignment: RayAlignment = 'base', max_distance: float = 10.0, exclude_parent_body: bool = True, include_geom_groups: tuple[int, ...] | None = (0, 1, 2), debug_vis: bool = False, viz: VizCfg = <factory>) None#
name: str#
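
Combining the pieces above, a height scanner under a robot base might be configured like this (a sketch; the entity and body names are hypothetical):

```python
from mjlab.sensor import GridPatternCfg, ObjRef, RayCastSensorCfg

# Grid of downward rays under a hypothetical "base" body that follows
# the base's yaw but ignores pitch/roll (good for height maps).
height_scan = RayCastSensorCfg(
    name="height_scan",
    frame=ObjRef(type="body", name="base", entity="robot"),
    pattern=GridPatternCfg(size=(1.6, 1.0), resolution=0.1),
    ray_alignment="yaw",
    max_distance=5.0,  # misses beyond this report -1
)
```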
class mjlab.sensor.Sensor[source]#

Bases: ABC, Generic[T]

Base sensor interface with typed data and per-step caching.

Type parameter T specifies the type of data returned by the sensor. For example:
  • Sensor[torch.Tensor] for sensors returning raw tensors

  • Sensor[ContactData] for sensors returning structured contact data

Subclasses should not forget to:
  • Call super().__init__() in their __init__ method

  • If overriding reset() or update(), call super() FIRST to invalidate the cache

Attributes:

requires_sensor_context

Whether this sensor needs a SensorContext (render context).

data

Get the current sensor data, using cached value if available.

Methods:

__init__()

edit_spec(scene_spec, entities)

Edit the scene spec to add this sensor.

initialize(mj_model, model, data, device)

Initialize the sensor after model compilation.

reset([env_ids])

Reset sensor state for specified environments.

update(dt)

Update sensor state after a simulation step.

debug_vis(visualizer)

Visualize sensor data for debugging.

requires_sensor_context: bool = False#

Whether this sensor needs a SensorContext (render context).

__init__() None[source]#
abstractmethod edit_spec(scene_spec: mujoco.MjSpec, entities: dict[str, Entity]) None[source]#

Edit the scene spec to add this sensor.

This is called during scene construction to add sensor elements to the MjSpec.

Parameters:
  • scene_spec – The scene MjSpec to edit.

  • entities – Dictionary of entities in the scene, keyed by name.

abstractmethod initialize(mj_model: MjModel, model: mujoco_warp.Model, data: mujoco_warp.Data, device: str) None[source]#

Initialize the sensor after model compilation.

This is called after the MjSpec is compiled into an MjModel and the simulation is ready to run. Use this to cache sensor indices, allocate buffers, etc.

Parameters:
  • mj_model – The compiled MuJoCo model.

  • model – The mjwarp model wrapper.

  • data – The mjwarp data arrays.

  • device – Device for tensor operations (e.g., “cuda”, “cpu”).

property data: T#

Get the current sensor data, using cached value if available.

This property returns the sensor’s current data in its specific type. The data type is specified by the type parameter T. The data is cached per-step and recomputed only when the cache is invalidated (after reset() or update() is called).

Returns:

The sensor data in the format specified by type parameter T.

reset(env_ids: Tensor | slice | None = None) None[source]#

Reset sensor state for specified environments.

Invalidates the data cache. Override in subclasses that maintain internal state, but call super().reset(env_ids) FIRST.

Parameters:

env_ids – Environment indices to reset. If None, reset all environments.

update(dt: float) None[source]#

Update sensor state after a simulation step.

Invalidates the data cache. Override in subclasses that need per-step updates, but call super().update(dt) FIRST.

Parameters:

dt – Time step in seconds.

debug_vis(visualizer: DebugVisualizer) None[source]#

Visualize sensor data for debugging.

Base implementation does nothing. Override in subclasses that support debug visualization.

Parameters:

visualizer – The debug visualizer to draw to.
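
The per-step caching contract described above can be illustrated with a minimal self-contained sketch (names here are illustrative, not mjlab's internals):

```python
from abc import ABC, abstractmethod

class CachedSensor(ABC):
    """Sketch of the Sensor caching contract: data is computed lazily and
    cached until reset() or update() invalidates it."""

    def __init__(self):
        self._cache = None

    @abstractmethod
    def _compute(self):
        ...

    @property
    def data(self):
        if self._cache is None:  # recompute only after invalidation
            self._cache = self._compute()
        return self._cache

    def reset(self, env_ids=None):
        self._cache = None  # subclasses must call super().reset() FIRST

    def update(self, dt):
        self._cache = None  # subclasses must call super().update() FIRST

class CountingSensor(CachedSensor):
    """Counts how often _compute() actually runs."""
    def __init__(self):
        super().__init__()
        self.calls = 0

    def _compute(self):
        self.calls += 1
        return self.calls

s = CountingSensor()
assert s.data == 1 and s.data == 1  # second access hits the cache
s.update(dt=0.01)
assert s.data == 2                  # update() invalidated the cache
```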

class mjlab.sensor.SensorCfg[source]#

Bases: ABC

Base configuration for a sensor.

Methods:

build()

Build sensor instance from this config.

__init__(name)

name: str#
abstractmethod build() Sensor[Any][source]#

Build sensor instance from this config.

__init__(name: str) None#
class mjlab.sensor.SensorContext[source]#

Bases: object

Container for shared sensing resources.

Manages the RenderContext used by both camera sensors (for rendering) and raycast sensors (for BVH-accelerated ray intersection). The actual graph capture and execution is handled by Simulation.

Methods:

__init__(mj_model, model, data, ...)

recreate(mj_model)

Recreate the render context after model fields are expanded.

prepare()

Pre-graph: transform rays to world frame.

finalize()

Post-graph: compute raycast hit positions.

get_rgb(cam_idx)

Get unpacked RGB data for a camera.

get_depth(cam_idx)

Get depth data for a camera.

unpack_rgb()

Unpack packed uint32 RGB data into separate channels.

__init__(mj_model: mujoco.MjModel, model: mjwarp.Model, data: mjwarp.Data, camera_sensors: list[CameraSensor], raycast_sensors: list[RayCastSensor], device: str)[source]#
property has_cameras: bool#
property has_raycasts: bool#
property render_context: mujoco_warp.RenderContext#
recreate(mj_model: MjModel) None[source]#

Recreate the render context after model fields are expanded.

Called by Simulation.expand_model_fields() for domain randomization.

prepare() None[source]#

Pre-graph: transform rays to world frame.

finalize() None[source]#

Post-graph: compute raycast hit positions.

get_rgb(cam_idx: int) Tensor[source]#

Get unpacked RGB data for a camera.

Parameters:

cam_idx – MuJoCo camera ID.

Returns:

Tensor of shape [num_envs, height, width, 3] (uint8).

get_depth(cam_idx: int) Tensor[source]#

Get depth data for a camera.

Parameters:

cam_idx – MuJoCo camera ID.

Returns:

Tensor of shape [num_envs, height, width, 1] (float32).

unpack_rgb() None[source]#

Unpack packed uint32 RGB data into separate channels.

Called from Simulation._sense_kernel() so it gets captured in the CUDA graph. No-op if no cameras need RGB.