mjlab.sensor#

Sensor implementations.

class mjlab.sensor.BuiltinSensor[source]#

Bases: Sensor[Tensor]

Wrapper over MuJoCo builtin sensors.

Can either add a new sensor to the spec or wrap an existing sensor defined in entity XML. Returns raw MuJoCo sensordata as a torch.Tensor whose shape depends on the sensor type (e.g., accelerometer: (N, 3), framequat: (N, 4)).

Note: Caching provides minimal benefit here since data access is just a tensor slice view into MuJoCo’s sensordata buffer.

__init__(cfg: BuiltinSensorCfg | None = None, name: str | None = None) None[source]#
edit_spec(scene_spec: MjSpec, entities: dict[str, Entity]) None[source]#

Edit the scene spec to add this sensor.

This is called during scene construction to add sensor elements to the MjSpec.

Parameters:
  • scene_spec – The scene MjSpec to edit.

  • entities – Dictionary of entities in the scene, keyed by name.

classmethod from_existing(name: str) BuiltinSensor[source]#

Wrap an existing sensor already defined in entity XML.

initialize(mj_model: MjModel, model: mujoco_warp.Model, data: mujoco_warp.Data, device: str) None[source]#

Initialize the sensor after model compilation.

This is called after the MjSpec is compiled into an MjModel and the simulation is ready to run. Use this to cache sensor indices, allocate buffers, etc.

Parameters:
  • mj_model – The compiled MuJoCo model.

  • model – The mjwarp model wrapper.

  • data – The mjwarp data arrays.

  • device – Device for tensor operations (e.g., “cuda”, “cpu”).

class mjlab.sensor.BuiltinSensorCfg[source]#

Bases: SensorCfg

__init__(name: str, sensor_type: Literal['accelerometer', 'velocimeter', 'gyro', 'force', 'torque', 'magnetometer', 'rangefinder', 'jointpos', 'jointvel', 'jointlimitpos', 'jointlimitvel', 'jointlimitfrc', 'jointactuatorfrc', 'tendonpos', 'tendonvel', 'tendonactuatorfrc', 'actuatorpos', 'actuatorvel', 'actuatorfrc', 'framepos', 'framequat', 'framexaxis', 'frameyaxis', 'framezaxis', 'framelinvel', 'frameangvel', 'framelinacc', 'frameangacc', 'subtreecom', 'subtreelinvel', 'subtreeangmom', 'e_potential', 'e_kinetic', 'clock'], obj: ObjRef | None = None, ref: ObjRef | None = None, cutoff: float = 0.0) None#
build() BuiltinSensor[source]#

Build sensor instance from this config.

cutoff: float = 0.0#

When this value is positive, it limits the absolute value of the sensor output.

obj: ObjRef | None = None#

The type and name of the object the sensor is attached to.

ref: ObjRef | None = None#

The type and name of the object to which the frame of reference is attached.

sensor_type: Literal['accelerometer', 'velocimeter', 'gyro', 'force', 'torque', 'magnetometer', 'rangefinder', 'jointpos', 'jointvel', 'jointlimitpos', 'jointlimitvel', 'jointlimitfrc', 'jointactuatorfrc', 'tendonpos', 'tendonvel', 'tendonactuatorfrc', 'actuatorpos', 'actuatorvel', 'actuatorfrc', 'framepos', 'framequat', 'framexaxis', 'frameyaxis', 'framezaxis', 'framelinvel', 'frameangvel', 'framelinacc', 'frameangacc', 'subtreecom', 'subtreelinvel', 'subtreeangmom', 'e_potential', 'e_kinetic', 'clock']#

Which builtin sensor to use.

class mjlab.sensor.ObjRef[source]#

Bases: object

Reference to a MuJoCo object (body, joint, site, etc.).

Used to specify which object a sensor is attached to and its frame of reference. The entity field allows scoping objects to specific entity namespaces.

__init__(type: Literal['body', 'xbody', 'joint', 'geom', 'site', 'actuator', 'tendon', 'camera'], name: str, entity: str | None = None) None#
entity: str | None = None#

Optional entity prefix for the object name.

prefixed_name() str[source]#

Get the full name with entity prefix if applicable.

type: Literal['body', 'xbody', 'joint', 'geom', 'site', 'actuator', 'tendon', 'camera']#

Type of the object.

name: str#

Name of the object.
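
A minimal configuration sketch combining BuiltinSensorCfg and ObjRef. The entity name "robot" and site name "imu" are assumptions for illustration, not names defined by mjlab:

```python
from mjlab.sensor import BuiltinSensorCfg, ObjRef

# Hypothetical setup: a gyro attached to a site named "imu" inside an
# entity registered as "robot" (both names are assumptions).
gyro_cfg = BuiltinSensorCfg(
    name="base_gyro",
    sensor_type="gyro",
    obj=ObjRef(type="site", name="imu", entity="robot"),
)
sensor = gyro_cfg.build()  # BuiltinSensor returning (N, 3) angular velocity
```

To wrap a sensor already declared in entity XML instead, use `BuiltinSensor.from_existing(name)` and skip the config entirely.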

class mjlab.sensor.CameraSensor[source]#

Bases: Sensor[CameraSensorData]

Camera sensor for RGB and depth rendering.

__init__(cfg: CameraSensorCfg) None[source]#
property camera_idx: int#
property camera_name: str#
edit_spec(scene_spec: MjSpec, entities: dict[str, Entity]) None[source]#

Edit the scene spec to add this sensor.

This is called during scene construction to add sensor elements to the MjSpec.

Parameters:
  • scene_spec – The scene MjSpec to edit.

  • entities – Dictionary of entities in the scene, keyed by name.

initialize(mj_model: MjModel, model: mujoco_warp.Model, data: mujoco_warp.Data, device: str) None[source]#

Initialize the sensor after model compilation.

This is called after the MjSpec is compiled into an MjModel and the simulation is ready to run. Use this to cache sensor indices, allocate buffers, etc.

Parameters:
  • mj_model – The compiled MuJoCo model.

  • model – The mjwarp model wrapper.

  • data – The mjwarp data arrays.

  • device – Device for tensor operations (e.g., “cuda”, “cpu”).

requires_sensor_context: bool = True#

Whether this sensor needs a SensorContext (render context).

set_context(ctx: SensorContext) None[source]#
class mjlab.sensor.CameraSensorCfg[source]#

Bases: SensorCfg

Configuration for a camera sensor.

A camera sensor can either wrap an existing MuJoCo camera (camera_name) or create a new one at the specified pos/quat. New cameras are added to the worldbody by default, or to a specific body via parent_body.

Note

All camera sensors in a scene must share identical values for use_textures, use_shadows, and enabled_geom_groups. This is a constraint of the underlying mujoco_warp rendering system.

__init__(name: str, camera_name: str | None = None, parent_body: str | None = None, pos: tuple[float, float, float] = (0.0, 0.0, 1.0), quat: tuple[float, float, float, float] = (1.0, 0.0, 0.0, 0.0), fovy: float | None = None, width: int = 160, height: int = 120, data_types: tuple[Literal['rgb', 'depth'], ...] = ('rgb',), use_textures: bool = True, use_shadows: bool = False, enabled_geom_groups: tuple[int, ...] = (0, 1, 2), orthographic: bool = False, clone_data: bool = False) None#
build() CameraSensor[source]#

Build sensor instance from this config.

camera_name: str | None = None#

Name of an existing MuJoCo camera to wrap.

If None, a new camera is created using pos/quat/fovy. If set, the sensor wraps the named camera instead of creating one.

clone_data: bool = False#

If True, clone tensors on each access.

Set to True if you modify the returned data in-place.

data_types: tuple[Literal['rgb', 'depth'], ...] = ('rgb',)#

Data types to capture: any combination of “rgb” and “depth”.

enabled_geom_groups: tuple[int, ...] = (0, 1, 2)#

Geom groups (0-5) visible to the camera.

fovy: float | None = None#

Vertical field of view in degrees. None uses MuJoCo default.

height: int = 120#

Image height in pixels.

orthographic: bool = False#

Use orthographic projection instead of perspective.

parent_body: str | None = None#

Parent body to attach a new camera to.

Only used when camera_name is None (creating a new camera). If None, the camera is added to the worldbody. Use the full prefixed name (e.g., “robot/link_6”) to attach to an entity’s body. The pos/quat are then relative to the parent body frame.

pos: tuple[float, float, float] = (0.0, 0.0, 1.0)#

Camera position (used when creating a new camera).

World-frame if parent_body is None, otherwise relative to the parent body frame.

quat: tuple[float, float, float, float] = (1.0, 0.0, 0.0, 0.0)#

Camera orientation quaternion (w, x, y, z).

use_shadows: bool = False#

Whether to use shadows in rendering.

use_textures: bool = True#

Whether to use textures in rendering.

width: int = 160#

Image width in pixels.
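
A configuration sketch for a newly created camera attached to an entity body. The prefixed body name "robot/link_6" follows the parent_body convention above but is an assumption for illustration:

```python
from mjlab.sensor import CameraSensorCfg

# Hypothetical setup: a wrist camera on body "robot/link_6"
# (the entity and body names are assumptions).
cam_cfg = CameraSensorCfg(
    name="wrist_cam",
    parent_body="robot/link_6",
    pos=(0.05, 0.0, 0.02),        # relative to the parent body frame
    quat=(1.0, 0.0, 0.0, 0.0),    # identity orientation (w, x, y, z)
    width=160,
    height=120,
    data_types=("rgb", "depth"),  # capture both image types
)
# After initialization, sensor.data.rgb is [num_envs, 120, 160, 3] uint8
# and sensor.data.depth is [num_envs, 120, 160, 1] float32.
```

Remember the note above: all camera sensors in one scene must agree on use_textures, use_shadows, and enabled_geom_groups.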

class mjlab.sensor.CameraSensorData[source]#

Bases: object

Camera sensor output data.

Shapes:
  • rgb: [num_envs, height, width, 3] (uint8)

  • depth: [num_envs, height, width, 1] (float32)

__init__(rgb: Tensor | None = None, depth: Tensor | None = None) None#
depth: Tensor | None = None#

Depth image [num_envs, height, width, 1] (float32). None if not enabled.

rgb: Tensor | None = None#

RGB image [num_envs, height, width, 3] (uint8). None if not enabled.

class mjlab.sensor.ContactData[source]#

Bases: object

Contact sensor output (only requested fields are populated).

__init__(found: Tensor | None = None, force: Tensor | None = None, torque: Tensor | None = None, dist: Tensor | None = None, pos: Tensor | None = None, normal: Tensor | None = None, tangent: Tensor | None = None, current_air_time: Tensor | None = None, last_air_time: Tensor | None = None, current_contact_time: Tensor | None = None, last_contact_time: Tensor | None = None, force_history: Tensor | None = None, torque_history: Tensor | None = None, dist_history: Tensor | None = None) None#
current_air_time: Tensor | None = None#

[B, N] time in air (if track_air_time=True)

current_contact_time: Tensor | None = None#

[B, N] time in contact (if track_air_time=True)

dist: Tensor | None = None#

[B, N] penetration depth

dist_history: Tensor | None = None#

[B, N, H] penetration depth over last H substeps (index 0 = most recent)

force: Tensor | None = None#

[B, N, 3] contact force in the contact frame (global frame if reduce=”netforce” or global_frame=True)

force_history: Tensor | None = None#

[B, N, H, 3] contact forces over last H substeps (index 0 = most recent)

found: Tensor | None = None#

[B, N] 0=no contact, >0=match count

last_air_time: Tensor | None = None#

[B, N] duration of last air phase (if track_air_time=True)

last_contact_time: Tensor | None = None#

[B, N] duration of last contact phase (if track_air_time=True)

normal: Tensor | None = None#

[B, N, 3] global frame, primary→secondary

pos: Tensor | None = None#

[B, N, 3] global frame

tangent: Tensor | None = None#

[B, N, 3] global frame

torque: Tensor | None = None#

[B, N, 3] contact torque in the contact frame (global frame if reduce=”netforce” or global_frame=True)

torque_history: Tensor | None = None#

[B, N, H, 3] contact torques over last H substeps (index 0 = most recent)

class mjlab.sensor.ContactMatch[source]#

Bases: object

Specifies what to match on one side of a contact.

Parameters:
  • mode – “geom”, “body”, or “subtree”.

  • pattern – Regex or tuple of regexes (expands within entity if specified).

  • entity – Entity name to search within (None = treat pattern as a literal MuJoCo name).

  • exclude – Filter out matches using these regex patterns or exact names.

__init__(mode: Literal['geom', 'body', 'subtree'], pattern: str | tuple[str, ...], entity: str | None = None, exclude: tuple[str, ...] = ()) None#
entity: str | None = None#
exclude: tuple[str, ...] = ()#
mode: Literal['geom', 'body', 'subtree']#
pattern: str | tuple[str, ...]#
class mjlab.sensor.ContactSensor[source]#

Bases: Sensor[ContactData]

Tracks contacts with automatic pattern expansion to multiple MuJoCo sensors.

__init__(cfg: ContactSensorCfg) None[source]#
compute_first_air(dt: float, abs_tol: float = 1e-08) Tensor[source]#

Returns [B, N] bool: True for contacts broken within last dt seconds.

compute_first_contact(dt: float, abs_tol: float = 1e-08) Tensor[source]#

Returns [B, N] bool: True for contacts established within last dt seconds.
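The logic behind these two methods can be illustrated with a scalar sketch. The actual methods operate on [B, N] air-time tensors and their exact tolerance handling may differ; this standalone reimplementation only shows the windowing idea:

```python
def first_contact(current_contact_time: float, dt: float, abs_tol: float = 1e-8) -> bool:
    # A contact counts as "first contact" if its contact timer is running
    # (> 0) but has not yet exceeded one dt window (plus tolerance).
    return 0.0 < current_contact_time <= dt + abs_tol

def first_air(current_air_time: float, dt: float, abs_tol: float = 1e-8) -> bool:
    # Mirror image: the contact broke within the last dt seconds.
    return 0.0 < current_air_time <= dt + abs_tol
```

Calling compute_first_contact(dt) once per policy step with dt equal to the policy period therefore flags touchdowns that happened during that step.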

edit_spec(scene_spec: MjSpec, entities: dict[str, Entity]) None[source]#

Expand patterns and add MuJoCo sensors (one per primary x field pair).

initialize(mj_model: MjModel, model: mujoco_warp.Model, data: mujoco_warp.Data, device: str) None[source]#

Map sensors to sensordata buffer and allocate air time state.

reset(env_ids: Tensor | slice | None = None) None[source]#

Reset sensor state for specified environments.

Invalidates the data cache. Override in subclasses that maintain internal state, but call super().reset(env_ids) FIRST.

Parameters:

env_ids – Environment indices to reset. If None, reset all environments.

update(dt: float) None[source]#

Update sensor state after a simulation step.

Invalidates the data cache. Override in subclasses that need per-step updates, but call super().update(dt) FIRST.

Parameters:

dt – Time step in seconds.

class mjlab.sensor.ContactSensorCfg[source]#

Bases: SensorCfg

Tracks contacts between primary and secondary patterns.

Output shape: [B, N * num_slots] or [B, N * num_slots, 3] where N = # of primaries

Fields (choose subset):
  • found: 0=no contact, >0=match count before reduction

  • force, torque: 3D vectors in contact frame (or global if reduce=”netforce”)

  • dist: penetration depth

  • pos, normal, tangent: 3D vectors in global frame (normal: primary→secondary)

Reduction modes (selects top num_slots from all matches):
  • “none”: fast, non-deterministic

  • “mindist”, “maxforce”: closest/strongest contacts

  • “netforce”: sum all forces (global frame)

Policies:
  • secondary_policy: “first”, “any”, or “error” when secondary matches multiple

  • track_air_time: enables landing/takeoff detection

  • global_frame: rotates force/torque to global (requires normal+tangent fields)

__init__(name: str, primary: ContactMatch, secondary: ContactMatch | None = None, fields: tuple[str, ...] = ('found', 'force'), reduce: Literal['none', 'mindist', 'maxforce', 'netforce'] = 'maxforce', num_slots: int = 1, secondary_policy: Literal['first', 'any', 'error'] = 'first', track_air_time: bool = False, global_frame: bool = False, history_length: int = 0, debug: bool = False) None#
build() ContactSensor[source]#

Build sensor instance from this config.

debug: bool = False#
fields: tuple[str, ...] = ('found', 'force')#
global_frame: bool = False#
history_length: int = 0#

Number of substeps to store in history buffer for force/torque/dist fields.

When 0 (default): No history buffer is allocated. History fields (force_history, torque_history, dist_history) are None. Use the regular fields (force, torque, dist) for the current instantaneous values.

When >0: Allocates a history buffer that stores the last N substeps of contact data. Shape is [B, N, history_length, …] where index 0 is the most recent substep. Set to your decimation value to capture all substeps within one policy step.

Note: history_length=1 is redundant with the regular fields but provides a consistent [B, N, H, …] shape if your code expects a history dimension.

num_slots: int = 1#
reduce: Literal['none', 'mindist', 'maxforce', 'netforce'] = 'maxforce'#
secondary: ContactMatch | None = None#
secondary_policy: Literal['first', 'any', 'error'] = 'first'#
track_air_time: bool = False#
primary: ContactMatch#
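
A configuration sketch tracking foot-ground contacts. The entity name "robot", the body-name regex, and the floor geom name are assumptions for illustration:

```python
from mjlab.sensor import ContactMatch, ContactSensorCfg

# Hypothetical setup: contacts between any body matching ".*_foot" in
# entity "robot" and a geom literally named "floor" (names are assumptions).
feet_contact = ContactSensorCfg(
    name="feet_contact",
    primary=ContactMatch(mode="body", pattern=r".*_foot", entity="robot"),
    secondary=ContactMatch(mode="geom", pattern="floor"),
    fields=("found", "force"),
    reduce="maxforce",      # keep the strongest contact per primary
    track_air_time=True,    # enables compute_first_contact()/compute_first_air()
)
```
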
class mjlab.sensor.GridPatternCfg[source]#

Bases: object

Grid pattern - parallel rays in a 2D grid.

__init__(size: tuple[float, float] = (1.0, 1.0), resolution: float = 0.1, direction: tuple[float, float, float] = (0.0, 0.0, -1.0)) None#
direction: tuple[float, float, float] = (0.0, 0.0, -1.0)#

Ray direction in frame-local coordinates.

generate_rays(mj_model: MjModel | None, device: str) tuple[Tensor, Tensor][source]#

Generate ray pattern.

Parameters:
  • mj_model – MuJoCo model (unused for grid pattern).

  • device – Device for tensor operations.

Returns:

Tuple of (local_offsets [N, 3], local_directions [N, 3]).

resolution: float = 0.1#

Spacing between rays in meters.

size: tuple[float, float] = (1.0, 1.0)#

Grid size (length, width) in meters.
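
One plausible layout (an assumption, not taken from the implementation) is rays spaced `resolution` apart across a `size`-sized rectangle, all sharing one direction. A standalone sketch of the resulting ray count under that assumption:

```python
def grid_ray_count(size: tuple, resolution: float) -> int:
    # Points per axis if the grid spans [-size/2, +size/2] inclusive
    # at the given spacing (assumed layout; the actual pattern may differ).
    nx = int(round(size[0] / resolution)) + 1
    ny = int(round(size[1] / resolution)) + 1
    return nx * ny

# A 1 m x 1 m grid at 0.1 m spacing gives an 11 x 11 = 121-ray pattern.
```
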

class mjlab.sensor.PinholeCameraPatternCfg[source]#

Bases: object

Pinhole camera pattern - rays diverging from origin like a camera.

Can be configured with explicit parameters (width, height, fovy) or created via factory methods like from_mujoco_camera() or from_intrinsic_matrix().

__init__(width: int = 16, height: int = 12, fovy: float = 45.0, _camera_name: str | None = None) None#
fovy: float = 45.0#

Vertical field of view in degrees (matches MuJoCo convention).

classmethod from_intrinsic_matrix(intrinsic_matrix: list[float], width: int, height: int) PinholeCameraPatternCfg[source]#

Create from 3x3 intrinsic matrix [fx, 0, cx, 0, fy, cy, 0, 0, 1].

Parameters:
  • intrinsic_matrix – Flattened 3x3 intrinsic matrix.

  • width – Image width in pixels.

  • height – Image height in pixels.

Returns:

Config with fovy computed from the intrinsic matrix.
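
The fovy recoverable from an intrinsic matrix follows the standard pinhole relation fovy = 2·atan(height / (2·fy)). A sketch of that math (an illustration of the relation, not the library code):

```python
import math

def fovy_from_intrinsics(fy: float, height: int) -> float:
    # Standard pinhole relation: half the image height subtends half the
    # vertical field of view at focal length fy (both in pixels).
    return math.degrees(2.0 * math.atan(height / (2.0 * fy)))

# fy equal to the image height gives 2 * atan(0.5), about 53.13 degrees.
```
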

classmethod from_mujoco_camera(camera_name: str) PinholeCameraPatternCfg[source]#

Create config that references a MuJoCo camera.

Camera parameters (resolution, FOV) are resolved at runtime from the model.

Parameters:

camera_name – Name of the MuJoCo camera to reference.

Returns:

Config that will resolve parameters from the MuJoCo camera.

generate_rays(mj_model: MjModel | None, device: str) tuple[Tensor, Tensor][source]#

Generate ray pattern.

Parameters:
  • mj_model – MuJoCo model (required if using from_mujoco_camera).

  • device – Device for tensor operations.

Returns:

Tuple of (local_offsets [N, 3], local_directions [N, 3]).

height: int = 12#

Image height in pixels.

width: int = 16#

Image width in pixels.

class mjlab.sensor.RayCastData[source]#

Bases: object

Raycast sensor output data.

Note

Fields are views into GPU buffers and are valid until the next sense() call.

__init__(distances: Tensor, normals_w: Tensor, hit_pos_w: Tensor, pos_w: Tensor, quat_w: Tensor) None#
distances: Tensor#

[B, N] Distance to hit point. -1 if no hit.

normals_w: Tensor#

[B, N, 3] Surface normal at hit point (world frame). Zero if no hit.

hit_pos_w: Tensor#

[B, N, 3] Hit position in world frame. Ray origin if no hit.

pos_w: Tensor#

[B, 3] Frame position in world coordinates.

quat_w: Tensor#

[B, 4] Frame orientation quaternion (w, x, y, z) in world coordinates.

class mjlab.sensor.RayCastSensor[source]#

Bases: Sensor[RayCastData]

Raycast sensor for terrain and obstacle detection.

__init__(cfg: RayCastSensorCfg) None[source]#
debug_vis(visualizer: DebugVisualizer) None[source]#

Visualize sensor data for debugging.

Base implementation does nothing. Override in subclasses that support debug visualization.

Parameters:

visualizer – The debug visualizer to draw to.

edit_spec(scene_spec: MjSpec, entities: dict[str, Entity]) None[source]#

Edit the scene spec to add this sensor.

This is called during scene construction to add sensor elements to the MjSpec.

Parameters:
  • scene_spec – The scene MjSpec to edit.

  • entities – Dictionary of entities in the scene, keyed by name.

initialize(mj_model: MjModel, model: mujoco_warp.Model, data: mujoco_warp.Data, device: str) None[source]#

Initialize the sensor after model compilation.

This is called after the MjSpec is compiled into an MjModel and the simulation is ready to run. Use this to cache sensor indices, allocate buffers, etc.

Parameters:
  • mj_model – The compiled MuJoCo model.

  • model – The mjwarp model wrapper.

  • data – The mjwarp data arrays.

  • device – Device for tensor operations (e.g., “cuda”, “cpu”).

property num_rays: int#
postprocess_rays() None[source]#

POST-GRAPH: Convert Warp outputs to PyTorch, compute hit positions.

prepare_rays() None[source]#

PRE-GRAPH: Transform local rays to world frame.

Reads body/site/geom poses via PyTorch and writes world-frame ray origins and directions into Warp arrays. Caches the frame pose and world-frame tensors for postprocess_rays().

raycast_kernel(rc: mujoco_warp.RenderContext) None[source]#

IN-GRAPH: Execute BVH-accelerated raycast kernel.

requires_sensor_context: bool = True#

Whether this sensor needs a SensorContext (render context).

set_context(ctx: SensorContext) None[source]#

Wire this sensor to a SensorContext for BVH-accelerated raycasting.

class mjlab.sensor.RayCastSensorCfg[source]#

Bases: SensorCfg

Raycast sensor configuration.

Supports multiple ray patterns (grid, pinhole camera) and alignment modes.

class VizCfg[source]#

Bases: object

Visualization settings for debug rendering.

__init__(hit_color: tuple[float, float, float, float] = (0.0, 1.0, 0.0, 0.8), miss_color: tuple[float, float, float, float] = (1.0, 0.0, 0.0, 0.4), hit_sphere_color: tuple[float, float, float, float] = (0.0, 1.0, 1.0, 1.0), hit_sphere_radius: float = 0.5, show_rays: bool = False, show_normals: bool = False, normal_color: tuple[float, float, float, float] = (1.0, 1.0, 0.0, 1.0), normal_length: float = 5.0) None#
hit_color: tuple[float, float, float, float] = (0.0, 1.0, 0.0, 0.8)#

RGBA color for rays that hit a surface.

hit_sphere_color: tuple[float, float, float, float] = (0.0, 1.0, 1.0, 1.0)#

RGBA color for spheres drawn at hit points.

hit_sphere_radius: float = 0.5#

Radius of spheres drawn at hit points (multiplier of meansize).

miss_color: tuple[float, float, float, float] = (1.0, 0.0, 0.0, 0.4)#

RGBA color for rays that miss.

normal_color: tuple[float, float, float, float] = (1.0, 1.0, 0.0, 1.0)#

RGBA color for surface normal arrows.

normal_length: float = 5.0#

Length of surface normal arrows (multiplier of meansize).

show_normals: bool = False#

Whether to draw surface normals at hit points.

show_rays: bool = False#

Whether to draw ray arrows.

__init__(name: str, frame: ObjRef, pattern: PatternCfg = <factory>, ray_alignment: RayAlignment = 'base', max_distance: float = 10.0, exclude_parent_body: bool = True, include_geom_groups: tuple[int, ...] | None = (0, 1, 2), debug_vis: bool = False, viz: VizCfg = <factory>) None#
build() RayCastSensor[source]#

Build sensor instance from this config.

debug_vis: bool = False#

Enable debug visualization.

exclude_parent_body: bool = True#

Exclude parent body from ray intersection tests.

include_geom_groups: tuple[int, ...] | None = (0, 1, 2)#

Geom groups (0-5) to include in raycasting.

Defaults to (0, 1, 2). Set to None to include all groups.

max_distance: float = 10.0#

Maximum ray distance. Rays beyond this report -1.

ray_alignment: RayAlignment = 'base'#

How rays align with the frame.

  • “base”: Full position + rotation (default).

  • “yaw”: Position + yaw only, ignores pitch/roll (good for height maps).

  • “world”: Fixed in world frame, position only follows body.

frame: ObjRef#

Body or site to attach rays to.

pattern: PatternCfg#

Ray pattern configuration. Defaults to GridPatternCfg.

viz: VizCfg#

Visualization settings.
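
A configuration sketch for a height scanner using the grid pattern and yaw alignment. The entity name "robot" and body name "base" are assumptions for illustration:

```python
from mjlab.sensor import GridPatternCfg, ObjRef, RayCastSensorCfg

# Hypothetical setup: downward rays under the base body of entity "robot"
# (entity and body names are assumptions).
height_scan = RayCastSensorCfg(
    name="height_scan",
    frame=ObjRef(type="body", name="base", entity="robot"),
    pattern=GridPatternCfg(size=(1.6, 1.0), resolution=0.1),
    ray_alignment="yaw",   # ignore pitch/roll: suited to height maps
    max_distance=10.0,     # misses report -1 in data.distances
)
```
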

class mjlab.sensor.Sensor[source]#

Bases: ABC, Generic[T]

Base sensor interface with typed data and per-step caching.

Type parameter T specifies the type of data returned by the sensor. For example:
  • Sensor[torch.Tensor] for sensors returning raw tensors

  • Sensor[ContactData] for sensors returning structured contact data

Subclasses must:
  • Call super().__init__() in their __init__ method

  • If overriding reset() or update(), call super() FIRST to invalidate the cache

__init__() None[source]#
property data: T#

Get the current sensor data, using cached value if available.

This property returns the sensor’s current data in its specific type. The data type is specified by the type parameter T. The data is cached per-step and recomputed only when the cache is invalidated (after reset() or update() is called).

Returns:

The sensor data in the format specified by type parameter T.

debug_vis(visualizer: DebugVisualizer) None[source]#

Visualize sensor data for debugging.

Base implementation does nothing. Override in subclasses that support debug visualization.

Parameters:

visualizer – The debug visualizer to draw to.

abstractmethod edit_spec(scene_spec: mujoco.MjSpec, entities: dict[str, Entity]) None[source]#

Edit the scene spec to add this sensor.

This is called during scene construction to add sensor elements to the MjSpec.

Parameters:
  • scene_spec – The scene MjSpec to edit.

  • entities – Dictionary of entities in the scene, keyed by name.

abstractmethod initialize(mj_model: MjModel, model: mujoco_warp.Model, data: mujoco_warp.Data, device: str) None[source]#

Initialize the sensor after model compilation.

This is called after the MjSpec is compiled into an MjModel and the simulation is ready to run. Use this to cache sensor indices, allocate buffers, etc.

Parameters:
  • mj_model – The compiled MuJoCo model.

  • model – The mjwarp model wrapper.

  • data – The mjwarp data arrays.

  • device – Device for tensor operations (e.g., “cuda”, “cpu”).

requires_sensor_context: bool = False#

Whether this sensor needs a SensorContext (render context).

reset(env_ids: Tensor | slice | None = None) None[source]#

Reset sensor state for specified environments.

Invalidates the data cache. Override in subclasses that maintain internal state, but call super().reset(env_ids) FIRST.

Parameters:

env_ids – Environment indices to reset. If None, reset all environments.

update(dt: float) None[source]#

Update sensor state after a simulation step.

Invalidates the data cache. Override in subclasses that need per-step updates, but call super().update(dt) FIRST.

Parameters:

dt – Time step in seconds.
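
The per-step caching contract can be shown with a simplified standalone base class. This is an illustration of the pattern described above, not the actual mjlab implementation:

```python
class CachingSensorSketch:
    """Minimal sketch of the data-caching contract."""

    def __init__(self) -> None:
        self._cache = None

    def _compute(self):
        # Subclasses would read simulation buffers here.
        raise NotImplementedError

    @property
    def data(self):
        # Recompute only if the cache was invalidated.
        if self._cache is None:
            self._cache = self._compute()
        return self._cache

    def update(self, dt: float) -> None:
        # Subclasses overriding this must call super().update(dt) FIRST.
        self._cache = None

    def reset(self, env_ids=None) -> None:
        # Same rule: super().reset(env_ids) FIRST, then touch own state.
        self._cache = None

class CountingSensor(CachingSensorSketch):
    """Toy subclass that counts how often data is actually recomputed."""

    def __init__(self) -> None:
        super().__init__()
        self.computes = 0

    def _compute(self):
        self.computes += 1
        return self.computes
```

Repeated `.data` reads within one step hit the cache; `update()` or `reset()` forces the next read to recompute, which is why overriding subclasses must invalidate via super() before touching their own state.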

class mjlab.sensor.SensorCfg[source]#

Bases: ABC

Base configuration for a sensor.

__init__(name: str) None#
abstractmethod build() Sensor[Any][source]#

Build sensor instance from this config.

name: str#
class mjlab.sensor.SensorContext[source]#

Bases: object

Container for shared sensing resources.

Manages the RenderContext used by both camera sensors (for rendering) and raycast sensors (for BVH-accelerated ray intersection). The actual graph capture and execution is handled by Simulation.

__init__(mj_model: mujoco.MjModel, model: mjwarp.Model, data: mjwarp.Data, camera_sensors: list[CameraSensor], raycast_sensors: list[RayCastSensor], device: str)[source]#
finalize() None[source]#

Post-graph: compute raycast hit positions.

get_depth(cam_idx: int) Tensor[source]#

Get depth data for a camera.

Parameters:

cam_idx – MuJoCo camera ID.

Returns:

Tensor of shape [num_envs, height, width, 1] (float32).

get_rgb(cam_idx: int) Tensor[source]#

Get unpacked RGB data for a camera.

Parameters:

cam_idx – MuJoCo camera ID.

Returns:

Tensor of shape [num_envs, height, width, 3] (uint8).

property has_cameras: bool#
property has_raycasts: bool#
prepare() None[source]#

Pre-graph: transform rays to world frame.

recreate(mj_model: MjModel) None[source]#

Recreate the render context after model fields are expanded.

Called by Simulation.expand_model_fields() for domain randomization.

property render_context: mujoco_warp.RenderContext#
unpack_rgb() None[source]#

Unpack packed uint32 RGB data into separate channels.

Called from Simulation._sense_kernel() so it gets captured in the CUDA graph. No-op if no cameras need RGB.