src.utils.visualization package

Submodules

src.utils.visualization.camera module

class src.utils.visualization.camera.EcmControl(q, yaw_idx, pitch_idx, insertion_idx, yaw_step_rad=0.017453292519943295, pitch_step_rad=0.017453292519943295, insertion_step_m=0.002)[source]

Bases: object

Mutable joint-space state for interactive ECM camera control.

q

Current joint configuration, shape (dof,).

yaw_idx

Index of the yaw joint in q, or None if absent.

pitch_idx

Index of the pitch joint in q, or None if absent.

insertion_idx

Index of the insertion joint in q, or None if absent.

yaw_step_rad

Angle increment (rad) applied per key press for yaw.

pitch_step_rad

Angle increment (rad) applied per key press for pitch.

insertion_step_m

Translation increment (m) applied per key press for insertion.

insertion_idx: int | None
insertion_step_m: float = 0.002
pitch_idx: int | None
pitch_step_rad: float = 0.017453292519943295
q: ndarray
yaw_idx: int | None
yaw_step_rad: float = 0.017453292519943295
src.utils.visualization.camera.build_arg_parser()[source]

Return an argument parser for the ECM camera simulation script.

Adds --cameras (1 or 2) and --scope-deg (0 or 30) arguments. Defaults come from the project settings.py.

Return type:

ArgumentParser

Returns:

Configured argparse.ArgumentParser.
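A minimal sketch of what such a parser looks like, using only the two flags named above; the exact defaults and help strings come from the project's settings.py and are assumptions here:

```python
import argparse

def build_arg_parser() -> argparse.ArgumentParser:
    """Sketch of the documented parser; defaults are illustrative only."""
    parser = argparse.ArgumentParser(description="ECM camera simulation")
    parser.add_argument("--cameras", type=int, choices=(1, 2), default=1,
                        help="1 for mono, 2 for stereo")
    parser.add_argument("--scope-deg", type=int, choices=(0, 30), default=0,
                        help="scope tilt angle in degrees (0 or 30)")
    return parser

args = build_arg_parser().parse_args(["--cameras", "2", "--scope-deg", "30"])
```

Note that argparse converts the dashed flag `--scope-deg` into the attribute `args.scope_deg`.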

src.utils.visualization.camera.build_ecm_control_from_camera_pose(camera_world_tf, robot_root, initial_q=None)[source]

Build an ECM model and control state so the camera aligns with a desired pose.

Solves for the base transform \(T_\text{base}\) such that the tool pose at the zero (or initial_q) configuration matches camera_world_tf:

\[T_\text{base} = T_\text{camera} \cdot T_\text{zero}^{-1}\]
Parameters:
  • camera_world_tf (ndarray) – Desired (4, 4) camera pose in world frame.

  • robot_root (str | Path) – Path to the URDF package root passed to ECM.

  • initial_q (ndarray | None) – Optional initial joint configuration, shape (dof,). Defaults to all zeros.

Return type:

tuple[ECM, EcmControl]

Returns:

(ecm, ctrl) — ECM model with base transform set and an EcmControl initialised to initial_q.
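The base-transform solve above is a one-liner in numpy. The sketch below uses hypothetical poses (a tool that sits 0.1 m along the base z-axis at the zero configuration) purely to verify the identity \(T_\text{base} \cdot T_\text{zero} = T_\text{camera}\):

```python
import numpy as np

def solve_base_transform(camera_world_tf: np.ndarray,
                         tool_zero_tf: np.ndarray) -> np.ndarray:
    """Return T_base such that T_base @ T_zero == T_camera (4x4 homogeneous)."""
    return camera_world_tf @ np.linalg.inv(tool_zero_tf)

# Hypothetical example: tool at zero config 0.1 m along +z of the base frame.
T_zero = np.eye(4); T_zero[2, 3] = 0.1
T_cam = np.eye(4); T_cam[:3, 3] = [0.2, 0.0, 0.5]
T_base = solve_base_transform(T_cam, T_zero)
```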

src.utils.visualization.camera.create_camera_plotter(n_cameras)[source]

Create a PyVista plotter configured for mono or stereo camera display.

Parameters:

n_cameras (int) – 1 for monocular or 2 for stereo side-by-side layout.

Return type:

tuple[Plotter, bool]

Returns:

(plotter, is_stereo) — the configured plotter and a bool that is True when stereo layout was created.

src.utils.visualization.camera.find_joint_indices(ecm)[source]

Find indices of yaw, pitch, and insertion joints in the ECM’s active joint list.

Parameters:

ecm (ECM) – Instantiated ECM model.

Return type:

tuple[int | None, int | None, int | None]

Returns:

Tuple (yaw_idx, pitch_idx, insertion_idx) where each element is the integer index into ecm.active_joint_names, or None if the joint is not present.

src.utils.visualization.camera.register_keys(plotter, ecm, ctrl, optical_tilt_deg, n_cameras, vfov_deg, camera_roll_deg=0.0)[source]

Register keyboard callbacks for interactive ECM camera control.

Key bindings (from settings.py step sizes):

  • Left / Right — yaw joint ± ctrl.yaw_step_rad

  • Up / Down — pitch joint ± ctrl.pitch_step_rad

  • PageUp / PageDown — insertion joint ± ctrl.insertion_step_m

Each key press clamps joint angles, calls update_cameras(), and triggers a plotter render.

Parameters:
  • plotter (Plotter) – PyVista plotter to bind keys on.

  • ecm (ECM) – ECM kinematic model (used for clamping and FK).

  • ctrl (EcmControl) – Mutable EcmControl updated in-place by callbacks.

  • optical_tilt_deg (float) – Scope tilt in degrees forwarded to update_cameras().

  • n_cameras (int) – 1 or 2 forwarded to update_cameras().

  • vfov_deg (float) – Vertical FOV in degrees forwarded to update_cameras().

  • camera_roll_deg (float) – Roll offset forwarded to update_cameras().

Return type:

None

src.utils.visualization.camera.update_cameras(plotter, ecm, ctrl, optical_tilt_deg, n_cameras, vfov_deg, camera_roll_deg=0.0)[source]

Recompute camera poses from the current ECM state and apply them to the plotter.

Supports mono (n_cameras=1) and stereo (n_cameras=2) layouts. An optional roll correction rotates the up-vector about the optical axis:

\[\mathbf{u}' = \cos\phi\,\mathbf{u} + \sin\phi\,(\hat{\mathbf{n}} \times \mathbf{u}) + (1-\cos\phi)(\hat{\mathbf{n}} \cdot \mathbf{u})\hat{\mathbf{n}}\]

where \(\hat{\mathbf{n}}\) is the unit optical axis and \(\phi\) is camera_roll_deg converted to radians (Rodrigues' rotation formula).

Parameters:
  • plotter (Plotter) – PyVista plotter to update.

  • ecm (ECM) – ECM kinematic model.

  • ctrl (EcmControl) – Current EcmControl joint state.

  • optical_tilt_deg (float) – Scope tilt angle in degrees (0 or 30).

  • n_cameras (int) – 1 (mono) or 2 (stereo).

  • vfov_deg (float) – Vertical field of view in degrees.

  • camera_roll_deg (float) – Roll offset applied to the up-vector in degrees.

Return type:

None
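The up-vector roll is a direct application of Rodrigues' rotation formula. A self-contained sketch (not the library function itself):

```python
import numpy as np

def roll_up_vector(u: np.ndarray, n: np.ndarray, roll_deg: float) -> np.ndarray:
    """Rotate up-vector u about the unit optical axis n by roll_deg degrees."""
    phi = np.radians(roll_deg)
    n = n / np.linalg.norm(n)  # guard: normalise the optical axis
    return (np.cos(phi) * u
            + np.sin(phi) * np.cross(n, u)
            + (1.0 - np.cos(phi)) * np.dot(n, u) * n)

# Rolling the +y up-vector by 90 degrees about the +z optical axis yields -x.
u_new = roll_up_vector(np.array([0.0, 1.0, 0.0]),
                       np.array([0.0, 0.0, 1.0]), 90.0)
```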

src.utils.visualization.helpers module

src.utils.visualization.helpers.add_world_floor_and_object(plotter, object_type='cube', center=(0.0, 0.0, 0.035), color='tomato', cube_size=(0.05, 0.04, 0.06), sphere_radius=0.02, floor_size=(0.35, 0.35), floor_color='lightgray', floor_opacity=0.9)[source]

Add a floor plane and one target object to a PyVista plotter.

The floor lies in the XY plane at z = 0. The target object is either a cube or a sphere centred at center. World-frame XYZ axes are also drawn.

Parameters:
  • plotter (Plotter) – PyVista plotter to add meshes to.

  • object_type (str) – "cube" or "sphere".

  • center (tuple[float, float, float]) – World-frame centre of the target object (x, y, z) in metres.

  • color (str) – Colour of the target object.

  • cube_size (tuple[float, float, float]) – (x_length, y_length, z_length) for cube objects.

  • sphere_radius (float) – Radius in metres for sphere objects.

  • floor_size (tuple[float, float]) – (i_size, j_size) of the XY floor plane in metres.

  • floor_color (str) – Colour of the floor plane.

  • floor_opacity (float) – Opacity [0, 1] of the floor plane.

Return type:

None

src.utils.visualization.helpers.apply_camera_pose(plotter, pose, vfov_deg, near, far)[source]

Apply a camera pose to a PyVista plotter.

Sets position, focal point, up vector, view angle, and clipping range from the fields of pose, which must expose: position, focal_point, and viewup attributes.

Parameters:
  • plotter (Plotter) – PyVista plotter whose camera will be configured.

  • pose (Any) – Camera pose object with position, focal_point, and viewup array-like attributes.

  • vfov_deg (float) – Vertical field of view in degrees (camera view angle).

  • near (float) – Near clipping plane distance in metres.

  • far (float) – Far clipping plane distance in metres.

Return type:

None

src.utils.visualization.helpers.hfov_to_vfov_deg(hfov_deg, width_px, height_px)[source]

Convert a horizontal field-of-view angle to a vertical field-of-view angle.

Uses the pinhole camera relationship:

\[\text{VFOV} = 2 \arctan\!\left(\frac{\tan(\text{HFOV}/2)}{w/h}\right)\]
Parameters:
  • hfov_deg (float) – Horizontal field of view in degrees.

  • width_px (int) – Image width in pixels.

  • height_px (int) – Image height in pixels.

Return type:

float

Returns:

Vertical field of view in degrees.
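The conversion is pure trigonometry, so it can be sketched with the standard library alone. For a 90° horizontal FOV on a 16:9 image, the vertical FOV comes out just under 59°:

```python
from math import atan, degrees, radians, tan

def hfov_to_vfov_deg(hfov_deg: float, width_px: int, height_px: int) -> float:
    """Pinhole relation: VFOV = 2 * atan(tan(HFOV / 2) / (w / h))."""
    aspect = width_px / height_px
    return degrees(2.0 * atan(tan(radians(hfov_deg) / 2.0) / aspect))

vfov = hfov_to_vfov_deg(90.0, 1920, 1080)  # 16:9 sensor
```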

src.utils.visualization.robot_points module

src.utils.visualization.robot_points.create_point_poly(point)[source]

Create a single-point pyvista.PolyData marker.

Parameters:

point (ndarray) – 3-D point coordinates, shape (3,).

Return type:

PolyData

Returns:

pyvista.PolyData containing exactly one point.

src.utils.visualization.robot_points.tool_position_world(robot, theta, base_transform=None)[source]

Compute tool-tip position in world coordinates.

Runs forward kinematics up to robot.tool_link and applies the optional base transform to produce a 3-vector in world frame.

Parameters:
  • robot (RobotTreeLike) – Robot model exposing tool_link and joint structure.

  • theta (ndarray) – Joint configuration, shape (dof,).

  • base_transform (Optional[ndarray]) – Optional (4, 4) world-from-base transform. Defaults to identity.

Return type:

ndarray

Returns:

Position vector \(\mathbf{p} \in \mathbb{R}^3\).

src.utils.visualization.robot_points.update_point_poly(poly, point)[source]

Update an existing single-point pyvista.PolyData marker in-place.

Replaces poly.points and calls poly.Modified() so that the PyVista renderer picks up the change without re-adding the actor.

Parameters:
  • poly (PolyData) – Marker previously created by create_point_poly().

  • point (ndarray) – New 3-D point coordinates, shape (3,).

Return type:

None

src.utils.visualization.teleop_common module

class src.utils.visualization.teleop_common.ClutchEvent(value)[source]

Bases: str, Enum

Edge-triggered clutch event detected on a single control loop tick.

NONE

No state transition occurred; clutch button state is unchanged.

PRESSED

Rising edge — the clutch button was just pressed this tick.

RELEASED

Falling edge — the clutch button was just released this tick.

NONE = 'none'
PRESSED = 'pressed'
RELEASED = 'released'
src.utils.visualization.teleop_common.build_dual_phantom_base_transforms(phantom_dual_y_distance)[source]

Return left/right Phantom base transforms for a symmetric dual-device setup.

Both devices are placed symmetrically about the world origin along the Y axis:

\[T_{\text{left}} = \begin{bmatrix} I_{3} & [0,\ -d/2,\ 0]^\top \\ 0 & 1 \end{bmatrix}, \qquad T_{\text{right}} = \begin{bmatrix} I_{3} & [0,\ +d/2,\ 0]^\top \\ 0 & 1 \end{bmatrix}\]

where \(d\) is phantom_dual_y_distance.

Parameters:

phantom_dual_y_distance (float) – Total Y separation (metres) between the two Phantom device bases.

Return type:

tuple[ndarray, ndarray]

Returns:

(T_left, T_right) — each a 4×4 homogeneous transform in world frame.
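Because the rotation blocks are identity, the whole construction is two translated identity transforms. A sketch consistent with the formulas above:

```python
import numpy as np

def build_dual_phantom_base_transforms(d: float):
    """Identity-rotation device bases offset by ±d/2 along world Y."""
    T_left, T_right = np.eye(4), np.eye(4)
    T_left[1, 3] = -d / 2.0
    T_right[1, 3] = +d / 2.0
    return T_left, T_right

T_left, T_right = build_dual_phantom_base_transforms(0.4)
```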

src.utils.visualization.teleop_common.build_dual_psm_base_transforms(psm_base_x, psm_base_y_distance, psm_base_z, psm_base_x_rotation_split_deg)[source]

Return left/right PSM base transforms for a symmetric dual-arm setup.

Each PSM is rotated about the X axis by ±half of psm_base_x_rotation_split_deg and then about the Z axis by \(-\pi/2\) to align the instrument shaft downward into the surgical field:

\[\begin{split}R_{\text{left}} &= R_x(+\alpha/2)\, R_z(-\pi/2) \\ R_{\text{right}} &= R_x(-\alpha/2)\, R_z(-\pi/2)\end{split}\]

where \(\alpha = \text{psm\_base\_x\_rotation\_split\_deg}\) in radians.

The translation is:

\[t_{\text{left}} = [x,\ -d/2,\ z]^\top, \quad t_{\text{right}} = [x,\ +d/2,\ z]^\top\]

where \(d\) is psm_base_y_distance.

Parameters:
  • psm_base_x (float) – X offset of both PSM bases in the world frame (metres).

  • psm_base_y_distance (float) – Total Y separation (metres) between the two PSM bases.

  • psm_base_z (float) – Z offset of both PSM bases in the world frame (metres).

  • psm_base_x_rotation_split_deg (float) – Total X-axis rotation spread (degrees) shared symmetrically between the two arms.

Return type:

tuple[ndarray, ndarray]

Returns:

(T_left, T_right) — each a 4×4 homogeneous base transform in world frame.
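A sketch of the rotation composition and translations above, with small elementary-rotation helpers (this mirrors the documented formulas, not the library internals):

```python
import numpy as np

def rot_x(a: float) -> np.ndarray:
    """Rotation about X by angle a (rad)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_z(a: float) -> np.ndarray:
    """Rotation about Z by angle a (rad)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def dual_psm_bases(x: float, d: float, z: float, split_deg: float):
    alpha = np.radians(split_deg)
    T_left, T_right = np.eye(4), np.eye(4)
    # R = Rx(±alpha/2) @ Rz(-pi/2): split the X tilt, point the shaft down.
    T_left[:3, :3] = rot_x(+alpha / 2) @ rot_z(-np.pi / 2)
    T_right[:3, :3] = rot_x(-alpha / 2) @ rot_z(-np.pi / 2)
    T_left[:3, 3] = [x, -d / 2, z]
    T_right[:3, 3] = [x, +d / 2, z]
    return T_left, T_right

T_l, T_r = dual_psm_bases(0.1, 0.3, 0.2, 30.0)
```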

src.utils.visualization.teleop_common.camera_pose_from_psm_tools(p_left_world, p_right_world, camera_z_m)[source]

Compute an ECM-style camera pose from the PSM tool home positions.

The camera is placed at the midpoint of the two PSM home positions projected onto the XY plane, at a fixed height camera_z_m:

\[p_{\text{cam}} = \left[\, \frac{x_L + x_R}{2},\; \frac{y_L + y_R}{2},\; z_{\text{cam}} \,\right]^\top\]

The camera orientation is set so that the image plane faces downward (i.e., the Z axis of the camera frame points toward the surgical field):

\[\begin{split}R_{\text{cam}} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{bmatrix}\end{split}\]
Parameters:
  • p_left_world (ndarray) – 3-vector (x, y, z) of the left PSM home position in world frame.

  • p_right_world (ndarray) – 3-vector (x, y, z) of the right PSM home position in world frame.

  • camera_z_m (float) – Fixed camera height above the world origin (metres).

Return type:

ndarray

Returns:

4×4 homogeneous camera pose \(T_{\text{cam}}^{\text{world}}\).
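A sketch following the midpoint and downward-facing rotation given above:

```python
import numpy as np

def camera_pose_from_psm_tools(p_left, p_right, camera_z_m: float) -> np.ndarray:
    """Camera at the XY midpoint of the two tool homes, image plane facing down."""
    T = np.eye(4)
    T[:3, :3] = np.diag([1.0, -1.0, -1.0])  # camera z-axis points at the field
    T[:3, 3] = [(p_left[0] + p_right[0]) / 2.0,
                (p_left[1] + p_right[1]) / 2.0,
                camera_z_m]
    return T

T_cam = camera_pose_from_psm_tools(np.array([0.1, -0.05, 0.0]),
                                   np.array([0.1, +0.05, 0.0]), 0.3)
```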

src.utils.visualization.teleop_common.compute_desired_pose_world(*, clutch_active, desired_hold_world, phantom_tool_world, phantom_home_world, psm_base_world, psm_home_local, teleoperation_gain)[source]

Compute the desired PSM tool pose in world frame (full-pose teleoperation).

When the clutch is active the robot holds its last commanded pose:

\[T_{\text{desired}} = T_{\text{hold}}\]

When the clutch is inactive the desired pose is computed in two parts:

Translate — scaled displacement from the Phantom home position:

\[\begin{split}T_{\text{PSM home}} &= T_{\text{PSM base}}^{\text{world}} \cdot T_{\text{PSM home}}^{\text{local}} \\ p_{\text{desired}} &= p_{\text{PSM home}} + k\,\bigl(p_{\text{Phantom}} - p_{\text{Phantom home}}\bigr)\end{split}\]

where \(k\) is teleoperation_gain and all \(p\) terms are the translation columns of their respective 4×4 transforms.

Rotate — relative rotation from Phantom home to current Phantom pose, applied to the PSM home orientation:

\[\begin{split}R_{\Delta} &= R_{\text{Phantom home}}^\top \cdot R_{\text{Phantom}} \\ R_{\text{desired}} &= R_{\text{PSM home}} \cdot R_{\Delta}\end{split}\]

The desired pose is then assembled as:

\[\begin{split}T_{\text{desired}} = \begin{bmatrix} R_{\text{desired}} & p_{\text{desired}} \\ 0 & 1 \end{bmatrix}\end{split}\]
Parameters:
  • clutch_active (bool) – When True the robot holds desired_hold_world.

  • desired_hold_world (ndarray) – 4×4 transform — pose to hold while clutch is active.

  • phantom_tool_world (ndarray) – 4×4 homogeneous transform of the Phantom stylus tip in world frame (current).

  • phantom_home_world (ndarray) – 4×4 homogeneous transform of the Phantom stylus tip in world frame at teleoperation start.

  • psm_base_world (ndarray) – 4×4 homogeneous transform of the PSM base in world frame (\(T_{\text{PSM base}}^{\text{world}}\)).

  • psm_home_local (ndarray) – 4×4 homogeneous transform of the PSM tool tip in the PSM base frame at teleoperation start (\(T_{\text{PSM home}}^{\text{local}}\)).

  • teleoperation_gain (float) – Scalar gain \(k\) mapping Phantom workspace displacements to PSM workspace displacements.

Return type:

ndarray

Returns:

4×4 homogeneous transform representing the desired PSM tool pose in world frame.
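The translate/rotate decomposition above maps directly onto a few lines of numpy. A sketch consistent with the documented formulas (the real implementation may differ in detail):

```python
import numpy as np

def compute_desired_pose_world(*, clutch_active, desired_hold_world,
                               phantom_tool_world, phantom_home_world,
                               psm_base_world, psm_home_local,
                               teleoperation_gain):
    """Full-pose teleop mapping; all pose arguments are 4x4 homogeneous."""
    if clutch_active:
        return desired_hold_world
    psm_home_world = psm_base_world @ psm_home_local
    # Translate: scaled Phantom displacement added to the PSM home position.
    p = psm_home_world[:3, 3] + teleoperation_gain * (
        phantom_tool_world[:3, 3] - phantom_home_world[:3, 3])
    # Rotate: relative Phantom rotation applied to the PSM home orientation.
    R_delta = phantom_home_world[:3, :3].T @ phantom_tool_world[:3, :3]
    T = np.eye(4)
    T[:3, :3] = psm_home_world[:3, :3] @ R_delta
    T[:3, 3] = p
    return T

# With all home poses at identity, a 1 cm Phantom move scaled by gain 3
# commands a 3 cm PSM move with unchanged orientation.
phantom_now = np.eye(4); phantom_now[0, 3] = 0.01
T_des = compute_desired_pose_world(
    clutch_active=False, desired_hold_world=np.eye(4),
    phantom_tool_world=phantom_now, phantom_home_world=np.eye(4),
    psm_base_world=np.eye(4), psm_home_local=np.eye(4),
    teleoperation_gain=3.0)
```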

src.utils.visualization.teleop_common.compute_desired_position_world(*, clutch_active, desired_hold_world, phantom_tool_world, phantom_home_world, psm_home_world, teleoperation_gain)[source]

Compute the desired PSM tool position in world frame (position-only teleoperation).

When the clutch is active the robot holds its last commanded position:

\[p_{\text{desired}} = p_{\text{hold}}\]

When the clutch is inactive a scaled displacement from the Phantom home position is added to the PSM home position:

\[p_{\text{desired}} = p_{\text{PSM home}} + k \,(p_{\text{Phantom}} - p_{\text{Phantom home}})\]

where \(k\) is teleoperation_gain.

Parameters:
  • clutch_active (bool) – When True the robot holds desired_hold_world.

  • desired_hold_world (ndarray) – 3-vector — position to hold while clutch is active.

  • phantom_tool_world (ndarray) – 3-vector — current Phantom stylus tip position in world frame.

  • phantom_home_world (ndarray) – 3-vector — Phantom stylus position at the moment teleoperation began (reference frame origin).

  • psm_home_world (ndarray) – 3-vector — PSM tool tip position in world frame at the moment teleoperation began.

  • teleoperation_gain (float) – Scalar gain \(k\) mapping Phantom workspace displacements to PSM workspace displacements.

Return type:

ndarray

Returns:

3-vector representing the desired PSM tool position in world frame.
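The position-only variant reduces to one scaled-displacement line plus the clutch hold. A sketch consistent with the formula above:

```python
import numpy as np

def compute_desired_position_world(*, clutch_active, desired_hold_world,
                                   phantom_tool_world, phantom_home_world,
                                   psm_home_world, teleoperation_gain):
    """Position-only teleop mapping; all position arguments are 3-vectors."""
    if clutch_active:
        return desired_hold_world
    return psm_home_world + teleoperation_gain * (
        phantom_tool_world - phantom_home_world)

# A 2 cm Phantom move at gain 3 commands a 6 cm offset from the PSM home.
p = compute_desired_position_world(
    clutch_active=False,
    desired_hold_world=np.zeros(3),
    phantom_tool_world=np.array([0.02, 0.0, 0.0]),
    phantom_home_world=np.zeros(3),
    psm_home_world=np.array([0.1, 0.0, 0.05]),
    teleoperation_gain=3.0)
```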

src.utils.visualization.teleop_common.set_robot_mesh_color(robots_by_name, robot_name, color)[source]

Apply a uniform color to every mesh actor belonging to one robot.

Iterates over all mesh items stored in the robot’s visualization state and sets their PyVista actor property color. A no-op if robot_name is not present in robots_by_name.

Parameters:
  • robots_by_name (Mapping[str, Any]) – Mapping from robot name to its visualization state object (e.g., DvrkRealtimeViz._robots).

  • robot_name (str) – Key identifying the target robot (e.g., "left").

  • color (str) – Color string accepted by PyVista / VTK (e.g., "red", "#FF0000").

Return type:

None

src.utils.visualization.teleop_common.update_clutch_state(clutch_pressed, prev_clutch_pressed, clutch_active)[source]

Detect rising/falling edges of the clutch button and update clutch state.

Edge detection is performed on the transition between the previous and current button readings:

  • Rising edge (clutch_pressed and not prev_clutch_pressed): clutch becomes active; event is ClutchEvent.PRESSED.

  • Falling edge (not clutch_pressed and prev_clutch_pressed): clutch becomes inactive; event is ClutchEvent.RELEASED.

  • No edge: clutch active state is unchanged; event is ClutchEvent.NONE.

Parameters:
  • clutch_pressed (bool) – Current (this-tick) state of the clutch button.

  • prev_clutch_pressed (bool) – State of the clutch button on the previous tick.

  • clutch_active (bool) – Current clutch active flag before this update.

Return type:

tuple[bool, bool, ClutchEvent]

Returns:

(new_clutch_active, new_prev_clutch_pressed, clutch_event)

  • new_clutch_active — updated clutch active flag.

  • new_prev_clutch_pressed — value to store as prev_clutch_pressed for the next call (equals clutch_pressed).

  • clutch_event — the edge event that occurred this tick.
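The edge-detection logic above can be sketched in a few lines; the `ClutchEvent` values mirror the enum documented earlier in this module:

```python
from enum import Enum

class ClutchEvent(str, Enum):
    NONE = "none"
    PRESSED = "pressed"
    RELEASED = "released"

def update_clutch_state(clutch_pressed, prev_clutch_pressed, clutch_active):
    """Return (new_clutch_active, new_prev_clutch_pressed, clutch_event)."""
    if clutch_pressed and not prev_clutch_pressed:    # rising edge
        return True, clutch_pressed, ClutchEvent.PRESSED
    if not clutch_pressed and prev_clutch_pressed:    # falling edge
        return False, clutch_pressed, ClutchEvent.RELEASED
    return clutch_active, clutch_pressed, ClutchEvent.NONE  # no edge

active, prev, event = update_clutch_state(True, False, False)
```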

src.utils.visualization.teleop_common.update_jaw_command(jaw_cmd, gripper_pressed, close_speed, open_speed, jaw_min, jaw_max)[source]

Integrate jaw angle command from the gripper button and clamp to joint limits.

On each control tick the jaw command is incremented or decremented based on the gripper button state, then clamped to [jaw_min, jaw_max]:

\[\begin{split}q_{\text{jaw}}^{\text{new}} = \text{clip}\!\left( q_{\text{jaw}} + \begin{cases} -s_{\text{close}} & \text{if gripper pressed} \\ +s_{\text{open}} & \text{otherwise} \end{cases}, \; q_{\min}, q_{\max} \right)\end{split}\]
Parameters:
  • jaw_cmd (float) – Current jaw joint angle command (radians).

  • gripper_pressed (bool) – True when the operator is squeezing the gripper button (close command).

  • close_speed (float) – Angular rate (rad / tick) at which the jaw closes.

  • open_speed (float) – Angular rate (rad / tick) at which the jaw opens.

  • jaw_min (float) – Minimum allowed jaw angle (radians).

  • jaw_max (float) – Maximum allowed jaw angle (radians).

Return type:

float

Returns:

Updated jaw joint angle command clamped to [jaw_min, jaw_max].
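The integrate-and-clamp update above is one expression. A sketch consistent with the documented formula:

```python
def update_jaw_command(jaw_cmd, gripper_pressed, close_speed, open_speed,
                       jaw_min, jaw_max):
    """One control tick: step the jaw command, then clip to joint limits."""
    step = -close_speed if gripper_pressed else +open_speed
    return min(max(jaw_cmd + step, jaw_min), jaw_max)

# Squeezing the gripper closes the jaw by close_speed this tick...
cmd = update_jaw_command(0.5, True, 0.05, 0.02, 0.0, 1.0)
# ...and the result never leaves [jaw_min, jaw_max].
cmd_clamped = update_jaw_command(0.99, False, 0.05, 0.02, 0.0, 1.0)
```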

src.utils.visualization.viewers module

class src.utils.visualization.viewers.DvrkRealtimeViz(title='dVRK Real-Time Visualization', window_size=(1400, 900), background='white', show_frames=True, alpha=0.6, frame_scale=0.03, marker_radius=0.004)[source]

Bases: object

Interactive PyVista-based real-time visualiser for dVRK robots.

Manages a single pyvista.Plotter window and supports adding multiple robots by name, updating their joint configurations in real time, and overlaying joint-frame axes, waypoint sets, and FK markers.

Example usage:

viz = DvrkRealtimeViz(show_frames=True)
viz.add_robot("psm", psm_robot, theta=theta0)
# in a callback:
viz.update_robot("psm", theta_new)
viz.show()
add_canvas(center=None, width=0.72, height=0.35, color='white', opacity=0.9, show_border=True, border_color='black', border_width=3, border_offset=0.001)[source]

Add a flat rectangular canvas mesh to the plotter (for 2-D projections).

The canvas lies in the YZ plane (X = center[0]). An optional border polyline is drawn slightly in front (offset along X).

Parameters:
  • center (Optional[ndarray]) – Centre of the canvas (x, y, z) in metres. Defaults to a position suitable for the standard dVRK setup.

  • width (float) – Canvas width (Y direction) in metres.

  • height (float) – Canvas height (Z direction) in metres.

  • color (str) – Fill colour of the canvas mesh.

  • opacity (float) – Opacity [0, 1] of the canvas.

  • show_border (bool) – Whether to draw a border polyline.

  • border_color (str) – Colour of the border polyline.

  • border_width (int) – Line width of the border in pixels.

  • border_offset (float) – Offset of the border along X to avoid Z-fighting.

Returns:

(canvas, border) — the canvas pyvista.PolyData and the border pyvista.PolyData (or None if show_border is False).

add_canvas_waypoints(name, waypoints_xyz, color='blue', point_size=15, alpha=0.8, label_color='white', label_size=15, label_z_offset=0.0)[source]

Add a named set of labelled waypoint markers to the plotter.

Waypoints are rendered as spheres with sequential numeric labels 1, 2, …, N. Individual points can be hidden with hide_canvas_waypoint() and restored with reset_canvas_waypoints().

Parameters:
  • name (str) – Unique identifier for this waypoint set.

  • waypoints_xyz (ndarray) – Waypoint positions, shape (N, 3).

  • color (str) – Sphere colour.

  • point_size (int) – Rendered sphere size in pixels.

  • alpha (float) – Opacity [0, 1].

  • label_color (str) – Text colour for the numeric labels.

  • label_size (int) – Font size of the labels.

  • label_z_offset (float) – Additional Z offset applied to label positions.

Returns:

The newly created _WaypointSet.

Raises:

ValueError – If name already exists or waypoints_xyz is not (N, 3).

add_fk_marker(size=10, color='blue')[source]

Add a single movable point marker to the plotter and return its PolyData.

The returned pyvista.PolyData can be updated in-place to move the marker without re-adding it to the scene.

Parameters:
  • size (int) – Rendered point size in pixels.

  • color (str) – Marker colour.

Return type:

PolyData

Returns:

Single-point pyvista.PolyData (initially at the origin).

add_polyline_mesh(points)[source]

Create a connected polyline PolyData from an ordered array of points.

Parameters:

points (ndarray) – Point array, shape (N, 3).

Return type:

PolyData

Returns:

pyvista.PolyData with N points and N-1 line cells (or just points if N < 2).

add_robot(name, robot, theta=None, base_transform=None, color=None)[source]

Load and add a robot’s meshes (and optionally joint frames) to the plotter.

Parameters:
  • name (str) – Unique robot identifier used for subsequent update_robot() calls.

  • robot (VisualRobotLike) – Robot model conforming to VisualRobotLike.

  • theta (Optional[ndarray]) – Initial joint configuration, shape (dof,). Defaults to the robot’s current configuration.

  • base_transform (Optional[ndarray]) – Optional (4, 4) world-from-base transform. Defaults to identity.

  • color (Optional[str]) – Solid colour for all meshes, or None to use mesh file colours.

Raises:

ValueError – If name is already registered in this viewer.

Return type:

None

add_waypoint_highlight_marker(size=28, color='yellow')[source]

Add a large point marker used to highlight an active waypoint.

A convenience wrapper around add_fk_marker() with defaults suitable for highlighting a selected waypoint.

Parameters:
  • size (int) – Rendered point size in pixels.

  • color (str) – Marker colour.

Return type:

PolyData

Returns:

Single-point pyvista.PolyData (initially at the origin).

hide_canvas_waypoint(name, index)[source]

Hide a single waypoint from a named set.

Removes the point from the rendered poly and hides its label actor. Has no effect if the waypoint is already hidden.

Parameters:
  • name (str) – Waypoint set identifier.

  • index (int) – 0-based index of the waypoint to hide.

Raises:
  • KeyError – If name is not registered.

  • IndexError – If index is out of range.

Return type:

None

make_line()[source]

Create a two-point degenerate line segment PolyData suitable for in-place updates.

Both endpoints start at the origin. Update poly.points to reposition the line without re-adding it to the scene.

Return type:

PolyData

Returns:

pyvista.PolyData with two coincident points and one line cell.

reset_camera()[source]

Reset the camera to fit all visible scene contents and re-render.

Return type:

None

reset_canvas_waypoints(name)[source]

Restore all waypoints in a named set to their original positions/visibility.

Parameters:

name (str) – Waypoint set identifier.

Raises:

KeyError – If name is not registered.

Return type:

None

set_camera(position=(1.0, 1.0, 1.0), focal_point=(0.0, 0.0, 0.0), viewup=(0.0, 0.0, 1.0))[source]

Set the camera position, focal point, and up-vector.

Parameters:
  • position – Camera position in world coordinates (x, y, z).

  • focal_point – Point the camera looks at (x, y, z).

  • viewup – Camera up-vector (x, y, z).

Return type:

None

show()[source]

Open the PyVista window and block until it is closed.

Return type:

None

update_robot(name, theta, base_transform=None)[source]

Update mesh positions (and joint frames) for a previously added robot.

Transforms mesh vertices for the new configuration in-place and calls poly.Modified() so PyVista picks up the change without re-adding actors.

Parameters:
  • name (str) – Robot identifier previously passed to add_robot().

  • theta (ndarray) – New joint configuration, shape (dof,).

  • base_transform (Optional[ndarray]) – Optional (4, 4) world-from-base transform. Defaults to the identity matrix.

Raises:
  • KeyError – If name is not a registered robot.

  • ValueError – If theta.size != robot.dof.

Return type:

None

update_waypoint_highlight(marker, point)[source]

Move or hide the waypoint highlight marker.

Parameters:
  • marker (PolyData) – Marker previously returned by add_waypoint_highlight_marker().

  • point (ndarray | None) – New 3-D position, or None to hide the marker.

Return type:

None

class src.utils.visualization.viewers.PlotterRobotMeshState(robot, poly_by_key, base_vertices_by_key)[source]

Bases: object

base_vertices_by_key: Dict[str, ndarray]
poly_by_key: Dict[str, pyvista.PolyData]
robot: VisualRobotLike
src.utils.visualization.viewers.add_robot_meshes_to_plotter(plotter, name, robot, theta=None, base_transform=None, color=None, alpha=1.0)[source]

Add only robot meshes to a plain PyVista plotter and return update state.

Return type:

PlotterRobotMeshState

src.utils.visualization.viewers.demo_two_robots(ecm, psm, theta_ecm=None, theta_psm=None)[source]

Demo: visualise an ECM and a PSM together in a single viewer window.

Return type:

None

src.utils.visualization.viewers.set_camera_view(scene, eye, target)[source]

Point a viewer or plotter camera at a target from a given eye position.

Accepts either a DvrkRealtimeViz (calls set_camera()) or a raw pyvista.Plotter. Up-vector is always +Z.

Parameters:
  • scene – DvrkRealtimeViz or pyvista.Plotter.

  • eye – Camera position (x, y, z) in world coordinates.

  • target – Focal point (x, y, z) in world coordinates.

src.utils.visualization.viewers.update_robot_meshes_on_plotter(state, theta, base_transform=None)[source]

Update mesh vertices previously added by add_robot_meshes_to_plotter.

Return type:

None

src.utils.visualization.viewers.visualize(robot, theta=None, show_frames=False, alpha=1.0)[source]

Create a DvrkRealtimeViz pre-loaded with a single robot.

Convenience factory for quick visualisation scripts.

Parameters:
  • robot – Robot model conforming to VisualRobotLike.

  • theta (Optional[ndarray]) – Joint configuration, shape (dof,). Defaults to the robot’s current configuration.

  • show_frames (bool) – Whether to draw joint-frame axis arrows.

  • alpha (float) – Mesh opacity [0, 1].

Return type:

DvrkRealtimeViz

Returns:

DvrkRealtimeViz with the robot added but the window not yet shown. Call DvrkRealtimeViz.show() to open the window.

Module contents

class src.utils.visualization.ClutchEvent(value)[source]

Bases: str, Enum

Edge-triggered clutch event detected on a single control loop tick.

NONE

No state transition occurred; clutch button state is unchanged.

PRESSED

Rising edge — the clutch button was just pressed this tick.

RELEASED

Falling edge — the clutch button was just released this tick.

NONE = 'none'
PRESSED = 'pressed'
RELEASED = 'released'
class src.utils.visualization.DvrkRealtimeViz(title='dVRK Real-Time Visualization', window_size=(1400, 900), background='white', show_frames=True, alpha=0.6, frame_scale=0.03, marker_radius=0.004)[source]

Bases: object

Interactive PyVista-based real-time visualiser for dVRK robots.

Manages a single pyvista.Plotter window and supports adding multiple robots by name, updating their joint configurations in real time, and overlaying joint-frame axes, waypoint sets, and FK markers.

Example usage:

viz = DvrkRealtimeViz(show_frames=True)
viz.add_robot("psm", psm_robot, theta=theta0)
# in a callback:
viz.update_robot("psm", theta_new)
viz.show()
add_canvas(center=None, width=0.72, height=0.35, color='white', opacity=0.9, show_border=True, border_color='black', border_width=3, border_offset=0.001)[source]

Add a flat rectangular canvas mesh to the plotter (for 2-D projections).

The canvas lies in the YZ plane (X = center[0]). An optional border polyline is drawn slightly in front (offset along X).

Parameters:
  • center (Optional[ndarray]) – Centre of the canvas (x, y, z) in metres. Defaults to a position suitable for the standard dVRK setup.

  • width (float) – Canvas width (Y direction) in metres.

  • height (float) – Canvas height (Z direction) in metres.

  • color (str) – Fill colour of the canvas mesh.

  • opacity (float) – Opacity [0, 1] of the canvas.

  • show_border (bool) – Whether to draw a border polyline.

  • border_color (str) – Colour of the border polyline.

  • border_width (int) – Line width of the border in pixels.

  • border_offset (float) – Offset of the border along X to avoid Z-fighting.

Returns:

(canvas, border) — the canvas pyvista.PolyData and the border pyvista.PolyData (or None if show_border is False).

add_canvas_waypoints(name, waypoints_xyz, color='blue', point_size=15, alpha=0.8, label_color='white', label_size=15, label_z_offset=0.0)[source]

Add a named set of labelled waypoint markers to the plotter.

Waypoints are rendered as spheres with sequential numeric labels 1, 2, …, N. Individual points can be hidden with hide_canvas_waypoint() and restored with reset_canvas_waypoints().

Parameters:
  • name (str) – Unique identifier for this waypoint set.

  • waypoints_xyz (ndarray) – Waypoint positions, shape (N, 3).

  • color (str) – Sphere colour.

  • point_size (int) – Rendered sphere size in pixels.

  • alpha (float) – Opacity [0, 1].

  • label_color (str) – Text colour for the numeric labels.

  • label_size (int) – Font size of the labels.

  • label_z_offset (float) – Additional Z offset applied to label positions.

Returns:

The newly created _WaypointSet.

Raises:

ValueError – If name already exists or waypoints_xyz is not (N, 3).

add_fk_marker(size=10, color='blue')[source]

Add a single movable point marker to the plotter and return its PolyData.

The returned pyvista.PolyData can be updated in-place to move the marker without re-adding it to the scene.

Parameters:
  • size (int) – Rendered point size in pixels.

  • color (str) – Marker colour.

Return type:

PolyData

Returns:

Single-point pyvista.PolyData (initially at the origin).

add_polyline_mesh(points)[source]

Create a connected polyline PolyData from an ordered array of points.

Parameters:

points (ndarray) – Point array, shape (N, 3).

Return type:

PolyData

Returns:

pyvista.PolyData with N points and N-1 line cells (or just points if N < 2).

add_robot(name, robot, theta=None, base_transform=None, color=None)[source]

Load and add a robot’s meshes (and optionally joint frames) to the plotter.

Parameters:
  • name (str) – Unique robot identifier used for subsequent update_robot() calls.

  • robot (VisualRobotLike) – Robot model conforming to VisualRobotLike.

  • theta (Optional[ndarray]) – Initial joint configuration, shape (dof,). Defaults to the robot’s current configuration.

  • base_transform (Optional[ndarray]) – Optional (4, 4) world-from-base transform. Defaults to identity.

  • color (Optional[str]) – Solid colour for all meshes, or None to use mesh file colours.

Raises:

ValueError – If name is already registered in this viewer.

Return type:

None

add_waypoint_highlight_marker(size=28, color='yellow')[source]

Add a large point marker used to highlight an active waypoint.

A convenience wrapper around add_fk_marker() with defaults suitable for highlighting a selected waypoint.

Parameters:
  • size (int) – Rendered point size in pixels.

  • color (str) – Marker colour.

Return type:

PolyData

Returns:

Single-point pyvista.PolyData (initially at the origin).

hide_canvas_waypoint(name, index)[source]

Hide a single waypoint from a named set.

Removes the point from the rendered poly and hides its label actor. Has no effect if the waypoint is already hidden.

Parameters:
  • name (str) – Waypoint set identifier.

  • index (int) – 0-based index of the waypoint to hide.

Raises:
  • KeyError – If name is not registered.

  • IndexError – If index is out of range.

Return type:

None

make_line()[source]

Create a two-point degenerate line segment PolyData suitable for in-place updates.

Both endpoints start at the origin. Update poly.points to reposition the line without re-adding it to the scene.

Return type:

PolyData

Returns:

pyvista.PolyData with two coincident points and one line cell.

plotter: Any
reset_camera()[source]

Reset the camera to fit all visible scene contents and re-render.

Return type:

None

reset_canvas_waypoints(name)[source]

Restore all waypoints in a named set to their original positions/visibility.

Parameters:

name (str) – Waypoint set identifier.

Raises:

KeyError – If name is not registered.

Return type:

None

set_camera(position=(1.0, 1.0, 1.0), focal_point=(0.0, 0.0, 0.0), viewup=(0.0, 0.0, 1.0))[source]

Set the camera position, focal point, and up-vector.

Parameters:
  • position – Camera position in world coordinates (x, y, z).

  • focal_point – Point the camera looks at (x, y, z).

  • viewup – Camera up-vector (x, y, z).

Return type:

None

show()[source]

Open the PyVista window and block until it is closed.

Return type:

None

update_robot(name, theta, base_transform=None)[source]

Update mesh positions (and joint frames) for a previously added robot.

Transforms mesh vertices for the new configuration in-place and calls poly.Modified() so PyVista picks up the change without re-adding actors.

Parameters:
  • name (str) – Robot identifier previously passed to add_robot().

  • theta (ndarray) – New joint configuration, shape (dof,).

  • base_transform (Optional[ndarray]) – Optional (4, 4) world-from-base transform. Defaults to the identity matrix.

Raises:
  • KeyError – If name is not a registered robot.

  • ValueError – If theta.size != robot.dof.

Return type:

None

update_waypoint_highlight(marker, point)[source]

Move or hide the waypoint highlight marker.

Parameters:
  • marker (PolyData) – Marker previously returned by add_waypoint_highlight_marker().

  • point (ndarray | None) – New 3-D position, or None to hide the marker.

Return type:

None

class src.utils.visualization.EcmControl(q, yaw_idx, pitch_idx, insertion_idx, yaw_step_rad=np.float64(0.017453292519943295), pitch_step_rad=np.float64(0.017453292519943295), insertion_step_m=0.002)[source]

Bases: object

Mutable joint-space state for interactive ECM camera control.

q

Current joint configuration, shape (dof,).

yaw_idx

Index of the yaw joint in q, or None if absent.

pitch_idx

Index of the pitch joint in q, or None if absent.

insertion_idx

Index of the insertion joint in q, or None if absent.

yaw_step_rad

Angle increment (rad) applied per key press for yaw.

pitch_step_rad

Angle increment (rad) applied per key press for pitch.

insertion_step_m

Translation increment (m) applied per key press for insertion.

insertion_idx: int | None
insertion_step_m: float = 0.002
pitch_idx: int | None
pitch_step_rad: float = np.float64(0.017453292519943295)
q: ndarray
yaw_idx: int | None
yaw_step_rad: float = np.float64(0.017453292519943295)
class src.utils.visualization.PlotterRobotMeshState(robot, poly_by_key, base_vertices_by_key)[source]

Bases: object

base_vertices_by_key: Dict[str, ndarray]
poly_by_key: Dict[str, pyvista.PolyData]
robot: VisualRobotLike
src.utils.visualization.add_robot_meshes_to_plotter(plotter, name, robot, theta=None, base_transform=None, color=None, alpha=1.0)[source]

Add only robot meshes to a plain PyVista plotter and return update state.

Return type:

PlotterRobotMeshState

src.utils.visualization.add_world_floor_and_object(plotter, object_type='cube', center=(0.0, 0.0, 0.035), color='tomato', cube_size=(0.05, 0.04, 0.06), sphere_radius=0.02, floor_size=(0.35, 0.35), floor_color='lightgray', floor_opacity=0.9)[source]

Add a floor plane and one target object to a PyVista plotter.

The floor lies in the XY plane at z = 0. The target object is either a cube or a sphere centred at center. World-frame XYZ axes are also drawn.

Parameters:
  • plotter (Plotter) – PyVista plotter to add meshes to.

  • object_type (str) – "cube" or "sphere".

  • center (tuple[float, float, float]) – World-frame centre of the target object (x, y, z) in metres.

  • color (str) – Colour of the target object.

  • cube_size (tuple[float, float, float]) – (x_length, y_length, z_length) for cube objects.

  • sphere_radius (float) – Radius in metres for sphere objects.

  • floor_size (tuple[float, float]) – (i_size, j_size) of the XY floor plane in metres.

  • floor_color (str) – Colour of the floor plane.

  • floor_opacity (float) – Opacity [0, 1] of the floor plane.

Return type:

None

src.utils.visualization.apply_camera_pose(plotter, pose, vfov_deg, near, far)[source]

Apply a camera pose to a PyVista plotter.

Sets position, focal point, up vector, view angle, and clipping range from pose, which must expose position, focal_point, and viewup attributes.

Parameters:
  • plotter (Plotter) – PyVista plotter whose camera will be configured.

  • pose (Any) – Camera pose object with position, focal_point, and viewup array-like attributes.

  • vfov_deg (float) – Vertical field of view in degrees (camera view angle).

  • near (float) – Near clipping plane distance in metres.

  • far (float) – Far clipping plane distance in metres.

Return type:

None

src.utils.visualization.build_arg_parser()[source]

Return an argument parser for the ECM camera simulation script.

Adds --cameras (1 or 2) and --scope-deg (0 or 30) arguments. Defaults come from the project settings.py.

Return type:

ArgumentParser

Returns:

Configured argparse.ArgumentParser.

src.utils.visualization.build_dual_phantom_base_transforms(phantom_dual_y_distance)[source]

Return left/right Phantom base transforms for a symmetric dual-device setup.

Both devices are placed symmetrically about the world origin along the Y axis:

\[\begin{split}T_{\text{left}} = \begin{bmatrix} I_{3} & [0,\ -d/2,\ 0]^\top \\ 0 & 1 \end{bmatrix}, \quad T_{\text{right}} = \begin{bmatrix} I_{3} & [0,\ +d/2,\ 0]^\top \\ 0 & 1 \end{bmatrix}\end{split}\]

where \(d\) is phantom_dual_y_distance.

Parameters:

phantom_dual_y_distance (float) – Total Y separation (metres) between the two Phantom device bases.

Return type:

tuple[ndarray, ndarray]

Returns:

(T_left, T_right) — each a 4×4 homogeneous transform in world frame.
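The symmetric placement reduces to writing ∓d/2 into the Y-translation of two identity transforms. A minimal NumPy sketch (the helper name is illustrative, not the module's API):

```python
import numpy as np

def dual_phantom_base_transforms(d: float) -> tuple[np.ndarray, np.ndarray]:
    """Left/right bases offset by -d/2 and +d/2 along world Y, identity rotation."""
    T_left, T_right = np.eye(4), np.eye(4)
    T_left[1, 3] = -d / 2.0
    T_right[1, 3] = +d / 2.0
    return T_left, T_right
```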

src.utils.visualization.build_dual_psm_base_transforms(psm_base_x, psm_base_y_distance, psm_base_z, psm_base_x_rotation_split_deg)[source]

Return left/right PSM base transforms for a symmetric dual-arm setup.

Each PSM is rotated about the X axis by ±half of psm_base_x_rotation_split_deg and then about the Z axis by \(-\pi/2\) to align the instrument shaft downward into the surgical field:

\[\begin{split}R_{\text{left}} &= R_x(+\alpha/2)\, R_z(-\pi/2) \\ R_{\text{right}} &= R_x(-\alpha/2)\, R_z(-\pi/2)\end{split}\]

where \(\alpha = \text{psm\_base\_x\_rotation\_split\_deg}\) in radians.

The translation is:

\[t_{\text{left}} = [x,\ -d/2,\ z]^\top, \quad t_{\text{right}} = [x,\ +d/2,\ z]^\top\]

where \(d\) is psm_base_y_distance.

Parameters:
  • psm_base_x (float) – X offset of both PSM bases in the world frame (metres).

  • psm_base_y_distance (float) – Total Y separation (metres) between the two PSM bases.

  • psm_base_z (float) – Z offset of both PSM bases in the world frame (metres).

  • psm_base_x_rotation_split_deg (float) – Total X-axis rotation spread (degrees) shared symmetrically between the two arms.

Return type:

tuple[ndarray, ndarray]

Returns:

(T_left, T_right) — each a 4×4 homogeneous base transform in world frame.
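The composition above can be sketched with elementary rotation matrices (a minimal sketch; the helper names are illustrative, not the module's API):

```python
import numpy as np

def rot_x(a: float) -> np.ndarray:
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a: float) -> np.ndarray:
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def dual_psm_base_transforms(x, d, z, split_deg):
    """R = Rx(±split/2) @ Rz(-pi/2); translation [x, ∓d/2, z] for left/right."""
    half = np.deg2rad(split_deg) / 2.0
    out = []
    for sign in (+1.0, -1.0):  # left uses +half and -d/2, right the opposite
        T = np.eye(4)
        T[:3, :3] = rot_x(sign * half) @ rot_z(-np.pi / 2.0)
        T[:3, 3] = [x, -sign * d / 2.0, z]
        out.append(T)
    return tuple(out)
```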

src.utils.visualization.build_ecm_control_from_camera_pose(camera_world_tf, robot_root, initial_q=None)[source]

Build an ECM model and control state so the camera aligns with a desired pose.

Solves for the base transform \(T_\text{base}\) such that the tool pose at the zero (or initial_q) configuration matches camera_world_tf:

\[T_\text{base} = T_\text{camera} \cdot T_\text{zero}^{-1}\]
Parameters:
  • camera_world_tf (ndarray) – Desired (4, 4) camera pose in world frame.

  • robot_root (str | Path) – Path to the URDF package root passed to ECM.

  • initial_q (ndarray | None) – Optional initial joint configuration, shape (dof,). Defaults to all zeros.

Return type:

tuple[ECM, EcmControl]

Returns:

(ecm, ctrl) — ECM model with base transform set and an EcmControl initialised to initial_q.
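The base-transform solve itself is one line of matrix algebra. A minimal sketch of the relation (the helper name is illustrative; the real function also constructs the ECM model and control state):

```python
import numpy as np

def solve_base_transform(T_camera: np.ndarray, T_zero: np.ndarray) -> np.ndarray:
    """T_base = T_camera @ inv(T_zero), so that T_base @ T_zero == T_camera."""
    return T_camera @ np.linalg.inv(T_zero)
```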

src.utils.visualization.camera_pose_from_psm_tools(p_left_world, p_right_world, camera_z_m)[source]

Compute an ECM-style camera pose from the PSM tool home positions.

The camera is placed at the midpoint of the two PSM home positions projected onto the XY plane, at a fixed height camera_z_m:

\[p_{\text{cam}} = \left[\, \frac{x_L + x_R}{2},\; \frac{y_L + y_R}{2},\; z_{\text{cam}} \,\right]^\top\]

The camera orientation is set so that the image plane faces downward (i.e., the Z axis of the camera frame points toward the surgical field):

\[\begin{split}R_{\text{cam}} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{bmatrix}\end{split}\]
Parameters:
  • p_left_world (ndarray) – 3-vector (x, y, z) of the left PSM home position in world frame.

  • p_right_world (ndarray) – 3-vector (x, y, z) of the right PSM home position in world frame.

  • camera_z_m (float) – Fixed camera height above the world origin (metres).

Return type:

ndarray

Returns:

4×4 homogeneous camera pose \(T_{\text{cam}}^{\text{world}}\).
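Assembling the pose from the midpoint and the fixed downward-facing rotation can be sketched as (a minimal sketch; the helper name is illustrative, not the module's API):

```python
import numpy as np

def camera_pose_from_tools(p_left, p_right, camera_z: float) -> np.ndarray:
    """Midpoint of the tool homes in XY, fixed height, image plane facing down."""
    T = np.eye(4)
    T[:3, :3] = np.diag([1.0, -1.0, -1.0])  # flips Y and Z: camera Z axis points down
    T[:3, 3] = [(p_left[0] + p_right[0]) / 2.0,
                (p_left[1] + p_right[1]) / 2.0,
                camera_z]
    return T
```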

src.utils.visualization.compute_desired_pose_world(*, clutch_active, desired_hold_world, phantom_tool_world, phantom_home_world, psm_base_world, psm_home_local, teleoperation_gain)[source]

Compute the desired PSM tool pose in world frame (full-pose teleoperation).

When the clutch is active the robot holds its last commanded pose:

\[T_{\text{desired}} = T_{\text{hold}}\]

When the clutch is inactive the desired pose is computed in two parts:

Translate — scaled displacement from the Phantom home position:

\[\begin{split}T_{\text{PSM home}} &= T_{\text{PSM base}}^{\text{world}} \cdot T_{\text{PSM home}}^{\text{local}} \\ p_{\text{desired}} &= p_{\text{PSM home}} + k\,\bigl(p_{\text{Phantom}} - p_{\text{Phantom home}}\bigr)\end{split}\]

where \(k\) is teleoperation_gain and all \(p\) terms are the translation columns of their respective 4×4 transforms.

Rotate — relative rotation from Phantom home to current Phantom pose, applied to the PSM home orientation:

\[\begin{split}R_{\Delta} &= R_{\text{Phantom home}}^\top \cdot R_{\text{Phantom}} \\ R_{\text{desired}} &= R_{\text{PSM home}} \cdot R_{\Delta}\end{split}\]

The desired pose is then assembled as:

\[\begin{split}T_{\text{desired}} = \begin{bmatrix} R_{\text{desired}} & p_{\text{desired}} \\ 0 & 1 \end{bmatrix}\end{split}\]
Parameters:
  • clutch_active (bool) – When True the robot holds desired_hold_world.

  • desired_hold_world (ndarray) – 4×4 transform — pose to hold while clutch is active.

  • phantom_tool_world (ndarray) – 4×4 homogeneous transform of the Phantom stylus tip in world frame (current).

  • phantom_home_world (ndarray) – 4×4 homogeneous transform of the Phantom stylus tip in world frame at teleoperation start.

  • psm_base_world (ndarray) – 4×4 homogeneous transform of the PSM base in world frame (\(T_{\text{PSM base}}^{\text{world}}\)).

  • psm_home_local (ndarray) – 4×4 homogeneous transform of the PSM tool tip in the PSM base frame at teleoperation start (\(T_{\text{PSM home}}^{\text{local}}\)).

  • teleoperation_gain (float) – Scalar gain \(k\) mapping Phantom workspace displacements to PSM workspace displacements.

Return type:

ndarray

Returns:

4×4 homogeneous transform representing the desired PSM tool pose in world frame.
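The translate and rotate steps above (clutch-inactive branch) can be sketched directly from the equations (a minimal sketch; the helper name and signature are illustrative, not the module's API):

```python
import numpy as np

def desired_pose(phantom_tool, phantom_home, psm_base, psm_home_local, k):
    """Scaled translation offset plus relative rotation, applied about the PSM home pose."""
    psm_home = psm_base @ psm_home_local                      # PSM home pose in world frame
    p = psm_home[:3, 3] + k * (phantom_tool[:3, 3] - phantom_home[:3, 3])
    R_delta = phantom_home[:3, :3].T @ phantom_tool[:3, :3]   # Phantom home -> current
    T = np.eye(4)
    T[:3, :3] = psm_home[:3, :3] @ R_delta
    T[:3, 3] = p
    return T
```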

src.utils.visualization.compute_desired_position_world(*, clutch_active, desired_hold_world, phantom_tool_world, phantom_home_world, psm_home_world, teleoperation_gain)[source]

Compute the desired PSM tool position in world frame (position-only teleoperation).

When the clutch is active the robot holds its last commanded position:

\[p_{\text{desired}} = p_{\text{hold}}\]

When the clutch is inactive a scaled displacement from the Phantom home position is added to the PSM home position:

\[p_{\text{desired}} = p_{\text{PSM home}} + k \,(p_{\text{Phantom}} - p_{\text{Phantom home}})\]

where \(k\) is teleoperation_gain.

Parameters:
  • clutch_active (bool) – When True the robot holds desired_hold_world.

  • desired_hold_world (ndarray) – 3-vector — position to hold while clutch is active.

  • phantom_tool_world (ndarray) – 3-vector — current Phantom stylus tip position in world frame.

  • phantom_home_world (ndarray) – 3-vector — Phantom stylus position at the moment teleoperation began (reference frame origin).

  • psm_home_world (ndarray) – 3-vector — PSM tool tip position in world frame at the moment teleoperation began.

  • teleoperation_gain (float) – Scalar gain \(k\) mapping Phantom workspace displacements to PSM workspace displacements.

Return type:

ndarray

Returns:

3-vector representing the desired PSM tool position in world frame.
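The position-only mapping reduces to a single scaled-offset expression plus the clutch hold. A minimal sketch (the helper name is illustrative, not the module's API):

```python
import numpy as np

def desired_position(clutch_active, p_hold, p_phantom, p_phantom_home, p_psm_home, k):
    """Hold while clutched; otherwise offset the PSM home by the scaled Phantom displacement."""
    if clutch_active:
        return p_hold
    return p_psm_home + k * (p_phantom - p_phantom_home)
```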

src.utils.visualization.create_camera_plotter(n_cameras)[source]

Create a PyVista plotter configured for mono or stereo camera display.

Parameters:

n_cameras (int) – 1 for monocular or 2 for stereo side-by-side layout.

Return type:

tuple[Plotter, bool]

Returns:

(plotter, is_stereo) — the configured plotter and a bool that is True when stereo layout was created.

src.utils.visualization.create_point_poly(point)[source]

Create a single-point pyvista.PolyData marker.

Parameters:

point (ndarray) – 3-D point coordinates, shape (3,).

Return type:

PolyData

Returns:

pyvista.PolyData containing exactly one point.

src.utils.visualization.demo_two_robots(ecm, psm, theta_ecm=None, theta_psm=None)[source]
Return type:

None

src.utils.visualization.find_joint_indices(ecm)[source]

Find indices of yaw, pitch, and insertion joints in the ECM’s active joint list.

Parameters:

ecm (ECM) – Instantiated ECM model.

Return type:

tuple[int | None, int | None, int | None]

Returns:

Tuple (yaw_idx, pitch_idx, insertion_idx) where each element is the integer index into ecm.active_joint_names, or None if the joint is not present.

src.utils.visualization.hfov_to_vfov_deg(hfov_deg, width_px, height_px)[source]

Convert a horizontal field-of-view angle to a vertical field-of-view angle.

Uses the pinhole camera relationship:

\[\text{VFOV} = 2 \arctan\!\left(\frac{\tan(\text{HFOV}/2)}{w/h}\right)\]
Parameters:
  • hfov_deg (float) – Horizontal field of view in degrees.

  • width_px (int) – Image width in pixels.

  • height_px (int) – Image height in pixels.

Return type:

float

Returns:

Vertical field of view in degrees.
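The pinhole relationship translates directly to code (a minimal sketch of the formula above):

```python
import math

def hfov_to_vfov_deg(hfov_deg: float, width_px: int, height_px: int) -> float:
    """VFOV = 2 * atan(tan(HFOV/2) / aspect), with aspect = width / height."""
    half_hfov = math.radians(hfov_deg) / 2.0
    aspect = width_px / height_px
    return math.degrees(2.0 * math.atan(math.tan(half_hfov) / aspect))
```

For a square image the two angles coincide; for a landscape image the vertical angle is the smaller of the two.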

src.utils.visualization.register_keys(plotter, ecm, ctrl, optical_tilt_deg, n_cameras, vfov_deg, camera_roll_deg=0.0)[source]

Register keyboard callbacks for interactive ECM camera control.

Key bindings (from settings.py step sizes):

  • Left / Right — yaw joint ± ctrl.yaw_step_rad

  • Up / Down — pitch joint ± ctrl.pitch_step_rad

  • PageUp / PageDown — insertion joint ± ctrl.insertion_step_m

Each key press clamps joint angles, calls update_cameras(), and triggers a plotter render.

Parameters:
  • plotter (Plotter) – PyVista plotter to bind keys on.

  • ecm (ECM) – ECM kinematic model (used for clamping and FK).

  • ctrl (EcmControl) – Mutable EcmControl updated in-place by callbacks.

  • optical_tilt_deg (float) – Scope tilt in degrees forwarded to update_cameras().

  • n_cameras (int) – 1 or 2 forwarded to update_cameras().

  • vfov_deg (float) – Vertical FOV in degrees forwarded to update_cameras().

  • camera_roll_deg (float) – Roll offset forwarded to update_cameras().

Return type:

None

src.utils.visualization.set_camera_view(scene, eye, target)[source]

Point a viewer or plotter camera at a target from a given eye position.

Accepts either a DvrkRealtimeViz (calls set_camera()) or a raw pyvista.Plotter. Up-vector is always +Z.

Parameters:
  • scene – DvrkRealtimeViz or pyvista.Plotter.

  • eye – Camera position (x, y, z) in world coordinates.

  • target – Focal point (x, y, z) in world coordinates.

src.utils.visualization.set_robot_mesh_color(robots_by_name, robot_name, color)[source]

Apply a uniform color to every mesh actor belonging to one robot.

Iterates over all mesh items stored in the robot’s visualization state and sets their PyVista actor property color. A no-op if robot_name is not present in robots_by_name.

Parameters:
  • robots_by_name (Mapping[str, Any]) – Mapping from robot name to its visualization state object (e.g., DvrkRealtimeViz._robots).

  • robot_name (str) – Key identifying the target robot (e.g., "left").

  • color (str) – Color string accepted by PyVista / VTK (e.g., "red", "#FF0000").

Return type:

None

src.utils.visualization.tool_position_world(robot, theta, base_transform=None)[source]

Compute tool-tip position in world coordinates.

Runs forward kinematics up to robot.tool_link and applies the optional base transform to produce a 3-vector in world frame.

Parameters:
  • robot (RobotTreeLike) – Robot model exposing tool_link and joint structure.

  • theta (ndarray) – Joint configuration, shape (dof,).

  • base_transform (Optional[ndarray]) – Optional (4, 4) world-from-base transform. Defaults to identity.

Return type:

ndarray

Returns:

Position vector \(\mathbf{p} \in \mathbb{R}^3\).

src.utils.visualization.update_cameras(plotter, ecm, ctrl, optical_tilt_deg, n_cameras, vfov_deg, camera_roll_deg=0.0)[source]

Recompute camera poses from the current ECM state and apply them to the plotter.

Supports mono (n_cameras=1) and stereo (n_cameras=2) layouts. An optional roll correction rotates the up-vector about the optical axis:

\[\mathbf{u}' = \cos\phi\,\mathbf{u} + \sin\phi\,(\hat{\mathbf{n}} \times \mathbf{u}) + (1-\cos\phi)(\hat{\mathbf{n}} \cdot \mathbf{u})\hat{\mathbf{n}}\]

where \(\hat{\mathbf{n}}\) is the optical axis and \(\phi\) is camera_roll_deg in radians.

Parameters:
  • plotter (Plotter) – PyVista plotter to update.

  • ecm (ECM) – ECM kinematic model.

  • ctrl (EcmControl) – Current EcmControl joint state.

  • optical_tilt_deg (float) – Scope tilt angle in degrees (0 or 30).

  • n_cameras (int) – 1 (mono) or 2 (stereo).

  • vfov_deg (float) – Vertical field of view in degrees.

  • camera_roll_deg (float) – Roll offset applied to the up-vector in degrees.

Return type:

None
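The roll correction is a Rodrigues rotation of the up-vector about the optical axis, and can be sketched as (the helper name is illustrative, not the module's API):

```python
import numpy as np

def roll_up_vector(up: np.ndarray, optical_axis: np.ndarray, roll_deg: float) -> np.ndarray:
    """Rotate the up-vector about the (normalised) optical axis by roll_deg."""
    n = optical_axis / np.linalg.norm(optical_axis)
    phi = np.deg2rad(roll_deg)
    return (np.cos(phi) * up
            + np.sin(phi) * np.cross(n, up)
            + (1.0 - np.cos(phi)) * np.dot(n, up) * n)
```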

src.utils.visualization.update_clutch_state(clutch_pressed, prev_clutch_pressed, clutch_active)[source]

Detect rising/falling edges of the clutch button and update clutch state.

Edge detection is performed on the transition between the previous and current button readings:

  • Rising edge (clutch_pressed and not prev_clutch_pressed): clutch becomes active; event is ClutchEvent.PRESSED.

  • Falling edge (not clutch_pressed and prev_clutch_pressed): clutch becomes inactive; event is ClutchEvent.RELEASED.

  • No edge: clutch active state is unchanged; event is ClutchEvent.NONE.

Parameters:
  • clutch_pressed (bool) – Current (this-tick) state of the clutch button.

  • prev_clutch_pressed (bool) – State of the clutch button on the previous tick.

  • clutch_active (bool) – Current clutch active flag before this update.

Return type:

tuple[bool, bool, ClutchEvent]

Returns:

(new_clutch_active, new_prev_clutch_pressed, clutch_event)

  • new_clutch_active — updated clutch active flag.

  • new_prev_clutch_pressed — value to store as prev_clutch_pressed for the next call (equals clutch_pressed).

  • clutch_event — the edge event that occurred this tick.
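The three-way edge logic can be sketched as follows (a minimal sketch; the ClutchEvent enum shown here is assumed to mirror the module's, and the function body is illustrative):

```python
from enum import Enum

class ClutchEvent(Enum):
    NONE = 0
    PRESSED = 1
    RELEASED = 2

def update_clutch_state(pressed: bool, prev_pressed: bool, active: bool):
    """Rising edge activates the clutch, falling edge deactivates it."""
    if pressed and not prev_pressed:
        return True, pressed, ClutchEvent.PRESSED      # rising edge
    if not pressed and prev_pressed:
        return False, pressed, ClutchEvent.RELEASED    # falling edge
    return active, pressed, ClutchEvent.NONE           # no edge: state unchanged
```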

src.utils.visualization.update_jaw_command(jaw_cmd, gripper_pressed, close_speed, open_speed, jaw_min, jaw_max)[source]

Integrate jaw angle command from the gripper button and clamp to joint limits.

On each control tick the jaw command is incremented or decremented based on the gripper button state, then clamped to [jaw_min, jaw_max]:

\[\begin{split}q_{\text{jaw}}^{\text{new}} = \text{clip}\!\left( q_{\text{jaw}} + \begin{cases} -s_{\text{close}} & \text{if gripper pressed} \\ +s_{\text{open}} & \text{otherwise} \end{cases}, \; q_{\min}, q_{\max} \right)\end{split}\]
Parameters:
  • jaw_cmd (float) – Current jaw joint angle command (radians).

  • gripper_pressed (bool) – True when the operator is squeezing the gripper button (close command).

  • close_speed (float) – Angular rate (rad / tick) at which the jaw closes.

  • open_speed (float) – Angular rate (rad / tick) at which the jaw opens.

  • jaw_min (float) – Minimum allowed jaw angle (radians).

  • jaw_max (float) – Maximum allowed jaw angle (radians).

Return type:

float

Returns:

Updated jaw joint angle command clamped to [jaw_min, jaw_max].
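The per-tick integration and clamp can be sketched as (a minimal sketch; the function body is illustrative, not the module's implementation):

```python
def update_jaw_command(jaw_cmd, gripper_pressed, close_speed, open_speed, jaw_min, jaw_max):
    """Step the jaw command one control tick, then clamp to [jaw_min, jaw_max]."""
    step = -close_speed if gripper_pressed else open_speed
    return min(max(jaw_cmd + step, jaw_min), jaw_max)
```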

src.utils.visualization.update_point_poly(poly, point)[source]

Update an existing single-point pyvista.PolyData marker in-place.

Replaces poly.points and calls poly.Modified() so that the PyVista renderer picks up the change without re-adding the actor.

Parameters:
  • poly (PolyData) – Marker previously created by create_point_poly().

  • point (ndarray) – New 3-D point coordinates, shape (3,).

Return type:

None

src.utils.visualization.update_robot_meshes_on_plotter(state, theta, base_transform=None)[source]

Update mesh vertices previously added by add_robot_meshes_to_plotter.

Return type:

None

src.utils.visualization.visualize(robot, theta=None, show_frames=False, alpha=1.0)[source]

Create a DvrkRealtimeViz pre-loaded with a single robot.

Convenience factory for quick visualisation scripts.

Parameters:
  • robot – Robot model conforming to VisualRobotLike.

  • theta (Optional[ndarray]) – Joint configuration, shape (dof,). Defaults to the robot’s current configuration.

  • show_frames (bool) – Whether to draw joint-frame axis arrows.

  • alpha (float) – Mesh opacity [0, 1].

Return type:

DvrkRealtimeViz

Returns:

DvrkRealtimeViz with the robot added but the window not yet shown. Call DvrkRealtimeViz.show() to open the window.