# Node Topology and Data Flow

## Node topology
To make the runtime easy to follow for new readers, this section is organized around who receives which inputs and produces which outputs.
| Node | Main role | Key inputs | Key outputs / behavior |
|---|---|---|---|
| `cobiz_bridge_node` | device registration/update + Track/action composition | `config.yaml` (videos, audios, speaker, lidar, topics, actions, control) | publishes `/device_info` |
| `manager_node` | automatically starts/restarts child nodes based on `/device_info` | subscribes to `/device_info`, reads `config.yaml` | manages `video_node`, `topic_node`, `audio_node`, `speaker_node`, `lidar_node`, `sensor_node`, `occupancy_grid_node`, `control_node`, `health_check_node`, and `request_manager` |
| `health_check_node` | receives and filters server events | server websocket (`TASK_REGISTERED`, `TASK_ABORTED`) | publishes `/task_event` |
| `request_manager` | forwards plugin status to centralized HTTP endpoints | subscribes to `/task_state` | `POST {COBIZ_API_ADDRESS}/api/tasks/{task_id}/{task_type}` |
| `video_node` | encodes and forwards RTSP/UVC video sources | `type=rtsp/uvc`, `source`, `format`, `width`, `height`, `fps` | websocket transfer (H.265/H.264/VP9) |
| `topic_node` | encodes and forwards ROS image topics | `source`, `format`, `width`, `height`, `fps`, `message_type` | websocket transfer + diagnostic logging |
| `audio_node` | encodes and forwards ALSA/Pulse/RTSP audio sources with Opus | `type=alsa/pulse/rtsp`, `source`, `format`, `sample_rate`, `channels` | websocket transfer |
| `speaker_node` | receives server-side Opus audio and plays it through speakers | websocket audio stream, `type`, `source`, `rate`, `channels` | ALSA/Pulse playback |
| `lidar_node` | compresses and forwards PointCloud2 | `topic` (`sensor_msgs/PointCloud2`) | Draco-compressed payload over websocket |
| `sensor_node` | serializes and forwards arbitrary ROS topics | `topic`, `topic_type` (optional) | websocket transfer |
| `occupancy_grid_node` | specialized transmission for OccupancyGrid | `topic` (`nav_msgs/OccupancyGrid`) | PNG + metadata transfer |
| `control_node` | bridges remote control input to Joy topics | websocket control messages | publishes `sensor_msgs/Joy` |
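The table above refers to sections of `config.yaml` repeatedly. The following fragment is a hypothetical sketch of what such a file could look like; the exact schema is an assumption, and field names are taken only from the "Key inputs" column (only some sections are shown):

```yaml
# Hypothetical config.yaml sketch -- field names inferred from the
# "Key inputs" column above; values are illustrative only.
videos:
  - type: rtsp            # rtsp | uvc | topic
    source: rtsp://192.168.0.10:554/stream1
    format: h264
    width: 1280
    height: 720
    fps: 30
audios:
  - type: alsa            # alsa | pulse | rtsp
    source: hw:0,0
    sample_rate: 48000
    channels: 2
speaker:
  type: alsa
  source: hw:0,0
  rate: 48000
  channels: 2
lidar:
  topic: /points          # sensor_msgs/PointCloud2
topics:
  - source: /camera/image_raw
    message_type: sensor_msgs/Image
```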
## Nodes started by manager_node per Track type

- VIDEO Track: if `videos[].type == topic`, run `topic_node`; otherwise, for `rtsp` and `uvc`, run `video_node`
- AUDIO Track: `audio_node`
- SPEAKER Track: `speaker_node`
- LIDAR Track: `lidar_node`
- CONTROL Track: `control_node`
- MAP Track: `occupancy_grid_node`
- TOPIC, BATTERY, ODOMETRY, TRAJECTORY Tracks: `sensor_node`
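The dispatch rules above can be sketched as a small lookup. This is not the actual manager_node code; the helper name `select_node` and the plain-string Track names are assumptions, but the mapping mirrors the list above:

```python
# Hypothetical sketch of manager_node's Track-type dispatch.
# Helper name and Track-string representation are assumptions;
# the mapping itself mirrors the documented rules.
from typing import Optional

SENSOR_TRACKS = {"TOPIC", "BATTERY", "ODOMETRY", "TRAJECTORY"}

TRACK_TO_NODE = {
    "AUDIO": "audio_node",
    "SPEAKER": "speaker_node",
    "LIDAR": "lidar_node",
    "CONTROL": "control_node",
    "MAP": "occupancy_grid_node",
}


def select_node(track_type: str, video_source_type: Optional[str] = None) -> str:
    """Return the child node to start for a given Track."""
    if track_type == "VIDEO":
        # videos[].type == "topic" -> topic_node; rtsp/uvc -> video_node
        return "topic_node" if video_source_type == "topic" else "video_node"
    if track_type in SENSOR_TRACKS:
        return "sensor_node"
    return TRACK_TO_NODE[track_type]
```

For example, `select_node("VIDEO", "rtsp")` selects `video_node`, while `select_node("ODOMETRY")` falls through to `sensor_node`.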
## Key data flows

### Core flow

`cobiz_bridge_node → /device_info → manager_node → child nodes`
This path is how the endpoint declares the sensors, media, and control capabilities it owns and automatically starts the corresponding execution nodes.
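As a rough plain-Python sketch of this declaration path (the real nodes use rclpy publishers/subscribers on `/device_info`; the function names and message fields here are assumptions):

```python
# Plain-Python sketch of the core flow. In the actual system these
# would be rclpy callbacks wired through /device_info; all names
# and the message shape are assumptions for illustration.

def compose_device_info(config: dict) -> dict:
    """cobiz_bridge_node side: turn config.yaml sections into Tracks."""
    tracks = []
    if config.get("videos"):
        tracks.append("VIDEO")
    if config.get("audios"):
        tracks.append("AUDIO")
    if config.get("lidar"):
        tracks.append("LIDAR")
    if config.get("control"):
        tracks.append("CONTROL")
    return {"tracks": tracks}


def on_device_info(device_info: dict) -> list:
    """manager_node side: decide which child nodes to start."""
    mapping = {"VIDEO": "video_node", "AUDIO": "audio_node",
               "LIDAR": "lidar_node", "CONTROL": "control_node"}
    return [mapping[t] for t in device_info["tracks"]]
```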
### Task-state flow

`health_check_node → /task_event → plugin → /task_state → request_manager → Server API`
This is the standard plugin path. A plugin receives /task_event, runs its domain logic, and publishes the result to /task_state. The actual server API call is handled by request_manager.
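A hypothetical sketch of the request_manager end of this path. Only the URL pattern `{COBIZ_API_ADDRESS}/api/tasks/{task_id}/{task_type}` comes from the source; the `/task_state` payload shape and all function names are assumptions:

```python
# Hypothetical request_manager sketch. Only the URL pattern is
# documented; payload fields and names are assumptions.

def build_task_url(api_address: str, task_id: str, task_type: str) -> str:
    """Build the server endpoint for a /task_state report."""
    return f"{api_address}/api/tasks/{task_id}/{task_type}"


def on_task_state(state_msg: dict, api_address: str):
    """Turn a /task_state message into an HTTP POST (url, body) pair.

    A real node would send this with an HTTP client; returning the
    pair keeps the sketch free of network side effects.
    """
    url = build_task_url(api_address, state_msg["task_id"],
                         state_msg["task_type"])
    body = {"status": state_msg["status"]}
    return url, body
```

Keeping URL construction and reporting in one place is what lets plugins publish to `/task_state` without knowing the server API at all.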
## Role separation from an operational perspective

- `cobiz_bridge_node`: the starting point for device registration and capability declaration
- `manager_node`: the node lifecycle orchestrator
- `health_check_node`: the adapter that converts server events into ROS 2 internal events
- plugins: execute Task domain logic
- `request_manager`: standardizes server reporting and owns the external HTTP boundary
Because of this structure, plugins can focus on execution results and state transitions, not on network implementation details.