• RAE
  • yolov4 and mobilenet node run errors

Starting a luxonis/rae-ros-robot:humble container like this (I'm using /data/persistent/home/root/ros_ws for persistent ROS2 dev packages):

`docker run -it --restart=unless-stopped -v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw' -v /dev/:/dev/ -v /sys/:/sys/ -v /data/persistent/home/root/ros_ws/:/data/persistent/home/root/ros_ws/ --privileged --net=host -e ROS_DOMAIN_ID='1' luxonis/rae-ros-robot:humble`

And then using packages in the container's underlay_ws directory (from the depthai-ros repository):

root@rae:/# ros2 launch depthai_examples yolov4_publisher.launch.py

I get the error:

[INFO] [launch]: All log files can be found below /root/.ros/log/2024-02-14-15-22-40-511307-rae-292
[INFO] [launch]: Default logging verbosity is set to INFO
Default resources path..............
/underlay_ws/install/depthai_examples/share/depthai_examples/resources
[INFO] [robot_state_publisher-1]: process started with pid [294]
[INFO] [yolov4_spatial_node-2]: process started with pid [296]
[robot_state_publisher-1] [INFO] [1707924162.033983431] [oak_state_publisher]: got segment oak-d-base-frame
[robot_state_publisher-1] [INFO] [1707924162.034422930] [oak_state_publisher]: got segment oak-d_frame
[robot_state_publisher-1] [INFO] [1707924162.034477263] [oak_state_publisher]: got segment oak_imu_frame
[robot_state_publisher-1] [INFO] [1707924162.034515638] [oak_state_publisher]: got segment oak_left_camera_frame
[robot_state_publisher-1] [INFO] [1707924162.034552263] [oak_state_publisher]: got segment oak_left_camera_optical_frame
[robot_state_publisher-1] [INFO] [1707924162.034588638] [oak_state_publisher]: got segment oak_model_origin
[robot_state_publisher-1] [INFO] [1707924162.034623763] [oak_state_publisher]: got segment oak_rgb_camera_frame
[robot_state_publisher-1] [INFO] [1707924162.034659137] [oak_state_publisher]: got segment oak_rgb_camera_optical_frame
[robot_state_publisher-1] [INFO] [1707924162.034694596] [oak_state_publisher]: got segment oak_right_camera_frame
[robot_state_publisher-1] [INFO] [1707924162.034729804] [oak_state_publisher]: got segment oak_right_camera_optical_frame
[yolov4_spatial_node-2] [2024-02-14 15:22:43.341] [depthai] [warning] USB protocol not available - If running in a container, make sure that the following is set: "-v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw'"
[yolov4_spatial_node-2] terminate called after throwing an instance of 'std::invalid_argument'
[yolov4_spatial_node-2] what(): Streams have duplcate name 'detections'
[ERROR] [yolov4_spatial_node-2]: process has died [pid 296, exit code -6, cmd '/underlay_ws/install/depthai_examples/lib/depthai_examples/yolov4_spatial_node --ros-args --params-file /tmp/launch_params_8cr7w2tq --params-file /tmp/launch_params_0v92kdg1 --params-file /tmp/launch_params_6sjmjfp7 --params-file /tmp/launch_params_fy_u9gzv --params-file /tmp/launch_params_dkjrponb --params-file /tmp/launch_params_yk6azf6r --params-file /tmp/launch_params_q2531l2v'].

I receive a similar warning when running:

root@rae:/# ros2 launch depthai_examples mobilenet_publisher.launch.py

Are these nodes tested on rae yet?

Hey,

We are slowly moving to hosting our Docker images on our own GitHub packages - this is the latest one.

The depthai_examples scripts are not tested and we don't expect them to work with rae. If you want to run a simple detection network - that already works in the apps we have published through RH.

This node should be able to run detections, though I will have to fix it up to bring it up to date.

Alternatively, we are developing Python bindings that will make development much easier. Here is one script that should run detections on the front stereo pair - just copy-paste this code and run it on the image I mentioned above:

```python
import depthai as dai
import time
import depthai_ros_py_bindings as dai_ros

class Manager:
    def __init__(self):
        pipeline = dai.Pipeline()

        imu = pipeline.create(dai.node.IMU)
        imu.enableIMUSensor(dai.IMUSensor.ACCELEROMETER_RAW, 400)
        imu.enableIMUSensor(dai.IMUSensor.GYROSCOPE_RAW, 400)
        imu.enableIMUSensor(dai.IMUSensor.ROTATION_VECTOR, 400)
        imu.setBatchReportThreshold(1)
        imu.setMaxBatchReports(10)
        xout_imu = pipeline.create(dai.node.XLinkOut)
        xout_imu.setStreamName("imu")
        imu.out.link(xout_imu.input)

        left = pipeline.create(dai.node.ColorCamera)
        left.setBoardSocket(dai.CameraBoardSocket.CAM_B)
        left.setResolution(dai.ColorCameraProperties.SensorResolution.THE_800_P)
        left.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)
        left.setFps(30)
        left.setVideoSize(640, 400)
        left.setPreviewSize(416, 416)
        left.setInterleaved(False)

        encoder_left = pipeline.create(dai.node.VideoEncoder)
        encoder_left.setDefaultProfilePreset(30, dai.VideoEncoderProperties.Profile.MJPEG)
        encoder_left.setQuality(50)
        left.video.link(encoder_left.input)
        
        xout_left = pipeline.create(dai.node.XLinkOut)
        xout_left.setStreamName("left")
        xout_left.input.setBlocking(False)

        right = pipeline.create(dai.node.ColorCamera)
        right.setBoardSocket(dai.CameraBoardSocket.CAM_C)
        right.setResolution(dai.ColorCameraProperties.SensorResolution.THE_800_P)
        right.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)
        right.setFps(30)
        right.setVideoSize(640, 400)
        right.setInterleaved(False)

        encoder_right = pipeline.create(dai.node.VideoEncoder)
        encoder_right.setDefaultProfilePreset(30, dai.VideoEncoderProperties.Profile.MJPEG)
        encoder_right.setQuality(50)
        right.video.link(encoder_right.input)

        xout_right = pipeline.create(dai.node.XLinkOut)
        xout_right.setStreamName("right")
        xout_right.input.setBlocking(False)
        stereo = pipeline.create(dai.node.StereoDepth)

        detectionNetwork = pipeline.create(dai.node.YoloSpatialDetectionNetwork)
        detectionNetwork.setConfidenceThreshold(0.6)
        detectionNetwork.setNumClasses(80)
        detectionNetwork.setCoordinateSize(4)
        detectionNetwork.setIouThreshold(0.5)
        detectionNetwork.setBlobPath("/ws/src/rae-ros/yolov6n_416x416_openvino2022.1_vpux.blob")
        detectionNetwork.setNumInferenceThreads(8)
        detectionNetwork.input.setBlocking(False)
        detectionNetwork.input.setQueueSize(1)
        left.preview.link(detectionNetwork.input)
        stereo.depth.link(detectionNetwork.inputDepth)

        xout_det = pipeline.create(dai.node.XLinkOut)
        xout_det.setStreamName("detections")
        detectionNetwork.out.link(xout_det.input)

        left.video.link(stereo.left)
        encoder_left.bitstream.link(xout_left.input)
        right.video.link(stereo.right)
        encoder_right.bitstream.link(xout_right.input)

        xout_stereo = pipeline.create(dai.node.XLinkOut)
        xout_stereo.setStreamName("stereo")
        xout_stereo.input.setBlocking(False)
        stereo.depth.link(xout_stereo.input)

        self.device = dai.Device(pipeline)

        self.context_manager = dai_ros.ROSContextManager()
        self.context_manager.init([""], "single_threaded")
        opts = dai_ros.ROSNodeOptions()
        self.node = dai_ros.ROSNode("dai", opts)

        calHandler = self.device.readCalibration()
        self.streamer_imu = dai_ros.ImuStreamer(self.node, "imu", "imu_frame", dai_ros.ImuSyncMethod.COPY, 0.0, 0.0, 0.0, 0.0, True, False, False)
        self.streamer_left = dai_ros.ImgStreamer(self.node, calHandler, dai.CameraBoardSocket.CAM_C, "left", "left_frame", False, False)
        self.streamer_left.convertFromBitStream(dai.RawImgFrame.Type.BGR888i)
        self.streamer_right = dai_ros.ImgStreamer(self.node, calHandler, dai.CameraBoardSocket.CAM_B, "right", "right_frame", False, False)
        self.streamer_right.convertFromBitStream(dai.RawImgFrame.Type.BGR888i)
        self.streamer_stereo = dai_ros.ImgStreamer(self.node, calHandler, dai.CameraBoardSocket.CAM_B, "stereo", "stereo_frame", False, False)
        self.streamer_detections = dai_ros.SpatialDetectionStreamer(self.node, "detections", "detections_frame", 416, 416, False, False)

        self.device.getOutputQueue("detections", 8, False).addCallback(self.det_handle)
        self.device.getOutputQueue("imu", 8, False).addCallback(self.imu_handle)
        self.device.getOutputQueue("stereo", 8, False).addCallback(self.stream_handle)
        self.device.getOutputQueue("left", 8, False).addCallback(self.stream_handle)
        self.device.getOutputQueue("right", 8, False).addCallback(self.stream_handle)

        self.context_manager.add_node(self.node)
        self.context_manager.spin()
        self.node.log("Ready")

    def det_handle(self, name: str, msg: dai.SpatialImgDetections):
        self.streamer_detections.publish(name, msg)

    def imu_handle(self, name: str, msg: dai.IMUData):
        self.streamer_imu.publish(name, msg)

    def stream_handle(self, name: str, msg: dai.ImgFrame):
        if name == "stereo":
            self.streamer_stereo.publish(name, msg)
        elif name == "left":
            self.streamer_left.publish(name, msg)
        elif name == "right":
            self.streamer_right.publish(name, msg)


if __name__ == "__main__":
    manager = Manager()
    while dai_ros.ros_ok():
        time.sleep(0.001)
```
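If the bindings are available in the image mentioned above, saving the script and launching it with Python directly should be enough; the file name detections.py here is just an example. The stream names set in the pipeline ("imu", "left", "right", "stereo", "detections") should then show up as ROS topics:

python3 detections.py
ros2 topic list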

This is a very good example and I have it working on my RAE via the StarterApp. Now I would like to visualize the streams with RVIZ. For good reasons RVIZ is not installed on the RAE, so I would like to run RVIZ on my computer (Windows or Ubuntu) and connect to the RAE. Can somebody give some hints on how to do that?

    Hi @IvoGermann
    Not too familiar with ROS, but wouldn't just setting the env variables to something like:

    export ROS_MASTER_URI=http://192.168.1.100:11311
    export ROS_IP=192.168.1.101

    allow you to connect with RVIZ?

    Thanks,
    Jaka

    IvoGermann What I did was (for now I am avoiding RH), using ROS2 Humble. I did not need to set ROS_MASTER_URI or ROS_IP - maybe those are for versions of ROS2 prior to Humble?

    1. Set the Rae's WIFI to a static IP on my internal network.

    2. Used the rae_description and rae_gazebo packages from luxonis/rae-ros and built those packages on a virtual Ubuntu 22.04 machine with ROS2 Humble installed. For me, all sim is done outside of rae.

    3. Made sure the ROS_DOMAIN_ID env variable on the Ubuntu VM matched the ROS_DOMAIN_ID set on rae (I set this in ~/.bashrc, which is where I also sourced the local ROS2 workspace; see the sketch after this list).

    4. Ran rviz2 on the Ubuntu VM.
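    As a rough sketch, the relevant ~/.bashrc lines on the VM would look something like this (the domain ID value and workspace path are examples, adjust to your setup):

    export ROS_DOMAIN_ID=1                  # must match the value used on rae
    source /opt/ros/humble/setup.bash
    source ~/ros_ws/install/setup.bash      # local workspace with the rae packages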

    After launching Rae's cameras (ros2 launch rae_bringup robot.launch.py) you should be able to view the camera images in rviz2 by adding an Image panel and subscribing to the topic where the camera is being published. You can see the published topics using ros2 topic list. If you don't see the camera and other topics listed, check echo $ROS_DOMAIN_ID on both machines again and make sure both machines can ping each other.
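    For example (the IP address is hypothetical), on each machine:

    echo $ROS_DOMAIN_ID    # must print the same value on both
    ping 192.168.1.50      # the other machine's IP
    ros2 topic list        # camera topics should appear once both agree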

    Take a look at gazebo.launch.py in the rae_gazebo package to launch both Gazebo and RViz (ros2 launch rae_gazebo gazebo.launch.py); it also starts the robot state publisher, the Gazebo parameter bridge and the other nodes needed for rviz/gazebo visualization.

    By running the yolo4 script above on rae (again, make sure the cameras are running), you should see the detections topic being published (which you will need to handle somewhere else). Note you need to use the docker image ghcr.io/luxonis/rae-ros:v0.2.2-humble, build the yolo4_detect package and run it within the docker container using ros2 run yolo4_detect detect_yolo4.
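    A typical build-and-run sequence inside the container would be roughly the following (the /ws workspace path is an assumption, adjust to wherever the package lives):

    cd /ws
    colcon build --packages-select yolo4_detect
    source install/setup.bash
    ros2 run yolo4_detect detect_yolo4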

    This is just a general idea. See rae_gazebo/rviz/urdf.rviz to set up a run configuration for rviz.
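    You can load that configuration directly when starting rviz2, e.g. (the checkout path is an assumption):

    rviz2 -d ~/ros_ws/src/rae-ros/rae_gazebo/rviz/urdf.rviz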

    Is this what you are looking for?

      mjohannessen I did not need to set ROS_MASTER_URI or ROS_IP - maybe those are for versions of ROS2 prior to Humble?

      I think it's for ROS1. Apparently ROS2 works OOTB as long as both machines are on the same domain.