jithub

  • Jul 22, 2022
  • Joined Feb 17, 2022
  • 0 best answers
  • erik I am unable to run the Docker image contained in the above repo on the Jetson, as its host platform (armv8) does not match the image's specified platform (amd64). There are many differences between my Dockerfile and the example provided, as I have many more complex dependencies and my application is written in C++ as opposed to Python. I was able to resolve my issues by running the container with --privileged and adding a udev service restart to my entrypoint.sh file to ensure that the rules are reloaded and triggered correctly.
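
    For anyone hitting the same thing, a minimal sketch of what that entrypoint addition can look like (illustrative, not my exact file):

    #!/bin/bash
    # Reload udev rules and re-trigger events so the OAK's USB device nodes
    # get the right permissions. Requires the container to be started with --privileged.
    service udev restart
    udevadm control --reload-rules
    udevadm trigger

    # ...then hand off to the actual application
    exec "$@"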

  • erik Hi Erik,

    I am not using the depthai-docker repo, as I have a very specific use case that requires CUDA forwarding from the host to the container, and specifically Ubuntu 20.04. I have consulted the files therein to try to troubleshoot my setup. I am using the l4t-ubuntu20-crosscompile image from this repo as my base, building OpenCV from source with the features I need, and using that image to pull and run code that uses the DepthAI libraries. Would the depthai-docker repo's issues still be the best place to get help?
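
    Roughly, the build chain looks like the sketch below (an illustrative Dockerfile, not my actual one; the base image placeholder and the OpenCV options stand in for whatever features are needed):

    # illustrative sketch only
    FROM <l4t-ubuntu20-crosscompile image>

    # build OpenCV from source with the features I need (e.g. CUDA, GStreamer)
    RUN git clone https://github.com/opencv/opencv.git && \
        cd opencv && mkdir build && cd build && \
        cmake -D WITH_CUDA=ON -D WITH_GSTREAMER=ON .. && \
        make -j"$(nproc)" && make install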

    • erik replied to this.
    • Howdy. I am working on deploying a codebase to a Jetson Nano and was hoping to containerize my work, as some dependencies are not correctly versioned under Ubuntu 18.04, which the Jetson's L4T ships with. I've set up a Docker container for the Jetson that uses Ubuntu 20.04, and I am able to compile and build my code inside the container. When I run, I receive the [warning] skipping X_LINK_UNBOOTED device having name "<error>" message, and the code terminates with tkill(). I am working with an OAK-D USB camera, and have confirmed that I am able to view and access the camera's output properly on the host machine. I've set udev rules inside the container as well (sketched at the end of this post), and have been able to compile and run the code with an identical workflow on a different host machine running Ubuntu 20.04.

      Are there any steps I am missing here? The error above is only referenced with respect to udev rules in the Troubleshooting documentation; am I missing something about forwarding/sharing these rules inside the container? The run command I am using is as follows:

      nvidia-docker run -it --rm -v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 198:* rmw' <container name>:latest

      Can anyone point me in the right direction to access the camera from inside my container? I have searched through all documentation and depthai source, including the /ci/ folders in depthai-core and depthai-python to no avail.
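
      (For reference, the udev rule I mean is the standard DepthAI one; a rough sketch of how it gets applied inside the container, where the exact file name may differ:)

      echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666"' | tee /etc/udev/rules.d/80-movidius.rules
      udevadm control --reload-rules && udevadm trigger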

      • erik replied to this.
      • Woof, this is a doozy. So, I have created an application which serves DepthAI-processed video to an RTSP stream. This code runs on a Jetson Nano, making use of hardware acceleration to encode and run some CV ops. The problem is that L4T ships with an Ubuntu 18.04 distro... whose GStreamer library packages are old enough that none of the RTSP server code for an appsrc will compile. This is not wholly relevant to my post, but I feel the need to explain why I am in this situation and what my constraints are, in case there is a better solution.

        In order to avoid having to a) go through an Ubuntu upgrade every time I bring up a new system and b) recompile and build OpenCV from source with CUDA and GStreamer, I was hoping to squash all necessary dependencies into a Docker container running Ubuntu 20.04, and simply run my RTSP server inside there. For the record, the application works flawlessly on a host Jetson.

        This brings me to my main problem. When I attempt to run my application (it compiles + builds no problem), I receive an error:

        Failed to find device after booting, error message: X_LINK_DEVICE_NOT_FOUND

        I have tried several solutions, inspired by what I have seen work on my host systems and by various pieces gleaned from other people's Dockerfiles. I am currently running the following command to bring the container up:

        docker run --rm --privileged -it -v /dev/bus/usb:/dev/bus/usb --device-cgroup-rule='c 189:* rmw' my-image-name:latest

        The image is configured with an entrypoint that runs my compile and build steps. When I attempt to run the resulting binary, the above error occurs. I am unable to make any fixes with udev, as, to my knowledge, it does not work inside containers. I have tee'd the rule in anyway, as it is present in the depthai-docker repo Dockerfile. Looking at the files found in depthai-python/ci, there is a step where libusb is installed from source and a setup script is called to ignore udev - is that required to get this all to work?
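
        (My rough reading of that step is that it just builds libusb with udev support turned off, something along the lines of the sketch below - illustrative, not the actual CI script:)

        # inside the libusb source tree
        ./configure --disable-udev --prefix=/usr/local
        make -j"$(nproc)" && make install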

        Overall, if anyone has any idea where I might be going wrong/how to set up DepthAI over USB to a Docker container that I am building from scratch (mostly), please let me know!

        • erik replied to this.
          • In further debugging, I've been able to confirm that the camera's output queues no longer return true for has() calls. The number of frames found in the queue seems to vary, around 9-15, before it becomes empty. The queue is not being closed and has blocking set to false, and switching the queue calls from ctx->frame_queue to video (the object returned by getOutputQueue()) changes nothing in the behavior. I am mightily confused as to what could be causing this issue.

          • erik replied to this.
          • Howdy!
            I've been working on writing a C++ application to stream video from the DAI camera over RTSP. I am currently using the OAK-D USB camera, which I have confirmed does not exhibit these same issues in Python. After getting the streaming framework itself written and tested, I was hoping to integrate a simple, unprocessed ColorCamera feed. However, on launching the server and connecting with a client, the camera provides 20 frames of video (counted using system prints) before the client connection times out. I suspected that this was a problem not with the streaming itself, but with the camera providing media to the server. I was able to confirm this by removing any streaming from the loop and just attempting to display the ColorCamera feed using OpenCV. Code snippet follows:

            // Define the camera node and XLink outputs on the pipeline
            auto camRgb = pipeline.create<node::ColorCamera>();
            auto xoutVideo = pipeline.create<node::XLinkOut>();
            auto xoutPreview = pipeline.create<node::XLinkOut>();

            xoutVideo->setStreamName("video");
            xoutPreview->setStreamName("preview");

            // Camera configuration
            camRgb->setPreviewSize(1920, 1080);
            camRgb->setBoardSocket(CameraBoardSocket::RGB);
            camRgb->setResolution(ColorCameraProperties::SensorResolution::THE_1080_P);
            camRgb->setColorOrder(ColorCameraProperties::ColorOrder::BGR);

            // Link camera outputs to the XLink streams
            camRgb->video.link(xoutVideo->input);
            camRgb->preview.link(xoutPreview->input);

            // Connect to the device and open the "video" output queue
            // (max size 64, non-blocking)
            // shared_device = make_shared<Device> (pipeline);
            device_ = new Device(pipeline);
            auto video = device_->getOutputQueue("video", 64, false);

            liveCtx * ctx_;

            ctx_ = new liveCtx;
            ctx_->frame_queue = video;

            // Pull frames from the queue and display them with OpenCV
            while (true) {
                auto videoFrame = ctx_->frame_queue->get<ImgFrame>();
                imshow("video", videoFrame->getCvFrame());

                int key = waitKey(1);
                if (key == 'q' || key == 'Q') { return 0; }
            }

            where liveCtx is defined as follows:


            typedef struct {
                std::shared_ptr<DataOutputQueue> frame_queue;
                GstClockTime timestamp;
            } liveCtx;

            Frames display on program startup, through the camera autofocus, before the camera freezes. No errors are thrown and the program continues to run, but the camera does not seem to provide any more frames. Checking dmesg -wH, nothing out of the ordinary seems to be occurring on the hardware side. I have also tried creating the Device with usb2Mode=true, which did not seem to change anything. I have to assume that there is something going on in my code which keeps the camera from providing frames, or which fails to keep requesting frames from the camera, but I cannot identify where that would be happening. Any help would be appreciated!

          • Howdy all!
            I know there are some experimental examples of RTSP streaming using encoded video outputs from the OAK cameras. My eventual goal is the streaming of processed video, like stereo depth, object ID/detection, etc, over this RTSP stream. My understanding, from some comments I've read, is that these processed output nodes cannot be linked to encoder nodes in the onboard pipeline. Is this accurate? If so, has anyone had luck getting processed video to stream over RTSP using OpenCV frames direct from pipeline nodes, rather than encoded bitstream like in the example?

          • Hi Erik,
            Thank you for the reply! Unfortunately, my use case has some very specific requirements - RTSP streaming over an IP radio - so I don't think I'll be able to take advantage of the native streaming capabilities, and will have to use a companion computer. I already have an FFC camera and 3 IMX477s, but I was hoping to take the total footprint down by using a board with something built-in. I have some follow-up Q's, but as they are less related to this topic, I am going to make a new thread.

          • Hey all!
            I have a specific use case where I hope to place an OAK-D board of some SKU on a UAS, with the initial goal of streaming video over RTSP to a ground station, and the eventual integration of the stereo and monocular vision into the control loop for object avoidance and object tracking. I currently have a BW1098FFC and BW1098OBC to play around with. In trying to select the appropriate model, I had a few questions about capability.
            First, am I correct in understanding that all models would require a host or companion device of some kind to achieve this functionality? My options on that end would be a Raspberry Pi CM4 or Jetson Nano, depending on how my testing goes for speed and the eventual requirements for control update rate. I wanted to make sure that there is no board option which would allow me to stream frames, processed or not, directly from the device over an IP radio. If possible, I would like to avoid a companion computer, as the system is heavily SWaP (size, weight, and power) constrained.
            Second, I have been looking at the model with the integrated Pi CM, but have reservations about image quality. Given the speeds and distances at which the vehicle will need to operate, I don't think the image sensors embedded on the board are going to cut it. Is there any way for me to replace those sensors with something like an IMX477, or even just add a lens to the sensors in place to achieve a wider field of view?
            Thanks everyone! Super stoked to be diving into these boards!

            • erik replied to this.