• What model is used in the person tracking example video?

erik

Is this video from inference being performed on-device? I cannot seem to get my fps anywhere close to this speed on my OAK-D PoE.

    Hi AdamPolak,
    Anything pipeline-related computes on the device. You might be getting decreased fps due to the limited bandwidth of your Ethernet link (perhaps you are sending back high-resolution video). Thoughts?
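
    A quick way to check whether the link is the bottleneck is to measure the frame arrival rate on the host. A minimal sketch (the 300x300 preview and the "rgb" stream name are just placeholders for your own pipeline):

    #include <chrono>
    #include <iostream>
    #include "depthai/depthai.hpp"

    int main() {
        // Tiny pipeline: color preview streamed to the host over XLink
        dai::Pipeline pipeline;
        auto cam = pipeline.create<dai::node::ColorCamera>();
        cam->setPreviewSize(300, 300);
        auto xout = pipeline.create<dai::node::XLinkOut>();
        xout->setStreamName("rgb");
        cam->preview.link(xout->input);

        dai::Device device(pipeline);
        auto q = device.getOutputQueue("rgb", 4, false);

        // Count frames received per second on the host side
        int frames = 0;
        auto t0 = std::chrono::steady_clock::now();
        while(true) {
            q->get<dai::ImgFrame>(); // blocks until a frame arrives
            frames++;
            double elapsed = std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
            if(elapsed >= 1.0) {
                std::cout << "host-side fps: " << frames / elapsed << std::endl;
                frames = 0;
                t0 = std::chrono::steady_clock::now();
            }
        }
    }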

    Regards,
    Jaka

      AdamPolak also note that running the pipeline from a video file (instead of live video from the camera sensors) will be much slower, as there's a large overhead of sending frames to the device.
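
      For reference, a minimal sketch of that host-to-device frame path, assuming an XLinkIn stream named "inFrame" and 300x300 planar-BGR frames (these values are illustrative, not from any specific setup):

      // Device side: an XLinkIn node receives frames pushed from the host
      auto xinFrame = pipeline->create<dai::node::XLinkIn>();
      xinFrame->setStreamName("inFrame");
      // xinFrame->out would then be linked to e.g. a NeuralNetwork's input

      // Host side: wrap each decoded video frame in an ImgFrame and send it
      auto inQ = device.getInputQueue("inFrame");
      std::vector<std::uint8_t> frameData(3 * 300 * 300); // filled from your decoded video frame
      auto imgFrame = std::make_shared<dai::ImgFrame>();
      imgFrame->setType(dai::ImgFrame::Type::BGR888p);
      imgFrame->setWidth(300);
      imgFrame->setHeight(300);
      imgFrame->setData(frameData);
      inQ->send(imgFrame);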

      jakaskerl

      Great point. I am on PoE, so sending the frames is likely causing the issue.

      1. Could this demo handle compression of the frames as well, or would that be too much of a workload? What is the best image compression technique for RGB and for depth?

        Hi AdamPolak

        I think the VideoEncoder has its own hardware block, so it probably wouldn't affect performance. I'm not sure what the best technique would be. Just make sure to encode disparity and not depth, since the encoder only supports up to INT8 frames (depth is INT16).
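
        For reference, a minimal sketch of disparity encoding with the MJPEG preset (the node and stream names here are just examples):

        // StereoDepth's disparity output is 8-bit, which the encoder accepts
        auto stereo = pipeline->create<dai::node::StereoDepth>();
        // (mono cameras would be linked to stereo->left / stereo->right here)

        auto videoEnc = pipeline->create<dai::node::VideoEncoder>();
        videoEnc->setDefaultProfilePreset(30, dai::VideoEncoderProperties::Profile::MJPEG);
        stereo->disparity.link(videoEnc->input);

        // Ship the compressed bitstream to the host instead of raw frames
        auto encOut = pipeline->create<dai::node::XLinkOut>();
        encOut->setStreamName("disparity");
        videoEnc->bitstream.link(encOut->input);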

        Hope this helps,
        Jaka

          jakaskerl

          Is it possible to use sequence numbers with videos, to sync the detections with the received video frames for display?

          I don't see any examples of streaming video from the device and then displaying the frames on the host.

          Would pulling the video align with the preview when sent to the host downstream? Should I link the video feed to an XLinkOut in order to send compressed frames?
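
          For what it's worth, a minimal host-side sketch of matching by sequence number (the "rgb" and "detections" stream names are assumptions):

          auto qRgb = device.getOutputQueue("rgb", 8, false);
          auto qDet = device.getOutputQueue("detections", 8, false);

          // Buffer frames until the detections with the same sequence number arrive
          std::map<int64_t, std::shared_ptr<dai::ImgFrame>> pendingFrames;
          while(true) {
              if(auto frame = qRgb->tryGet<dai::ImgFrame>()) {
                  pendingFrames[frame->getSequenceNum()] = frame;
              }
              if(auto det = qDet->tryGet<dai::ImgDetections>()) {
                  auto it = pendingFrames.find(det->getSequenceNum());
                  if(it != pendingFrames.end()) {
                      // it->second and det describe the same capture: draw and display here
                      pendingFrames.erase(pendingFrames.begin(), std::next(it)); // also drop older frames
                  }
              }
          }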

            AdamPolak

            I used ImageManip to reduce the frame sizes by downscaling the resolution, but it seems to have slowed the fps down even further?

            // ColorCamera setup
            camRgb = pipeline->create<dai::node::ColorCamera>();
            camRgb->setPreviewSize(st->rgb_dai_preview_x, st->rgb_dai_preview_y);
            camRgb->setResolution(st->rgb_dai_res);
            camRgb->setInterleaved(false);
            camRgb->setColorOrder(dai::ColorCameraProperties::ColorOrder::RGB);
            camRgb->setBoardSocket(dai::CameraBoardSocket::RGB);

            // Here I will take the camRgb frames and downscale them before sending them out
            camRgbManip = pipeline->create<dai::node::ImageManip>();
            camRgbManip->initialConfig.setResize(408, 240);
            camRgbManip->initialConfig.setFrameType(dai::ImgFrame::Type::BGR888p);
            camRgb->preview.link(camRgbManip->inputImage);

            rgbOut = pipeline->create<dai::node::XLinkOut>();
            rgbOut->setStreamName("rgb");
            camRgbManip->out.link(rgbOut->input);

            Does ImageManip take processing power away from the NNs?

              Hi AdamPolak
              ImageManip shouldn't usually take resources away from the NNs. You can check what's using your resources with DEPTHAI_LEVEL=DEBUG python3 <filename>. What you are looking for are CMX slices and SHAVE cores.
              More info here.

              Thanks,
              Jaka