GeonsooKim

  • Nov 17, 2022
  • Joined Aug 25, 2022
  • 0 best answers
  • RyanLee

    Are you in the Discord channel? We can talk through DMs in Discord. miknai is my username.

    Thanks

  • @RyanLee

    Just wanted to say hi. It was pretty cool seeing the Korean waybill in this forum. Seems the Luxonis cameras are already going global, haha.

    • erik

      Thanks for the clarification.

      Quick question about your blocking/non-blocking statement, "if host-side queue fills up and is blocking, it will start blocking device-side nodes". My understanding is that when the host-side queue has an available spot, the old data is sent from the device-side queue, because that old data sits there until it is consumed. IIRC, this is the case I ran into while using the OAK-D PoE Pro. I didn't like this queue configuration, as I only want the latest image data. If my understanding is wrong, please correct me.
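      To make sure we are describing the same behavior, here is a toy sketch of the two queue modes as I understand them - plain Python with a deque, not the actual DepthAI queues:

```python
from collections import deque

def push_nonblocking(q, maxsize, item):
    """Size-limited, non-blocking: drop the oldest item to make room."""
    if len(q) >= maxsize:
        q.popleft()          # oldest frame is discarded
    q.append(item)

def push_blocking(q, maxsize, item):
    """Blocking: refuse new items until the consumer makes room."""
    if len(q) >= maxsize:
        return False         # producer (device side) would stall here
    q.append(item)
    return True

# Device produces frames 1..5 while the host consumes nothing.
q_nb, q_b = deque(), deque()
for frame in range(1, 6):
    push_nonblocking(q_nb, 1, frame)
    push_blocking(q_b, 1, frame)

print(list(q_nb))  # [5] -> always the latest frame
print(list(q_b))   # [1] -> the oldest frame sits until consumed
```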

      Another question: you mentioned, "(if they are configured to blocking as well)", but I haven't found a way to configure the queue on the device side so far. My understanding is that I can only change the queue's size/behavior on the host side. Please correct me if I am wrong.

      I would much appreciate your clarification.
      Thank you

      • erik replied to this.
      • Hello,

        I currently capture color images as Still frames to obtain images ONLY when I want them. This prevents the camera (OAK-D PoE Pro) from continuously sending image data to the host PC when I don't need it. I specifically wanted it this way due to the limited bandwidth between the Ethernet switch and the host PC.

        I would like to do the same when capturing stereo images, but I don't see a Still-like method in the StereoDepth class. Is using the Script node the only way to capture stereo images ONLY when I need them?

        By the way, my understanding is that the device continuously sends image data regardless of the blocking/non-blocking queue setup on the host PC side. Please correct me if this is wrong.

        I would much appreciate your advice. Thank you.

        • erik replied to this.
        • @erik,

          Good to know that it can be done without the video encoder.

          What was the purpose of having a VideoEncoder node for Still image capture then? Does the VideoEncoder node bring any benefits to the process of capturing a Still image?

          Thank you

          • erik replied to this.
          • @erik,

            I see. Thanks for the reply.

            Quick question. I have checked a couple of code examples capturing Still images - example 1 and example 2 - and found that they compress the data with an encoder to transfer the Still image from the device to the host PC. Why does compression (stillEncoder = pipeline.create(dai.node.VideoEncoder)) have to come into play for Still image transfer?

            The video encoder seems to encode frames and send them to the host PC continuously. What I want is for the camera to do nothing until I send a signal; as soon as I do, the camera should capture a Still image and send it to the host PC. Do I need a VideoEncoder for this?
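            For scale, here is my back-of-the-envelope reasoning on why the examples might bother with an encoder at all - note the 10:1 MJPEG ratio below is only an assumption for illustration; the real ratio depends on scene content and quality settings:

```python
# Raw 4K BGR frame vs an (assumed) MJPEG-compressed one
width, height, bytes_per_pixel = 3840, 2160, 3
raw_bytes = width * height * bytes_per_pixel   # 24,883,200 bytes (~23.7 MiB) per frame
assumed_ratio = 10                             # illustrative compression ratio, not measured
encoded_bytes = raw_bytes // assumed_ratio     # ~2.5 MB per frame under that assumption
print(raw_bytes, encoded_bytes)
```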

            Thank you

            • erik replied to this.
            • Erik,

              Thanks for the quick reply. I will try the still event first, as it seems the simplest and satisfies my needs.

              For the encoded stream (MJPEG), does it use a lossless compression algorithm, or do I lose some data in the process?

              Thank you

              • erik replied to this.
              • I have multiple OAK-D PoE Pro cameras and want to capture image frames only when I want them.

                This is my understanding of how the cameras transfer image data to the host PC (please correct me if I am wrong or missing anything):
                => Say I have a simple pipeline sending 4K color images. Whenever the camera is booted up with the pipeline, it streams data continuously to the queue on the host PC. What I can decide is how that host-side queue behaves. For example,
                if I set the queue to size 1 and non-blocking, the device keeps refilling it, so I can always get "the latest" image frame from the device.
                If I set it to blocking, the device fills the queue only when the image data in it has been consumed.
                Either way, blocking or non-blocking, the device keeps sending image data to the queue on the host PC.

                My issue:
                I don't have enough bandwidth when the cameras keep sending image data continuously. My program crashes because, at some point, one camera cannot finish its data transfer within the pre-defined timing window in the FW, so the watchdog kicks in and reboots the device, which drops the connection. I am hitting this network bottleneck when I use multiple OAK-D PoE Pro cameras at the same time.

                Questions:

                1. Is there a way to control whether the device sends image data at all, instead of it streaming continuously to the host?
                2. Is using a Script node or Standalone mode a good approach? I don't fully understand how they work and cannot find much information.
                3. What about using compression? Compress the image data, transfer it, and decompress it on the host PC?

                I would much appreciate your advice. Thanks!

                • erik replied to this.
                • erik Thanks for taking the time to try it. Here's my network setup.

                  • OAK-D PoE Pro is connected to an Ethernet switch
                  • Host PC (my laptop) is connected to the Ethernet switch
                  • The Ethernet switch is connected to the main network hub

                  The issue where the camera becomes unrecognizable happened randomly and rarely (2 times in 6 hours). I wasn't able to reproduce it on purpose either.

                  FYI. This is the script I used.

                  import cv2
                  import depthai as dai
                  from test_utils import *
                  
                  # Create pipeline
                  pipeline = dai.Pipeline()
                  
                  # Define source and output
                  camRgb = pipeline.create(dai.node.ColorCamera)
                  camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)
                  camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_4_K)  # THE_1080_P, THE_4_K, THE_12_MP
                  
                  xoutRgb = pipeline.create(dai.node.XLinkOut)
                  xoutRgb.setStreamName("rgb")
                  
                  # Linking
                  camRgb.isp.link(xoutRgb.input)
                  
                  print(camRgb.getResolution())
                  w, h = camRgb.getResolutionSize()
                  print(w, h)
                  # print(camRgb.getIspSize())
                  print("booting")
                  # Connect to device and start pipeline
                  with dai.Device(pipeline) as device:
                      print("up")
                  
                      # Output queue will be used to get the rgb frames from the output defined above
                      qRgb = device.getOutputQueue(name="rgb", maxSize=1, blocking=False)  # depthai.DataOutputQueue
                  
                      while True:
                          
                          # wait_beam_trigger()
                          # imRgb = qRgb.get()  # blocking call, will wait until a new data has arrived
                          imRgb = qRgb.tryGet()  # non-blocking call  # depthai.ImgFrame
                          if imRgb is not None:
                              im = imRgb.getCvFrame()  # numpy.ndarray
                              down_size = (w//8, h//8)
                              im_down = cv2.resize(im, down_size, interpolation=cv2.INTER_LINEAR)
                              print(im.shape, im_down.shape)
                              cv2.imshow("rgb", im)
                              cv2.imshow("rgb_down", im_down)
                              cv2.waitKey()
                              cv2.destroyAllWindows()  # cv2.destroyWindow("rgb")
                • Erik,

                  Yesterday, when I had the issues, I expected the watchdog to kick in too, but it did not happen even though I waited 10-20 mins each time. So, I had to disconnect the cable and connect it back to make the camera recognizable again.

                  Thanks for the advice. I will try increasing the timeout / adding a delay next time it happens and see how it behaves.

                  Thanks, Geonsoo

                  • erik replied to this.
                  • OS: Ubuntu 20.04
                    Camera: OAK-D-PoE Pro
                    Depthai version: 2.17.4.0
                    FW version: 0.0.20

                    I am experiencing an issue where the camera sometimes becomes unrecognizable after I run my script N times. More specifically, depthai.Device.getAllAvailableDevices() returns 0 devices, meaning no OAK camera is found even though it is physically connected and has been used N times already. Whenever this happens, I just disconnect the cable and plug it back in, and the issue is resolved. Both the camera and the host PC are connected to an Ethernet switch, and they are the only devices connected to the switch.

                    This has happened twice in the last 5 hours, so I don't see the issue often. As it happens randomly, it's not reproducible on purpose, which makes it hard to debug. Do you have any idea what is going on?

                    One thing is that I use Ctrl+backslash (SIGQUIT / core dump) in the terminal to terminate my program. Does that make an unclean exit and affect the FW routine?

                    I much appreciate your feedback. Thank you.

                    • erik replied to this.
                    • Let's assume that I have an 800 Mbps (1000 Mbps minus protocol overhead) Ethernet connection to my OAK-D PoE Pro. I would like to calculate the required bandwidth for each image message as it travels from the device to the host PC.

                      Case 1: streaming LEFT mono images (1280x800)
                      => Required bandwidth is ~8.2 Mbit per frame => (1280 x 800 (w x h) * 1 (byte per pixel) * 8 (bits/byte) * 1 (number of frames) = 8,192,000 bits)

                      Case 2: streaming 4K color images (3840x2160)
                      => Required bandwidth is ~199 Mbit per frame => (3840 x 2160 (w x h) * 3 (bytes per pixel) * 8 (bits/byte) * 1 (number of frames) = 199,065,600 bits)

                      Questions:

                      1. Is the calculation correct? If not, which factors am I missing?
                      2. How can I calculate the achievable FPS from this info? What else do I need to know? CPU speed?
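                      Here is my attempt at the calculation in code, including a naive FPS ceiling for question 2 - it ignores everything except the raw link rate (protocol overhead, ISP limits, host-side processing), so real FPS would be lower:

```python
LINK_BPS = 800_000_000  # assumed usable link rate: 800 Mbps

def frame_bits(width, height, bytes_per_pixel):
    """Bits on the wire for one uncompressed frame."""
    return width * height * bytes_per_pixel * 8

def max_fps(width, height, bytes_per_pixel, link_bps=LINK_BPS):
    """Upper bound on FPS if the link is the only bottleneck."""
    return link_bps / frame_bits(width, height, bytes_per_pixel)

print(frame_bits(1280, 800, 1), max_fps(1280, 800, 1))    # 8192000 bits, ~97.7 fps ceiling
print(frame_bits(3840, 2160, 3), max_fps(3840, 2160, 3))  # 199065600 bits, ~4.0 fps ceiling
```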
                      • erik replied to this.
                      • Ah... I missed that. Sorry about that. Thank you for the reply!

                      • Will do. Thank you so much for your feedback.

                        Best regards,
                        Geonsoo

                      • erik I need to buy a "USB-C to Ethernet" adapter, so it might take a couple of days. How about connecting the host PC to the Ethernet switch? Does connecting the host PC directly to the router make a difference compared to connecting it to the switch?

                        Edit: ordered this. Limited-time deal: USB C to Ethernet Adapter, uni RJ45 to USB C Thunderbolt 3/Type-C Gigabit Ethernet LAN Network Adapter, Compatible for MacBook Pro 2020/2019/2018/2017, MacBook Air, Dell XPS and More - Gray https://a.co/d/3Drg0Tx

                        • erik replied to this.
                        • @erik Do you mean using an Ethernet cable between the host PC and the router?

                          Currently, the PoE cameras are connected to the Ethernet switch, and the switch is connected to the router.

                          • erik replied to this.
                          • Hey all,

                            So far, I have been having a painful time with the OAK-D PoE and OAK-D PoE Pro cameras. I cannot even capture a frame at 12MP resolution. When I run the script to get a 12MP image, the program simply dies with this message: "Ping was missed, closing the device connection". I thought it could be a firmware issue, but I have to consider other factors and check whether they make a difference.

                            I would like to check with you what the recommended Wi-Fi speed is. I just did the internet speed test that Google provides (https://www.google.com/search?channel=fs&client=ubuntu&q=internet+speed+test) and got 58.7 Mbps download and 31.8 Mbps upload. Is that good enough to utilize the full features of the PoE cameras?

                            I am using Ubuntu 20.04 LTS - maybe an insufficient buffer size causes the connection drops?

                            I really want to make these cameras work for my project. Would much appreciate your feedback.

                            • erik replied to this.
                            • Hey all,

                              I am trying to run video streams from multiple cameras and keep them going for a long time. Say I have three cameras and I want them to stream video for more than 24 hours non-stop. I have tried the experiment code for multiple devices at https://github.com/luxonis/depthai-experiments/tree/master/gen2-multiple-devices

                              But when running the main.py in that experiment, I always get a connection drop. I elaborated on the details in this GitHub issue: https://github.com/luxonis/depthai-core/issues/558

                              I know there can be numerous possible causes for these connection drops. I just want to ask how you all deal with this issue if you have encountered it before. To be clear, the bootloader and SDK versions are up to date, as stated in the GitHub issue above. I doubt this is a power issue, as I have tried multiple different Ethernet switches and injectors. I have also tried different power sources - at the office and at home.

                              I would much appreciate your feedback.

                              • erik replied to this.