RyanLee

  • 4 hours ago
  • Joined Nov 9, 2021
  • Hi,

    The depth performance of one of our cameras (OAK-D Pro W) seems abnormal. Could you give us any advice or a solution to improve its depth performance? I attached two videos below: one of them looks normal and the other abnormal.

    1. Normal (seems working fine)
      Link

    2. Abnormal (seems some issue with depth performance)
      Link

    Best regards,
    Ryan.

    • Hi,

      We are testing the OAK-D-Pro W PoE (IMX378) with your DepthAI program,
      but I cannot set the resolution below. Could you help me handle this?
      I also attached a video of our test.

      <setting resolution>
      RGB: 12MP@15fps
      Mono: 400P@15fps

      <test video>
      google drive link

      Best regards,
      Ryan.

      • @JanCuhel Thank you so much for the great and clear explanation. It makes sense to me now.

        Merry Christmas!!

        Best regards,
        Ryan.

      • @jakaskerl

        Thank you for your feedback. I still have a question about it.

        !yolo train model=yolov8n.pt data=VOC.yaml epochs=2 imgsz=640 batch=32 device=0

        As you can see, there is imgsz=640, which means the input image is 640x640. But in the tools (https://tools.luxonis.com/), the input can be set to 640x352 for a 16:9 image such as 1920x1080.

        So, my first question is: how do you define the 352 height?
        Second, even though the height is reduced from 640, is there no performance degradation?
        Third, when the height is reduced, do you use cropping or resizing?

        I am not sure this is connected to your device, so feel free to answer as much as you can. Thanks,

        Best regards,
        Ryan.

      • Hi,

        I am working with YOLOv8 these days. I saw that the input size is 640x352 and that you chose it. Could you explain that choice for me, i.e., why you chose 640x352? Also, we train our model with 512x512; do I need to change it to 640x352?

        <what i checked on your documents>

        https://docs.luxonis.com/software/ai-inference/integrations/yolo

        Open the tools in a browser of your choice. Then upload the downloaded yolov6n.pt weights and set the Input image shape to 640 352 (we choose this input image shape as the aspect ratio is close to 16:9 and throughput and latency are still decent). The rest of the options are left as they are.

        Best regards,

        Ryan.

        • Hi @RyanLee,

          YOLO models are robust to input size changes due to their fully convolutional design. This allows the model to process input images of various sizes as long as the dimensions are divisible by the stride of the network's layers (typically powers of 2, e.g., 32 for most common YOLO versions). So even though a YOLO-based model was trained with, let's say, 640x640 input image shape, you can export it using 640x352.

          Regarding the performance degradation, I have personally never measured it, but I have never noticed any big performance gap.

          The reason why we sometimes reduce the height during export is that images with an aspect ratio of 16:9 are more realistic for our cameras than 1:1. Furthermore, when switching the input image shape from 640x640 to 640x352, the model latency improves because the model has fewer pixels to process, which is crucial in edge AI.
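          To make the 352 concrete (this is just the arithmetic behind it, not an official formula): for a width of 640, the exact 16:9 height would be 360, and rounding down to the nearest multiple of the 32-pixel stride gives 352. A minimal sketch, with a helper name of my own choosing:

```python
def stride_aligned_height(width: int, aspect_w: int, aspect_h: int, stride: int = 32) -> int:
    """Largest height not exceeding the exact aspect-ratio height
    that is still divisible by the network stride."""
    exact = width * aspect_h // aspect_w   # 640 * 9 // 16 = 360
    return exact - (exact % stride)        # 360 -> 352

print(stride_aligned_height(640, 16, 9))   # 352
```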

          I hope this addresses all your questions! Please feel free to reach out if anything remains unclear or if you have additional queries.

          Wishing you a Merry Christmas and a wonderful holiday season!

          Kind regards,
          Jan

        • jakaskerl

          1. Upgrade to bootloader 2.28

          ==> It seems to be 0.0.28, right?

          Anyway, I updated the bootloader to the latest version, but the IP-conflict phenomenon is the same.

          2. Please check if the PoE power is being input safely.

          ==> I am using the adapter sold by your company for PoE power:

          Ubiquiti 802.3af PoE injector, 15 W, 48 VDC, 0.32 A.

          3. I know that the same symptom occurs not only in the Luxonis program but also in the code you created. Please check this part again.

          ==> The symptoms are the same in the DepthAI Viewer for Windows, the viewer launched from Python, and the Java program we created.

          Best regards,

          Ryan.

          • Hi,

            We prepared some answers for you. Please let me know if you need anything more from us.

            1. Camera recognition is random, sometimes one is caught and sometimes two are caught. So it's not like only one is having problems. Is it possible for multiple recognitions to cause conflicts?

            2. I'll give you the relevant information.

            As you can see: 192.168.9.101 : 0.0.22, 192.168.0.14 : 0.0.28, 192.168.0.2 : 0.0.28.

            3. Currently, only one development PC and three cameras are connected to the router.

            Best regards,

            Ryan.

            • Hi,

              What we are working on is capturing images by streaming three cameras and manually or automatically focusing on the object. We do not use the AI model supported by the camera for this at the moment.

              When we run the Windows program you provide (DepthAI Viewer), three cameras are detected, and when we select one of the three, it runs; but one camera randomly disappears and reconnects, so it works unstably.

              When we tested with the provided Java source, three cameras connected and then, some time later, one disconnected; the connection seems unstable.

              Is there a way to set the IP address value, or is there a separate configuration program? Is there a clear way to solve this?

              When coding in Java, three units are detected.

              Only one unit disconnects.

              Best regards,

              Ryan.

              • Hi

                We would like to obtain the x, y, z values when the camera looks downwards, but there is an issue. Please help us handle it.

                Here is the function of our development with your PoE model:

                1. Activate the system for 1 minute every 2 minutes
                2. Save image: save only the left image
                3. Extract the x, y, depth (z) values for points of interest (Point_base.dat, Point_wl.dat) and save them as text

                The issue we have met:

                1. When the camera is looking straight ahead, the x, y, depth values are accurate; no problem.
                2. When the camera is looking diagonally downward, the depth (z) is accurate, but x, y errors occur.
                3. In my personal opinion, the calibration information (left_camera_matrix, right_camera_matrix, rotation_matrix, translation_vector) should be utilized.
                4. However, it is not clear whether the calibration information is utilized in this code; we understand that the calibration information is stored inside the camera.

                So I think I need your help to get correct x, y information even when the camera is looking in a downward direction.

                I have also attached our code so far, for your information.

                source code
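                For reference, one way to sanity-check the x, y drift when the camera is tilted (this is my own sketch, not the calibration data stored on the camera): if the mounting tilt angle is known, the camera-frame point can be rotated back into a level, straight-ahead frame before reading off x and y. Assuming the usual camera convention (x right, y down, z forward) and a hypothetical helper name:

```python
import numpy as np

def camera_to_level_frame(point_cam, tilt_deg):
    """Rotate a camera-frame point (x right, y down, z forward) about the
    camera's x-axis by the downward tilt angle, yielding a level
    ("straight-ahead") frame where x, y can be read off as before."""
    t = np.radians(tilt_deg)
    rot_x = np.array([
        [1.0, 0.0,        0.0      ],
        [0.0, np.cos(t), -np.sin(t)],
        [0.0, np.sin(t),  np.cos(t)],
    ])
    return rot_x @ np.asarray(point_cam, dtype=float)

# A point 2 m straight ahead of a camera tilted 30 degrees down:
print(camera_to_level_frame([0.0, 0.0, 2.0], 30.0))
```

Whether the rotation sign is + or - depends on which way the camera is tilted, so it is worth verifying against a point at a known position.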

                Best regards,

                Ryan.

                • Hi,

                  I tried git restore, and also removing depthai-python and cloning it again, but that has not solved the problem. Could you help me solve this issue?

                  Best regards,

                  Ryan.

                • Hi

                  I am working with the OAK-D CM4 PoE. I have met the issue below with several of the OAK-D CM4 PoE units we have. Could you please check it for us?


                  i@luxonis:~/depthai-python/examples $ python3 install_requirements.py
                  pip 24.3.1 from /home/pi/.local/lib/python3.9/site-packages/pip (python 3.9)
                  Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
                  Requirement already satisfied: pip in /home/pi/.local/lib/python3.9/site-packages (24.3.1)
                  Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
                  Requirement already satisfied: pyyaml in /home/pi/.local/lib/python3.9/site-packages (6.0)
                  Collecting pyyaml
                    Downloading https://www.piwheels.org/simple/pyyaml/PyYAML-6.0.2-cp39-cp39-linux_armv7l.whl (45 kB)
                  Requirement already satisfied: requests in /home/pi/.local/lib/python3.9/site-packages (2.26.0)
                  Collecting requests
                    Downloading https://www.piwheels.org/simple/requests/requests-2.32.3-py3-none-any.whl (64 kB)
                  Requirement already satisfied: numpy<3.0 in /home/pi/.local/lib/python3.9/site-packages (1.22.3)
                  Collecting numpy<3.0
                    Downloading https://www.piwheels.org/simple/numpy/numpy-2.0.2-cp39-cp39-linux_armv7l.whl (5.8 MB)
                       ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.8/5.8 MB 109.7 kB/s eta 0:00:00
                  Collecting opencv-python!=4.5.4.58
                    Downloading https://www.piwheels.org/simple/opencv-python/opencv_python-4.6.0.66-cp39-cp39-linux_armv7l.whl (11.3 MB)
                       ━━━━━━━━━━━╸━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.3/11.3 MB 109.2 kB/s eta 0:01:14
                  ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
                      opencv-python!=4.5.4.58 from https://www.piwheels.org/simple/opencv-python/opencv_python-4.6.0.66-cp39-cp39-linux_armv7l.whl#sha256=c1360e46e5ebd47a92e00c1f75c7d293d6ffd00d7f9ff06666f9af05eff2094f:
                          Expected sha256 c1360e46e5ebd47a92e00c1f75c7d293d6ffd00d7f9ff06666f9af05eff2094f
                               Got        e0bac2b7656975184c92f22f3b2eef8f857b21b3c9dc3d981528ea2136e76202
                  Traceback (most recent call last):
                    File "/home/pi/depthai-python/examples/install_requirements.py", line 137, in <module>
                      subprocess.check_call(python_dependencies_cmd)
                    File "/usr/lib/python3.9/subprocess.py", line 373, in check_call
                      raise CalledProcessError(retcode, cmd)
                  subprocess.CalledProcessError: Command '['/usr/bin/python3', '-m', 'pip', 'install', '-U', '--prefer-binary', '--no-cache-dir', '--user', 'pyyaml', 'requests',
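                  For what it's worth, a hash mismatch like the one above usually means the wheel arrived corrupted (the interrupted 3.3/11.3 MB progress bar is suggestive of a broken download); the file's checksum can be compared by hand against the "Expected sha256" value pip printed. A small, generic sketch; the file path is illustrative:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hex SHA-256 of a file, read in chunks so large wheels fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the "Expected sha256" value printed by pip, e.g.:
# sha256_of("opencv_python-4.6.0.66-cp39-cp39-linux_armv7l.whl")
```

                  If the hashes differ, simply re-downloading (or clearing the pip cache) is usually enough.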


                  Best regards,

                  Ryan.

                  • Hi

                    We are using the OAK-D W now, and it seems some portion of the RGB camera is out of focus. Could you help us find a solution to overcome it?

                    I added the pictures:

                    1. Left side is out of focus.

                    2. Right side is out of focus.

                    3. Right side is out of focus; you can see the car number plate.

                    4. Right side is out of focus; you can see the big apartments.

                    5. Right side is out of focus compared to the left side, I think.

                    Best regards,

                    Ryan.

                    • Hi

                      How are you?

                      I am using the OAK-1 Auto Focus now. I have met the two issues below.

                      1. A moisture problem with two OAK-1 AF cameras out of 100 pcs.

                      2. The error below with an OAK-1 AF camera:

                        ---

                        Communication exception - possible device error/misconfiguration. Original message 'Couldn't read data from stream: 'preview' (X_LINK_ERROR)'
                        Failed to find device after booting, error message: X_LINK_DEVICE_NOT_FOUND
                        Failed to find device (1.5), error message: X_LINK_DEVICE_NOT_FOUND
                        [warning] skipping X_LINK_UNBOOTED device having name "1.5"
                        usb 1-5: device descriptor read/all, error -71


                      Please help me handle these issues wisely.

                      Best regards,

                      Ryan.

                      • Luxonis-Adam Hi Adam, thank you for your support. We already checked the link you shared. Based on our tests and your explanation, it seems this is due to a limitation of the PoE interface. Am I right? Then what about the USB model? If we use a USB model, could we reduce the latency?

                        Best regards,

                        Ryan.

                      • Hi

                        We have two questions.

                        1. First one:

                        When we launch ROS with the OAK-D Pro W (OV9782), we meet an issue like the one below. We have two OAK-D Pro W (OV9782) units, and we can see the same issue on both. Could you help us handle this issue?

                        2. Second one:

                        We are using the ROS (RViz) function with the OAK-D Pro W (OV9782) now, but there seems to be a 2-second delay between the OAK-D Pro W and ROS (RViz). Do you think that is typical? Or could we reduce the delay to less than 2 seconds?

                        Best regards,

                        Ryan.