lovro

  • 20 hours ago
  • Joined Dec 3, 2024
  • 1 best answer
  • Looks like the problem is with Slicer itself. Since the code works with Python 3.9 and with PythonSlicer, the issue is likely in how Slicer sets up the environment for the scripted module. One workaround is to use subprocess to run the DepthAI code outside of Slicer and see if that helps.
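    A minimal sketch of that workaround (the interpreter path and script name are placeholders, adjust them to your setup):

    import os
    import subprocess

    # Hypothetical paths: point these at your system Python 3.9 and your DepthAI script.
    PYTHON_EXE = "/usr/bin/python3.9"
    SCRIPT = "/path/to/depthai_capture.py"

    # Strip Slicer-specific variables so the child process starts from a clean environment.
    env = {k: v for k, v in os.environ.items()
           if k not in ("PYTHONPATH", "PYTHONHOME", "LD_LIBRARY_PATH")}

    result = subprocess.run([PYTHON_EXE, SCRIPT], capture_output=True, text=True, env=env)
    print(result.stdout)
    if result.returncode != 0:
        print("DepthAI script failed:", result.stderr)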

  • Hi,

    thanks for the update! Since the standalone Python environment works fine, I suggest trying to sync the version of numpy in the 3D Slicer environment to match the standalone setup.

    If that doesn’t resolve the issue, consider installing any other packages that are present in the standalone environment but missing from the Slicer setup. Differences in package versions and dependencies between the two environments might be causing the conflict.
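    For reference, a quick way to do that from Slicer's Python console is a sketch like this (the pinned version is only an example, use whatever your standalone environment reports):

    import numpy
    print(numpy.__version__)  # version currently loaded in Slicer

    # Reinstall numpy pinned to the version from the working standalone setup.
    # 1.26.4 is a placeholder: use the version `pip show numpy` reports there.
    import slicer
    slicer.util.pip_install("numpy==1.26.4")

    Restarting Slicer afterwards makes sure the new version is actually picked up.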

    Let me know how it goes, and we can explore further if needed!

  • Labu
    You can use either OAK-D SR or OAK-D ToF for your setup:

    • OAK-D SR: Works great for short-range depth sensing and 3D blob detection. Its global shutter helps reduce motion blur, so it should be able to handle your conveyor speed.
    • OAK-D ToF: Uses a Time-of-Flight (ToF) sensor to measure depth more accurately. It also performs better in low light and gives more precise depth on individual objects.
  • Hi,

    have you tried running the code with Python 3.9 outside of the 3D Slicer environment? It seems like there might be a dependency issue or a conflict between 3D Slicer's Python environment and DepthAI's libraries.
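    One way to spot such a conflict is to run the same version check in both environments and compare the output, for example:

    # Run this both in Slicer's Python console and with your standalone python3.9,
    # then diff the two outputs line by line.
    import sys
    from importlib.metadata import version, PackageNotFoundError

    print(sys.executable, sys.version)
    for pkg in ("depthai", "numpy", "opencv-python"):  # extend as needed
        try:
            print(pkg, version(pkg))
        except PackageNotFoundError:
            print(pkg, "not installed")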

  • abhanupr
    Hi,
    Since your left and right cameras are mono cameras, the concatenation should handle grayscale images instead of RGB. To fix this, you need to update two things:

    1. In your PyTorch script, change the channels to 1 for grayscale inputs:
    X = torch.ones((1, 1, 400, 640), dtype=torch.float16)
    2. In your pipeline, set the cast node to output RAW8:
    cast.setOutputFrameType(dai.ImgFrame.Type.RAW8)
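    Putting both changes together, the PyTorch side could look roughly like this (a sketch; the module name and the ONNX export call are assumptions based on the usual DepthAI model-conversion flow):

    import torch
    import torch.nn as nn

    class CatMono(nn.Module):
        # Concatenate the two grayscale frames side by side along the width axis.
        def forward(self, img1, img2):
            return torch.cat((img1, img2), 3)

    # One channel per input because the left/right streams are mono (grayscale).
    X = torch.ones((1, 1, 400, 640), dtype=torch.float16)
    torch.onnx.export(
        CatMono(),
        (X, X),
        "concat_mono.onnx",
        input_names=["img1", "img2"],
        output_names=["output"],
        opset_version=12,
    )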

    Hope this fixes the issue.

    • abhanupr
      Hi,
      I think the issue is with the input shape and the axis you’re concatenating on (in pytorch_concat.py):
      - The input shape for the dummy tensor should be (1, 3, 400, 640) (batch size, channels, height, width):
      X = torch.ones((1, 3, 400, 640), dtype=torch.float16)
      - You’re concatenating on the wrong axis. Use axis 3 (width) instead of axis 1 (channels):
      return torch.cat((img1, img2), 3)
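      To see why the axis matters, a quick shape check (plain PyTorch, nothing DepthAI-specific):

      import torch

      a = torch.ones((1, 3, 400, 640))
      b = torch.ones((1, 3, 400, 640))

      print(torch.cat((a, b), 1).shape)  # torch.Size([1, 6, 400, 640]): stacked channels
      print(torch.cat((a, b), 3).shape)  # torch.Size([1, 3, 400, 1280]): frames side by side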

      This should fix the problem!

      • Hi,

        For the first camera (MxId 14442C10A1E2F2D600):

        • Make sure you’re using the latest version of DepthAI.
        • Try the example script depthai-python/examples/ColorCamera/rgb_preview.py.
        • Use the DepthAI Viewer to check if the RGB stream works.

        For the second camera (XLINK device not found):

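        As a quick first check, you can list every device DepthAI can see and confirm whether the second MxId shows up at all. A minimal sketch:

        import depthai as dai

        # Prints the MxId and connection state of every visible OAK device.
        for info in dai.Device.getAllAvailableDevices():
            print(info.getMxId(), info.state)

        If the device doesn't appear here, the problem is on the connection side (cable, port, or power) rather than in your pipeline code.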
        Let us know how it goes!

        • lerp

          Hey!

          Yes, those look good. For the second part, I think the issue with the display hub is that it’s not designed to provide stable power for active cables or high-power devices like the camera.

          Glad the connection order tweak worked!

        • Hey there,

          Since you're using the OAK-D-PRO-PoE, I just wanted to check if you're powering it with a PoE switch or PoE injector. These devices are essential for powering PoE cameras like yours. Without one, the camera might not work reliably.

          If you're not using a PoE switch or injector, take a look at this guide for more details: PoE Deployment Guide

          • Hey there,

            It sounds like the 15m cable is struggling with either power delivery or signal integrity, even though it’s active. I’d suggest trying a powered USB hub between the cable and your device. It can help give the device the extra power it needs to stay stable.