• HardwareCommunity
  • Input on generating point clouds using 4 synced cameras (OAK-D-SR-POEs?)

Hello, I'm looking for hardware input for a research project I am working on right now. We are looking to create point clouds of medium-sized 3D-printed objects in an industrial environment to compare against other runs and the input model. I have been researching using DepthAI to try this. I've made point clouds and streamed recordings of point clouds off an OAK-D Pro, so I am reasonably confident in how that part of the software development is going. I've also seen the experiments on syncing two cameras together and generating a composite point cloud - but only two cameras, not four. So what I'm curious about is:

Is there any technical reason I cannot create a single point cloud from four calibrated camera streams?

Is there any example showing the fidelity of a point cloud created using the OAK-D-SR-POE at a close distance?

With the POE model having a max of 1 Gbps data throughput, should I be concerned about saturating that by streaming a point cloud to something like Rerun and saving it as a .ply file intermittently?

What level of host computer hardware would you suggest for something like this? I have been happily working with one OAK-D Pro on a Jetson Nano and my MacBook Pro, but that's just one camera without a lot of host-side computing. Should I be considering something at the consumer level, say an i7 / 64 GB RAM / 4070 (or 4080), or something more enterprise like an Nvidia A5500 (or better), or even something in their Jetson line?

Thanks for your input!

  • erik replied to this.

    Hi WestwardWinds ,
    It should work with 4 cameras the same as for only 2 - you would need to first calibrate all 4 cameras together, and then do a small adjustment when registering the pointclouds (ICP). Just note that multiple ToF sensors on the SR-POEs would interfere with each other, so you'd need to use stereo instead for the depth.

    800P depth at 30 FPS would be about 500 Mbps, while a pointcloud would be about 3 times more. So I would stream depth, and convert that to a pointcloud on the host computer.
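    A quick back-of-the-envelope check of those numbers (a sketch only: it assumes an uncompressed 1280x800 16-bit depth stream, and an XYZ pointcloud carrying 3 values per pixel at the same bit depth):

    ```python
    def stream_bandwidth_mbps(width, height, bytes_per_pixel, fps):
        """Raw (uncompressed) bandwidth of a video stream, in megabits per second."""
        return width * height * bytes_per_pixel * fps * 8 / 1e6

    # 800P (1280x800) 16-bit depth at 30 FPS - roughly the ~500 Mbps figure quoted above.
    depth_mbps = stream_bandwidth_mbps(1280, 800, 2, 30)

    # An XYZ pointcloud has 3 values per pixel instead of 1, so ~3x the depth stream,
    # which is why streaming depth and converting on the host saves PoE bandwidth.
    pointcloud_mbps = depth_mbps * 3

    print(f"depth: {depth_mbps:.0f} Mbps, pointcloud: {pointcloud_mbps:.0f} Mbps")
    # -> depth: 492 Mbps, pointcloud: 1475 Mbps
    ```

    By this estimate, one depth stream fits comfortably within a 1 Gbps link, while a raw pointcloud stream alone would exceed it.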

    Regarding host computer, I would first try out the resource consumption before buying an expensive rig - I don't think that it would be required though, but it really depends on other computation you are planning.
    Thoughts?

      14 days later

      erik Thanks Erik, that is very helpful.

      I am planning on doing the point cloud creation on the host computer, so it's good to know that the depth stream per camera would only be around 500 Mbps; that gives me some room to add additional features down the line.

      Per the host computer specs, normally I would totally go with the "wait and see" approach, but I'm in the odd position that this test platform is actually going to be set up remotely on a research site on my behalf. So I was just hoping to have a starting point to give them for what we may need for the host computer.

      To give a little more information, we are looking at creating high-fidelity point clouds of concrete 3D prints while they are being created, and saving still point clouds at time intervals for later inspection. Eventually we want to implement a trained neural network to flag issues we can classify. I've been making small point clouds using 2x OAK-D-SRs on my Mac just fine, but I wasn't sure how that scales up to 4x POEs + eventual NN implementation (although this would be running on the cameras).

      While I'm asking questions: since I have been working with the OAK-D SRs to create point clouds, moving from the OAK-D Pro I can definitely tell a difference in the quality of the depth map, and therefore the point cloud, being generated. I know that the SRs are only passive depth, but I also saw in the docs that the color cameras all have worse depth performance due to the filter. How much of an effect on quality does the color filter have on an active setup? Do you have any example images you could share comparing active/mono, active/color, and passive/color? I'm wondering if I should go ahead and preorder the mono OAK-D SR-POEs so I can get a better depth map, or if the decrease in quality is negligible enough (as long as the IR is on) to wait so we can have the added benefit of the color cameras.

      Thanks for all your help!

      • erik replied to this.

        Hi WestwardWinds ,
        If you have the cameras calibrated with respect to each other (translation/rotation known), then you wouldn't need to run ICP algos for every pointcloud - you'd basically just do a matrix multiplication to align the multiple pointclouds, which should be quite fast (depending on the number of points, of course).
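        The matrix multiplication mentioned above can be sketched like this (a minimal example with NumPy; the rotation and translation values are made-up placeholders standing in for the extrinsics you'd get from calibration):

        ```python
        import numpy as np

        # Hypothetical extrinsics of camera B relative to camera A (from calibration),
        # packed into a 4x4 homogeneous transform: rotation R and translation t.
        T_ab = np.eye(4)
        T_ab[:3, :3] = np.array([[0.0, -1.0, 0.0],
                                 [1.0,  0.0, 0.0],
                                 [0.0,  0.0, 1.0]])  # example: 90 deg rotation about Z
        T_ab[:3, 3] = [0.25, 0.0, 0.0]               # example: 25 cm offset along X

        def align_points(points, T):
            """Map an (N, 3) pointcloud into the reference frame via a 4x4 transform."""
            homo = np.hstack([points, np.ones((len(points), 1))])  # -> (N, 4)
            return (homo @ T.T)[:, :3]

        # Merge clouds from two cameras, with camera A as the reference frame;
        # with 4 calibrated cameras you'd apply one known transform per camera.
        cloud_a = np.array([[0.0, 0.0, 1.0]])
        cloud_b = np.array([[0.0, 0.0, 1.0]])
        merged = np.vstack([cloud_a, align_points(cloud_b, T_ab)])
        ```

        Since this is a single fixed transform per camera, it's just one matrix product per frame per cloud, which is why it scales so much better than running ICP on every frame.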

        We haven't done any tests, but it's mostly down to quantum efficiency - color cams filter out quite a lot of light, which means SNR is worse. But if you have sufficient lighting, this would be negligible. Regarding active/passive - it really depends on the surfaces. E.g. outside, surfaces in nature usually have good texture (a lot of features), so the dot projector isn't very useful.