Hello,
I am trying to convert a depth map to a point cloud in C++, following this Python example: https://docs.luxonis.com/en/latest/pages/tutorials/device-pointcloud/ and this code: https://github.com/luxonis/depthai-experiments/tree/master/gen2-pointcloud
The Python example (https://github.com/luxonis/depthai-experiments/blob/master/gen2-pointcloud/device-pointcloud/main.py) contains these lines:
# Create xyz data and send it to the device - to the pointcloud generation model (NeuralNetwork node)
xyz = create_xyz(resolution[0], resolution[1], np.array(M_right).reshape(3,3))
matrix = np.array([xyz], dtype=np.float16).view(np.int8)
buff = dai.Buffer()
buff.setData(matrix)
device.getInputQueue("xyz_in").send(buff)
The shape of the matrix variable is 1 x 400 x 640 x 6. I think it is possible to create a similar structure in C++, but the dai::Buffer::setData function only takes a std::vector<std::uint8_t>. I am not an NN specialist, but I suppose this means the model cannot be used directly from C++? Am I right? Is there any solution to achieve this?
If someone could give me some advice on how to compute a point cloud from depth on-device with an NN model, that would be great!
Thanks,
Fred