Is there a standard function in DepthAI to convert a pixel point + depth value to a 3D point in mm?
Like rs2_deproject_pixel_to_point with Intel realsense.
Thanks in advance,
Pim
Hi Pim
The SpatialLocationCalculator node does what you want.
Thanks,
Jaka
Hi jakaskerl,
I am looking for a solution to get the positions afterwards, as I need to get multiple 3D points from locations I don't know beforehand. So I would like to convert pixel locations and depth information to an XYZ mm position.
Pim
Hi Pim
Then I suggest you calculate the spatials on the host, since you have more maneuverability in how you process your data. It doesn't run on the device and will probably be slower, but it achieves what you want. Of course, you would have to edit the calc.py file to avoid calculating averages and limiting yourself to only a small ROI.
Thanks,
Jaka
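For reference, a minimal host-side sketch of the math in calc.py, reduced to a single pixel with no ROI averaging; the frame size and HFOV here are assumed values, not numbers read from a device:

```python
import math

def pixel_to_xyz(u, v, depth_mm, width, height, hfov_rad):
    """Convert a pixel (u, v) plus a depth value (mm) to an XYZ point in mm.

    Same math as calc.py's calc_angle: the focal length in pixels is
    width / (2 * tan(HFOV / 2)), and it is used for both axes because
    the pixels are square (fx == fy).
    """
    dx = u - width / 2.0
    dy = v - height / 2.0
    focal_px = (width / 2.0) / math.tan(hfov_rad / 2.0)
    x = depth_mm * dx / focal_px
    y = depth_mm * dy / focal_px
    z = depth_mm
    return x, y, z

# Example with assumed values: 640x400 mono frame, 71.9 deg spec HFOV
x, y, z = pixel_to_xyz(400, 200, 1000.0, 640, 400, math.radians(71.9))
```

A pixel at the image center maps to (0, 0, depth), as expected for a pinhole model with the principal point at the center.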
Ah, so there is no standard function to do this. What about the intrinsic values of the camera: do I need to transform the image, or is this done in the camera itself?
Pim The SpatialLocationCalculator will handle that already. But if you want to get the intrinsics, we have an example for that here: https://docs.luxonis.com/projects/api/en/latest/samples/calibration/calibration_reader/#calibration-reader
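Once you have the 3x3 intrinsic matrix from the calibration reader, deprojection is the standard pinhole model, directly analogous to rs2_deproject_pixel_to_point. A hedged sketch with made-up intrinsic values (not values from a real device):

```python
def deproject_pixel_to_point(intrinsics, u, v, depth_mm):
    """Pinhole deprojection, analogous to rs2_deproject_pixel_to_point.

    intrinsics is a 3x3 matrix of the form
    [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    """
    fx = intrinsics[0][0]
    fy = intrinsics[1][1]
    cx = intrinsics[0][2]
    cy = intrinsics[1][2]
    x = depth_mm * (u - cx) / fx
    y = depth_mm * (v - cy) / fy
    return x, y, depth_mm

# Hypothetical intrinsics for a 640x400 mono frame
K = [[441.0, 0.0, 320.0], [0.0, 441.0, 200.0], [0.0, 0.0, 1.0]]
point = deproject_pixel_to_point(K, 400, 200, 1000.0)
```

Using the per-device intrinsics (rather than a single FOV number) also accounts for fx differing from fy and for a principal point that is not exactly the image center.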
Thanks for your replies, I'll try to post my code when I have fully tested it.
My implementation of the pixel-to-mm conversion yields some strange results. After some research I saw that my FOV calibration values are a bit strange; I have never written anything to the EEPROM, so these are factory settings:
The code I used to read these values is as follows:
calibData = device->readCalibration();
// getFov() returns degrees; convert to radians
this->DepthFOV = (3.14159265359 / 180) * calibData.getFov(dai::CameraBoardSocket(inDepth->getInstanceNum()));
If I roughly measure the HFOV (I guess it stands for horizontal FOV) over the Y axis, I get a value close to 77 degrees.
Any idea what I am doing wrong?
Camera model is OAK-D-PRO-POE as can be seen in the attached image.
Pim
Hi Erik,
I'll try to upload a colorized disparity map tomorrow when I have the time to make some good images for comparison.
When running the command calibData.getFov(dai.CameraBoardSocket.CAM_A, useSpec=False)
I get a value of around 77 degrees.
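As a sanity check on a FOV value: the HFOV and the horizontal focal length in pixels are related by HFOV = 2 * atan(W / (2 * fx)). A quick sketch with an assumed 1280-px-wide mono stream (the width is an assumption, not a value from this device):

```python
import math

def hfov_from_fx(fx, width):
    """Horizontal FOV (degrees) implied by a focal length fx (pixels) and image width."""
    return math.degrees(2.0 * math.atan(width / (2.0 * fx)))

def fx_from_hfov(hfov_deg, width):
    """Inverse: focal length in pixels implied by an HFOV in degrees."""
    return (width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)

# The two conversions should round-trip
fx = fx_from_hfov(77.0, 1280)
```

Comparing the fx implied by getFov() against the fx stored in the intrinsic matrix is a quick way to spot an inconsistent calibration.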
When I use this value in my pixel to mm function I get a wrong result, my function is as follows:
int LuxonisCam::Pix2mm(float Punt_mmtmp[3], float pix[2], float Dist) {
    // Based on https://github.com/luxonis/depthai-experiments/blob/master/gen2-calc-spatials-on-host/calc.py
    int dx = pix[0] - (DepthXsize / 2);
    int dy = pix[1] - (DepthYsize / 2);
    // Both axes divide by the width: the focal length in pixels is the same
    // for X and Y, and it is derived from the horizontal FOV.
    float phiX = atan(tan(this->DepthFOV / 2.0) * dx / (DepthXsize / 2.0));
    float phiY = atan(tan(this->DepthFOV / 2.0) * dy / (DepthXsize / 2.0));
    Punt_mmtmp[0] = -Dist * tan(phiX); // X
    Punt_mmtmp[1] = -Dist * tan(phiY); // Y
    Punt_mmtmp[2] = Dist;              // Z
    return 0;
    // rs2_deproject_pixel_to_point(Punt_mmtmp, &intrinsics, pix, Dist);
}
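Note that the atan followed by tan cancels, so the Pix2mm math collapses to one multiplication per axis. A sketch of the equivalent simplified form in Python (frame size and FOV below are assumed example values), which can be checked against the long atan/tan form:

```python
import math

def pix2mm(u, v, dist_mm, width, height, fov_rad):
    """Equivalent of the Pix2mm function above: since tan(atan(t)) == t,
    the per-axis angle computation collapses to a linear scale factor."""
    scale = math.tan(fov_rad / 2.0) / (width / 2.0)  # 1 / focal-length-in-pixels
    dx = u - width / 2.0
    dy = v - height / 2.0
    return (-dist_mm * dx * scale, -dist_mm * dy * scale, dist_mm)

# Example: pixel (400, 200) at 1000 mm on an assumed 640x400 frame, 77 deg FOV
x, y, z = pix2mm(400, 200, 1000.0, 640, 400, math.radians(77.0))
```

The simplified form also makes it obvious that any error in the FOV scales X and Y linearly while leaving Z untouched.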
The image is taken from the following scene:
This is the result (background filtered out). As you can see, the distance between 5 and 6 is 654 mm instead of 685 mm. The distance from the belt to the front of the camera is in reality 670 mm, instead of the 640 mm I get from the camera.
Any ideas?
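A too-short point-to-point distance like this is consistent with an FOV/focal error: X and Y scale with tan(HFOV/2), while Z does not, so measured distances shrink or grow with the calibration error. A quick sketch comparing two FOV guesses (all numbers below, including the 640-px width, assumed principal point, and the 71.9-degree spec value, are assumptions for illustration):

```python
import math

def deproject(u, v, depth_mm, width, hfov_rad):
    """Single-pixel deprojection using an HFOV-derived focal length."""
    focal_px = (width / 2.0) / math.tan(hfov_rad / 2.0)
    return ((u - width / 2.0) * depth_mm / focal_px,
            (v - 200.0) * depth_mm / focal_px,  # assumed cy = 200
            depth_mm)

def distance(p, q):
    """Euclidean distance between two 3D points."""
    return math.dist(p, q)

# Two pixels 200 px apart at the same 670 mm depth, deprojected with two FOVs
p77 = deproject(220, 200, 670.0, 640, math.radians(77.0))
q77 = deproject(420, 200, 670.0, 640, math.radians(77.0))
p72 = deproject(220, 200, 670.0, 640, math.radians(71.9))
q72 = deproject(420, 200, 670.0, 640, math.radians(71.9))
```

The same pixel pair yields a longer metric distance under the wider FOV, so a wrong FOV value directly biases measurements between points.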
I am also new to the OAK-D-Pro. Do we need to calibrate the device with a grid before use?
I was able to get the calibration info with this code:
#!/usr/bin/env python
import depthai
from pathlib import Path

totalCams = 0
calFolder = Path(r"C:\Users\SSI Scan\Documents\Programming\OAK\Calibrate")
calName = "Oak.json"
calFilename = calFolder / calName
print(calFilename)

if depthai.Device.getAllAvailableDevices():
    totalCams = len(depthai.Device.getAllAvailableDevices())
    for device in depthai.Device.getAllAvailableDevices():
        print(f"{device.mxid} {device.state} Name {device.name} {device.platform}")
        deviceInfo = depthai.DeviceInfo(device.mxid)
        print(deviceInfo)
        calName = "Oak_" + str(device.mxid) + ".json"
        calFilename = calFolder / calName
        with depthai.Device(device, depthai.UsbSpeed.HIGH) as oakDevice:
            oakCal = oakDevice.readCalibration()
            oakCal.eepromToJsonFile(str(calFilename))
            print(calFilename)

print(f"Finished with {totalCams}")
Hi @Hutch07, no, the device should be factory calibrated, and users rarely need to calibrate it themselves. In some cases (e.g. hardware bending/damage), recalibration would be required.
@Pim it looks like we messed up the calibration in your case (as you can also see with the specFOV), please send an email to support@luxonis.com (together with this thread link) and we will replace your device.
Thanks, Erik