I am not 100% sure, but the number of markers is different in some images. Could this be the problem?
No distortion coefficient and rectification matrix generated by calibration.py
Hi @ShivamSharma
Have you tried loading the calibration file with https://docs.luxonis.com/software/depthai/examples/calibration_reader/?
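Roughly, that example boils down to something like the following sketch (assuming your mono pair sits on CAM_B and CAM_C; adjust the sockets to your setup):

#!/usr/bin/env python3
# Sketch: read the on-device calibration and print intrinsics and
# distortion coefficients for the two mono sockets.
import depthai as dai

with dai.Device() as device:
    calib = device.readCalibration()
    for socket in (dai.CameraBoardSocket.CAM_B, dai.CameraBoardSocket.CAM_C):
        print(f"{socket} intrinsics (1280x800):")
        print(calib.getCameraIntrinsics(socket, 1280, 800))
        print(f"{socket} distortion coefficients:")
        print(calib.getDistortionCoefficients(socket))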
Thanks,
Jaka
When I run the script given in the link you sent, I get the following output:
venvshivam157@ubuntu:~/depthai-python$ python3 get_calib.py
RGB Camera Default intrinsics...
[[3133.269775390625, 0.0, 1922.1737060546875], [0.0, 3133.713623046875, 1102.6722412109375], [0.0, 0.0, 1.0]]
3840
2160
RGB Camera Default intrinsics...
[[3133.269775390625, 0.0, 1922.1737060546875], [0.0, 3133.713623046875, 1102.6722412109375], [0.0, 0.0, 1.0]]
3840
2160
RGB Camera resized intrinsics... 3840 x 2160
[[3.13326978e+03 0.00000000e+00 1.92217371e+03]
[0.00000000e+00 3.13371362e+03 1.10267224e+03]
[0.00000000e+00 0.00000000e+00 1.00000000e+00]]
RGB Camera resized intrinsics... 4056 x 3040
[[3.30951611e+03 0.00000000e+00 2.03029590e+03]
[0.00000000e+00 3.30998486e+03 1.54394751e+03]
[0.00000000e+00 0.00000000e+00 1.00000000e+00]]
LEFT Camera Default intrinsics...
[[693.9341430664062, 0.0, 664.17138671875], [0.0, 694.2032470703125, 406.8681640625], [0.0, 0.0, 1.0]]
1280
800
LEFT Camera resized intrinsics... 1280 x 720
[[693.93414307 0. 664.17138672]
[ 0. 694.20324707 366.86816406]
[ 0. 0. 1. ]]
RIGHT Camera resized intrinsics... 1280 x 720
[[690.59875488 0. 619.99499512]
[ 0. 690.54821777 276.21847534]
[ 0. 0. 1. ]]
LEFT Distortion Coefficients...
k1: -0.14615201950073242
k2: 5.131624221801758
p1: -0.0004437704337760806
p2: -0.0003282237739767879
k3: 0.816307544708252
k4: 0.22460998594760895
k5: 5.021973609924316
k6: 2.6677396297454834
s1: 0.0
s2: 0.0
s3: 0.0
s4: 0.0
τx: 0.0
τy: 0.0
RIGHT Distortion Coefficients...
k1: 1.1050113439559937
k2: 1.1275875568389893
p1: 0.00031773868249729276
p2: 0.0002416888455627486
k3: 0.12704245746135712
k4: 1.4751837253570557
k5: 1.46196711063385
k6: 0.46535491943359375
s1: 0.0
s2: 0.0
s3: 0.0
s4: 0.0
τx: 0.0
τy: 0.0
RGB FOV 68.7938003540039, Mono FOV 110.0
LEFT Camera stereo rectification matrix...
[[ 9.90124752e-01 7.40214346e-03 -3.64586040e+01]
[-7.32702106e-03 9.95746611e-01 -8.60322599e+01]
[-8.12898310e-06 3.72185724e-06 1.00401435e+00]]
RIGHT Camera stereo rectification matrix...
[[ 9.94906762e-01 7.44132256e-03 4.97689224e+00]
[-7.36240841e-03 1.00101704e+00 2.47433011e+00]
[-8.16824369e-06 3.74155680e-06 1.00401153e+00]]
Transformation matrix of where left Camera is W.R.T right Camera's optical center
[[ 9.99970078e-01 2.70099915e-03 7.25283753e-03 5.21282959e+00]
[-2.66335462e-03 9.99982953e-01 -5.19499322e-03 -1.26599353e-02]
[-7.26674590e-03 5.17552113e-03 9.99960184e-01 8.40604864e-03]
[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
Transformation matrix of where left Camera is W.R.T RGB Camera's optical center
[[ 9.99992728e-01 -2.86172348e-04 3.81197175e-03 7.94537354e+00]
[ 2.56973435e-04 9.99970675e-01 7.65808392e-03 -3.24345136e+00]
[-3.81405186e-03 -7.65704829e-03 9.99963403e-01 -5.16706049e-01]
[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
venvshivam157@ubuntu:~/depthai-python$
Is this calibration data from my camera? I am using two mono OV9282 cameras, each with a 75 degree FOV. I don't have an RGB camera.
I only used the calibration.py file. I don't think I flashed the device with the current setup. My baseline is 14.5 cm along the x-axis.
Hi ShivamSharma
Looks like the flash didn't succeed.
What is the -dbg parameter for? Are you using the latest calibrate script? Also try disabling RGB with -dsb rgb.
Thanks,
Jaka
Thank you for your reply!
I am using the latest branch of the DepthAI library. The -dbg parameter is to make the calibration file use the dataset.
I will use the -dsb flag with the calibration script. Do you have any more debugging steps?
ShivamSharma The dbg parameter is to make the calibration file use the dataset.
Use the -m process flag to only run the processing stage and not the image acquisition. The -dbg flag prevents the flashing part of the script.
Thanks,
Jaka
I used the -m process and -dsb rgb flags you mentioned; calibration.py flashed the device and the calibration result was saved in the resources folder. I will check how accurate it is by viewing the disparity and will let you know the result.
Does the following calibration result look correct to you? It does not look completely correct to me, because I am using only two sockets on the OAK-FFC-3P, yet this data has three sockets: 0, 1, and 2. I don't know how it calculated a rotation and translation between sockets 2 and 0, because socket 2 doesn't exist in my setup. Is it safe to use the rotation and translation between 0 and 1? I will still have to calculate the projection matrices for both sockets 0 and 1 from K, R, and t.
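For reference, this is the kind of calculation I mean, sketched with the socket-0/socket-1 values from the dump below. I am assuming the OpenCV convention P = K [R | t]; the direction of the stored extrinsics would still need to be verified, and the translations are in cm.

import numpy as np

# Intrinsics of socket 1 (left) and socket 0 (right) from the calibration result below.
K_left = np.array([[906.0726, 0.0, 625.2753],
                   [0.0, 906.3058, 399.4646],
                   [0.0, 0.0, 1.0]])
K_right = np.array([[909.0518, 0.0, 644.2518],
                    [0.0, 909.3358, 383.7009],
                    [0.0, 0.0, 1.0]])

# Rotation and translation stored under socket 0 with toCameraSocket = 1.
R = np.array([[0.99791950, 0.05123052, -0.03914158],
              [-0.05184129, 0.99854636, -0.01475129],
              [0.03832896, 0.01674975, 0.99912477]])
t_cm = np.array([[-14.07551956], [0.52103478], [-0.29575980]])

# One camera is taken as the reference frame; the other carries [R | t].
P_ref = K_left @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_other = K_right @ np.hstack([R, t_cm])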
Here is the result:
{
"batchName": "",
"batchTime": 1679712500,
"boardConf": "IR-C00M05-00",
"boardCustom": "",
"boardName": "DM1090",
"boardOptions": 0,
"boardRev": "R3M0E3",
"cameraData": [
[
2,
{
"cameraType": 0,
"distortionCoeff": [
1.1050113439559937,
1.1275875568389893,
0.00031773868249729276,
0.0002416888455627486,
0.12704245746135712,
1.4751837253570557,
1.46196711063385,
0.46535491943359375,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0
],
"extrinsics": {
"rotationMatrix": [
[
0.9999896287918091,
-0.002969305729493499,
-0.0034563534427434206
],
[
0.003013428533449769,
0.9999131560325623,
0.012831280939280987
],
[
0.003417953150346875,
-0.012841563671827316,
0.9999117255210876
]
],
"specTranslation": {
"x": 2.5999999046325684,
"y": 3.299999952316284,
"z": -0.0
},
"toCameraSocket": 0,
"translation": {
"x": 2.7325892448425293,
"y": -3.2466089725494385,
"z": -0.5430911183357239
}
},
"height": 800,
"intrinsicMatrix": [
[
690.5987548828125,
0.0,
619.9949951171875
],
[
0.0,
690.5482177734375,
316.2184753417969
],
[
0.0,
0.0,
1.0
]
],
"lensPosition": 0,
"specHfovDeg": 110.0,
"width": 1280
}
],
[
0,
{
"cameraType": 0,
"distortionCoeff": [
-2.396423578262329,
-10.477461814880371,
-0.0004928055568598211,
0.0011145062744617462,
26.935871124267578,
-2.4506843090057373,
-10.170122146606445,
26.494205474853516,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0
],
"extrinsics": {
"rotationMatrix": [
[
0.997919499874115,
0.05123051628470421,
-0.03914157673716545
],
[
-0.051841288805007935,
0.9985463619232178,
-0.014751287177205086
],
[
0.03832896426320076,
0.016749747097492218,
0.9991247653961182
]
],
"specTranslation": {
"x": 14.5,
"y": 0.0,
"z": 0.0
},
"toCameraSocket": 1,
"translation": {
"x": -14.075519561767578,
"y": 0.5210347771644592,
"z": -0.29575979709625244
}
},
"height": 800,
"intrinsicMatrix": [
[
909.0518188476563,
0.0,
644.2517700195313
],
[
0.0,
909.3357543945313,
383.7008972167969
],
[
0.0,
0.0,
1.0
]
],
"lensPosition": 135,
"specHfovDeg": 75.0,
"width": 1280
}
],
[
1,
{
"cameraType": 0,
"distortionCoeff": [
-3.9288971424102783,
-3.196507692337036,
-0.00022527640976477414,
0.0015712999738752842,
19.68087387084961,
-3.980436325073242,
-2.887248992919922,
19.199928283691406,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0
],
"extrinsics": {
"rotationMatrix": [
[
0.9999700784683228,
0.0027009991463273764,
0.007252837531268597
],
[
-0.0026633546222001314,
0.9999829530715942,
-0.00519499322399497
],
[
-0.007266745902597904,
0.00517552113160491,
0.99996018409729
]
],
"specTranslation": {
"x": 5.300000190734863,
"y": -0.0,
"z": -0.0
},
"toCameraSocket": 2,
"translation": {
"x": 5.21282958984375,
"y": -0.012659935280680656,
"z": 0.008406048640608788
}
},
"height": 800,
"intrinsicMatrix": [
[
906.0726318359375,
0.0,
625.2752685546875
],
[
0.0,
906.3057861328125,
399.464599609375
],
[
0.0,
0.0,
1.0
]
],
"lensPosition": 0,
"specHfovDeg": 75.0,
"width": 1280
}
]
],
"deviceName": "",
"hardwareConf": "F0-FV00-BC000",
"housingExtrinsics": {
"rotationMatrix": [],
"specTranslation": {
"x": 0.0,
"y": 0.0,
"z": 0.0
},
"toCameraSocket": -1,
"translation": {
"x": 0.0,
"y": 0.0,
"z": 0.0
}
},
"imuExtrinsics": {
"rotationMatrix": [
[
0.0,
0.0,
0.0
],
[
0.0,
0.0,
0.0
],
[
0.0,
0.0,
0.0
]
],
"specTranslation": {
"x": 0.0,
"y": 0.0,
"z": 0.0
},
"toCameraSocket": -1,
"translation": {
"x": 0.0,
"y": 0.0,
"z": 0.0
}
},
"miscellaneousData": [],
"productName": "OAK-FFC-3P",
"stereoEnableDistortionCorrection": false,
"stereoRectificationData": {
"leftCameraSocket": 1,
"rectifiedRotationLeft": [
[
0.9990953207015991,
-0.03698360174894333,
0.020993344485759735
],
[
0.036815181374549866,
0.9992871880531311,
0.008353326469659805
],
[
-0.02128731645643711,
-0.007572895381599665,
0.999744713306427
]
],
"rectifiedRotationRight": [
[
0.9997386932373047,
0.014605958014726639,
-0.017585638910531998
],
[
-0.014745572581887245,
0.9998605847358704,
-0.007835760712623596
],
[
0.017468737438321114,
0.008093023672699928,
0.9998146295547485
]
],
"rightCameraSocket": 0
},
"stereoUseSpecTranslation": true,
"version": 7,
"verticalCameraSocket": -1
}
ShivamSharma
Yes, they are normally large because the distortion correction uses a rational model, not only a polynomial one, as described here:
https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html
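Because k1-k6 form a rational numerator/denominator pair, individually large coefficients can still largely cancel out. A rough sketch of how OpenCV consumes them, using the RIGHT-camera values copied from the dump above (the image here is just a stand-in for a raw mono frame):

import cv2
import numpy as np

# 14-element OpenCV order: (k1, k2, p1, p2, k3, k4, k5, k6, s1, s2, s3, s4, taux, tauy)
K = np.array([[690.5988, 0.0, 619.9950],
              [0.0, 690.5482, 316.2185],
              [0.0, 0.0, 1.0]])
dist = np.array([1.1050113, 1.1275876, 0.00031774, 0.00024169, 0.12704246,
                 1.4751837, 1.4619671, 0.46535492, 0, 0, 0, 0, 0, 0],
                dtype=np.float64)

img = np.zeros((800, 1280), dtype=np.uint8)  # stand-in for a 1280x800 mono frame
undistorted = cv2.undistort(img, K, dist)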
jakaskerl I think you didn’t see my updated reply
ShivamSharma
Are you sure the correct board config is used? This should not happen, unless this is some other calibration file.
Thanks,
Jaka
It may be that I am providing incorrect camera names, such as CAM_A and CAM_B, in the board file. I will paste the board file I am using at the end. I am using the left and right sockets of the OAK-FFC-3P. When I run ros2 launch depthai_ros_driver camera.launch.py, it says:
[component_container-1] [14442C10E14FCBD600] [1.2.1] [2.336] [MonoCamera(3)] [error] Camera not detected on socket: 0
I can tell it is using the board file I pass to it, because the HFOV and baseline match what I entered in the board file.
The board file is:
{
    "board_config": {
        "name": "OAK-FFC-3P",
        "revision": "R3M0E3",
        "cameras": {
            "CAM_A": {
                "name": "right",
                "hfov": 75,
                "type": "mono",
                "extrinsics": {
                    "to_cam": "CAM_B",
                    "specTranslation": {
                        "x": 14.5,
                        "y": 0,
                        "z": 0
                    },
                    "rotation": {
                        "r": 0,
                        "p": 0,
                        "y": 0
                    }
                }
            },
            "CAM_B": {
                "name": "left",
                "hfov": 75,
                "type": "mono"
            }
        },
        "stereo_config": {
            "left_cam": "CAM_B",
            "right_cam": "CAM_A"
        }
    }
}
The command is:
python3 calibrate.py -s 3.8 -brd OAK-FFC-3P.json -nx 13 -ny 7 -m process -dsb rgb
Hi @ShivamSharma
If you are running with ROS, perhaps the config you are using is wrong. You need to specify the params_file argument and pass in a config file that doesn't use the RGB camera.
Examples: luxonis/depthai-ros tree/humble/depthai_ros_driver/config
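For example, something along these lines (the yaml path here is just a placeholder):
ros2 launch depthai_ros_driver camera.launch.py params_file:=/path/to/stereo_only.yaml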
If you want to test the accuracy of depth, I'd suggest using some raw-depthai examples from depthai-python or depthai-core repository.
Thanks,
Jaka
When I used DepthAI Python to view the disparity, the results were bad. I flashed the device with the board file in my reply above. I still have a question about why the saved calibration file has three sockets when I am using -dsb rgb and only two mono cameras with the OAK-FFC-3P. My commands are below:
python3 calibrate.py -s 3.8 -brd OAK-FFC-3P.json -nx 13 -ny 7 -m process -dsb rgb
python3 depth_preview_sr.py
Can you please help me understand why the disparity is so bad?
The following are the settings in the depth_preview file:
#!/usr/bin/env python3

import cv2
import depthai as dai
import numpy as np

# Closer-in minimum depth, disparity range is doubled (from 95 to 190):
extendedDisparity = False
# Better accuracy for longer distance, fractional disparity 32-levels:
subpixel = True
# Better handling for occlusions:
lrCheck = True
enableRectified = True

# Create pipeline
pipeline = dai.Pipeline()

# Define sources and outputs
left = pipeline.create(dai.node.ColorCamera)
right = pipeline.create(dai.node.ColorCamera)

# Create stereo
stereo = pipeline.create(dai.node.StereoDepth)
xoutDepth = pipeline.create(dai.node.XLinkOut)
xoutDepth.setStreamName("disparity")

# Properties
left.setResolution(dai.ColorCameraProperties.SensorResolution.THE_800_P)
left.setCamera("left")
right.setResolution(dai.ColorCameraProperties.SensorResolution.THE_800_P)
right.setCamera("right")

stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
stereo.initialConfig.setMedianFilter(dai.MedianFilter.KERNEL_7x7)
stereo.setLeftRightCheck(lrCheck)
stereo.setExtendedDisparity(extendedDisparity)
stereo.setSubpixel(subpixel)

# Linking
left.isp.link(stereo.left)
right.isp.link(stereo.right)

if enableRectified:
    xoutRectR = pipeline.create(dai.node.XLinkOut)
    xoutRectL = pipeline.create(dai.node.XLinkOut)
    xoutRectR.setStreamName("rectifiedRight")
    xoutRectL.setStreamName("rectifiedLeft")
    stereo.rectifiedLeft.link(xoutRectL.input)
    stereo.rectifiedRight.link(xoutRectR.input)

stereo.disparity.link(xoutDepth.input)

maxDisp = stereo.initialConfig.getMaxDisparity()

# Connect to device and start pipeline
with dai.Device(pipeline) as device:
    while not device.isClosed():
        queueNames = device.getQueueEvents()
        for q in queueNames:
            message = device.getOutputQueue(q).get()
            # Display arrived frames
            if type(message) == dai.ImgFrame:
                frame = message.getCvFrame()
                if 'disparity' in q:
                    disp = (frame * (255.0 / maxDisp)).astype(np.uint8)
                    disp = cv2.applyColorMap(disp, cv2.COLORMAP_JET)
                    cv2.imshow(q, disp)
                else:
                    cv2.imshow(q, frame)
        if cv2.waitKey(1) == ord('q'):
            break
Hi @ShivamSharma
Yes, that is a bad calibration.
A few things to try:

Swap the left and right sockets, then view the preview again; hopefully only the sockets are swapped in the calibration:
right.setCamera("left")
left.setCamera("right")

You are using the OV9282, which is basically the same as the OV9782, so the 3P can sometimes confuse them. Edit the calibration json to manually set the sensor model:
"CAM_B": {
    "model": "OV9282",
    "name": "left",
    ....
    "type": "mono",
for both cameras.

Make sure that you are running the updated develop branch of the depthai repo.
I see you are using an altered depth preview script where the camera nodes are ColorCamera. How come? The OV9282 is a mono sensor.
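A rough sketch of the same pipeline built with MonoCamera nodes instead (not a verified drop-in, but it mirrors the stock depth_preview.py structure):

#!/usr/bin/env python3
# Sketch: mono stereo pipeline for the OV9282 pair.
import depthai as dai

pipeline = dai.Pipeline()

left = pipeline.create(dai.node.MonoCamera)
right = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)

left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_800_P)
left.setCamera("left")
right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_800_P)
right.setCamera("right")

stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
stereo.setLeftRightCheck(True)
stereo.setSubpixel(True)

# MonoCamera exposes .out rather than .isp
left.out.link(stereo.left)
right.out.link(stereo.right)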
Thanks,
Jaka
Thanks!
I changed the sockets as you suggested and the results now look better than before, but I am not sure how accurate I should expect it to be.
I am using the depth_preview.py file. I can see the curtains and the bed in the disparity, but I am also sitting in front of the camera and it shows me as black.
Can the performance be improved further? Also, should I switch the sockets when I run the ROS 2 driver, or can I just switch the calibration file for each camera?
I was using the OV9282 model name in the board file before.
Hi @ShivamSharma
A 14.5 cm baseline is quite large; are you sure you are not sitting too close to the cameras?
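A quick back-of-the-envelope check (my own sketch, using the roughly 909 px focal length from your dump and the standard 95-level disparity range):

focal_px = 909.0      # approx. fx of the mono camera from the calibration dump
baseline_m = 0.145    # 14.5 cm
max_disparity = 95    # standard mode; extended disparity roughly halves the minimum depth

min_depth_m = focal_px * baseline_m / max_disparity
print(round(min_depth_m, 2))  # about 1.39 m - anything closer cannot be matched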
ShivamSharma Can the performance be improved further? Also, should I switch the sockets when I run the ros2 driver or I can just switch the calib file for each camera?
A good recalibration is the only way to improve the depth. You can alter the calibration file (with the examples we have for doing it) and just swap the sockets.
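For example, a rough sketch along the lines of the flash-calibration example in depthai-python (the filename is just a placeholder, not that exact script):

#!/usr/bin/env python3
# Load an edited calibration dump and write it to the device EEPROM.
import depthai as dai

calib = dai.CalibrationHandler("calib_swapped.json")  # hypothetical edited dump

with dai.Device() as device:
    if device.flashCalibration(calib):
        print("Calibration flashed successfully")
    else:
        print("Flashing failed")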
Thanks,
Jaka
Yes, I am sitting close to the camera. I am still not getting good depth in ROS 2 from the cameras. I swapped the sockets in ROS2.
ShivamSharma
Could you send over the dataset you used (it should be stored in the dataset folder)? Also the left and right (not rectified) images for a scene where the depth doesn't look OK, and a calibration dump. We'll try to reproduce the results to see what is wrong.
Thanks,
Jaka
The following are the raw images:
Link to left and right image
The following is the link to calibration dump:
Link