jakaskerl
That makes sense based on the documentation. Should I refer to this link: Calibration (luxonis.com)?
My other question is about ToF vs. StereoDepth. I am still using the scripts I created earlier to capture frames and save them in PCD format, and then passing those files through a Python script that combines them and generates one point cloud. However, as I get more familiar with the nodes and pipelines, I'm wondering: for generating point cloud data, is it better to use ToF, StereoDepth, or both? Currently my script is set up like this:
import cv2
import depthai
import numpy as np
import open3d as o3d

pipeline = depthai.Pipeline()

# Mono cameras
mono_left = pipeline.createMonoCamera()
mono_left.setBoardSocket(depthai.CameraBoardSocket.LEFT)
mono_left.setResolution(depthai.MonoCameraProperties.SensorResolution.THE_800_P)

mono_right = pipeline.createMonoCamera()
mono_right.setBoardSocket(depthai.CameraBoardSocket.RIGHT)
mono_right.setResolution(depthai.MonoCameraProperties.SensorResolution.THE_800_P)

# Stereo depth
stereo = pipeline.createStereoDepth()
stereo.setOutputDepth(True)
stereo.setConfidenceThreshold(200)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

# XLinkOut
xout_depth = pipeline.createXLinkOut()
xout_depth.setStreamName("depth")
stereo.depth.link(xout_depth.input)

# Connect to the device and start the pipeline
with depthai.Device(pipeline) as device:
    q_depth = device.getOutputQueue(name="depth", maxSize=4, blocking=False)
    capture_count = 0
    while True:
        in_depth = q_depth.get()
        if in_depth is not None:
            depth_frame = in_depth.getFrame()
            depth_frame_colored = cv2.applyColorMap(cv2.convertScaleAbs(depth_frame, alpha=0.03), cv2.COLORMAP_JET)
            cv2.imshow("Depth", depth_frame_colored)

            k = cv2.waitKey(1) & 0xFF
            if k == 27:  # Press Esc to close the window
                print('Window closing!')
                break
            elif k == ord('c'):  # Press c to capture a point cloud
                capture_count += 1
                # Create a point cloud from the depth data
                depth_array = np.asanyarray(depth_frame)
                height, width = depth_array.shape
                # Back-project the depth image to 3D points
                fx, fy = 800, 800  # Focal length; adjust based on your camera calibration
                cx, cy = width / 2, height / 2  # Principal point
                points = []
                for v in range(height):
                    for u in range(width):
                        z = depth_array[v, u] / 1000.0  # Depth scaling factor (mm to m); adjust as needed
                        if z > 0:
                            x = (u - cx) * z / fx
                            y = (v - cy) * z / fy
                            points.append([x, y, z])
                pcd = o3d.geometry.PointCloud()
                pcd.points = o3d.utility.Vector3dVector(points)
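As an aside, the per-pixel loop in my script can be replaced with a vectorized NumPy version, which is much faster at 800p. This is only a sketch of the same back-projection math; the intrinsics (`fx`, `fy`) are the same placeholder values from above, and the synthetic depth frame just stands in for `in_depth.getFrame()`:

```python
import numpy as np

fx, fy = 800.0, 800.0           # placeholder focal lengths, as in the script above
height, width = 800, 1280       # THE_800_P mono frame size
cx, cy = width / 2, height / 2  # principal point

# Synthetic depth frame standing in for in_depth.getFrame() (uint16, millimetres)
depth_array = np.full((height, width), 1500, dtype=np.uint16)

z = depth_array / 1000.0  # millimetres -> metres
u, v = np.meshgrid(np.arange(width), np.arange(height))
x = (u - cx) * z / fx
y = (v - cy) * z / fy

# Stack into an (N, 3) array and drop invalid (zero-depth) pixels
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
points = points[points[:, 2] > 0]
```

The resulting `points` array can be handed straight to `o3d.utility.Vector3dVector` instead of the Python list.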
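For context, the combining script I mentioned does roughly the following with Open3D (a minimal sketch; the two synthetic clouds here stand in for captures loaded with `o3d.io.read_point_cloud`, and the voxel size is an arbitrary placeholder):

```python
import numpy as np
import open3d as o3d

# Two synthetic clouds standing in for two captured .pcd files
cloud_a = o3d.geometry.PointCloud()
cloud_a.points = o3d.utility.Vector3dVector(np.random.rand(1000, 3))
cloud_b = o3d.geometry.PointCloud()
cloud_b.points = o3d.utility.Vector3dVector(np.random.rand(1000, 3) + [0.5, 0.0, 0.0])

# Open3D overloads '+' to concatenate point clouds
combined = cloud_a + cloud_b

# Voxel downsampling collapses near-duplicate points where the captures overlap
merged = combined.voxel_down_sample(voxel_size=0.05)
```

In practice each capture would be saved with `o3d.io.write_point_cloud` and the merged result written back out the same way.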