As I eagerly await my new OAK-D board, I have many ideas and questions. I am an AI engineer who uses computer vision for real-world manufacturing and robotics applications, so I try to leverage every product I use as effectively as possible.
Q1: If I want to compile the latest OpenCV main repo and have it run on the OAK-D, how would I do so? I ask because I currently do all my development on Jetson Xavier devices, where I recompile the repo to take advantage of the Jetson's hardware and to get more OpenCV functionality than a pip-installed version provides.
Q2: Since the patent on SIFT expired this year, OpenCV has moved the algorithm into the main branch. I think SIFT would be a fantastic fit for the DepthAI platform, especially for the depth aspect.
Would you recommend running SIFT on the OAK-D itself, or taking the point-cloud data from the OAK-D cameras and doing all the processing on the Xavier? I am curious about the performance, since this would be another fantastic application for pick-and-place robotics.
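For the host-side variant, combining the two boils down to back-projecting SIFT keypoint pixels through the depth map with the camera intrinsics. A minimal sketch, assuming an aligned metric depth map and pinhole intrinsics (the `fx`/`fy`/`cx`/`cy` values below are placeholders; on a real device you would read them from the camera calibration):

```python
import numpy as np

def keypoints_to_3d(pts_xy, depth_m, fx, fy, cx, cy):
    """Back-project pixel keypoints to 3D camera coordinates.

    pts_xy  : (N, 2) array of (u, v) pixel coordinates (e.g. kp.pt values)
    depth_m : (H, W) depth map in meters, aligned to the same camera
    fx, fy, cx, cy : pinhole intrinsics
    """
    pts = np.asarray(pts_xy, dtype=np.float64)
    u = pts[:, 0].astype(int)
    v = pts[:, 1].astype(int)
    z = depth_m[v, u]                    # depth at each keypoint
    x = (pts[:, 0] - cx) * z / fx        # standard pinhole back-projection
    y = (pts[:, 1] - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Toy example: a flat plane 1 m away, principal point at the image center.
depth = np.ones((480, 640))
pts3d = keypoints_to_3d([[320, 240], [420, 240]], depth,
                        fx=800.0, fy=800.0, cx=320.0, cy=240.0)
print(pts3d)  # the keypoint at the principal point maps to (0, 0, 1)
```

The resulting 3D match coordinates are what a pick-and-place controller would consume, which is why the on-device vs. host question matters mostly for the SIFT stage, not this projection step.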
To run some benchmarking, please see the following code example:
import numpy as np
import cv2
from matplotlib import pyplot as plt

MIN_MATCH_COUNT = 10
FLANN_INDEX_KDTREE = 1

img1 = cv2.imread('box.png', 0)           # queryImage (grayscale)
img2 = cv2.imread('box_in_scene.png', 0)  # trainImage (grayscale)

sift = cv2.SIFT_create()

# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# store all the good matches as per Lowe's ratio test
good = []
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        good.append(m)

if len(good) > MIN_MATCH_COUNT:
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    matchesMask = mask.ravel().tolist()
    h, w = img1.shape
    pts = np.float32([[0, 0], [0, h-1], [w-1, h-1], [w-1, 0]]).reshape(-1, 1, 2)
    dst = cv2.perspectiveTransform(pts, M)
    img2 = cv2.polylines(img2, [np.int32(dst)], True, 255, 3, cv2.LINE_AA)
else:
    print("Not enough matches are found - %d/%d" % (len(good), MIN_MATCH_COUNT))
    matchesMask = None

draw_params = dict(matchColor=(0, 255, 0),   # draw matches in green
                   singlePointColor=None,
                   matchesMask=matchesMask,  # draw only inliers
                   flags=2)
img3 = cv2.drawMatches(img1, kp1, img2, kp2, good, None, **draw_params)
plt.imshow(img3, 'gray')
plt.show()
With that said, I have a huge array of real-world use cases for this. If you and the team would like me to test other applications of DepthAI, or potentially help develop new ones, please feel free to reach out. 🙂