Hi, I'm working on a project where I'm trying to improve processing efficiency by offloading certain computational tasks directly onto the camera. This would relieve the host system (in this case, a Raspberry Pi) of that work and free up CPU resources for other operations. Here is the host-side function I would like to move onto the device:
import io

import cv2
import numpy as np

def capture_image(self, threshold):
    # Blend only once both frames have been received
    if self.frameRgb is not None and self.frameDisp is not None:
        # Resize the single-channel disparity frame to match the RGB resolution
        frameDispResized = cv2.resize(self.frameDisp, (self.frameRgb.shape[1], self.frameRgb.shape[0]))
        # Binary mask: True where disparity is at or above 'threshold'
        mask = frameDispResized >= threshold
        # Expand the mask to 3 channels to match the color channels of frameRgb
        mask3 = np.repeat(mask[:, :, np.newaxis], repeats=3, axis=2)
        # Keep RGB pixels where the mask is True; black out the rest
        frameRgbMasked = np.where(mask3, self.frameRgb, np.zeros_like(self.frameRgb))
        # Encode the masked image as a JPEG byte buffer
        is_success, buffer = cv2.imencode(".jpg", frameRgbMasked)
        if not is_success:
            raise ValueError("Failed to convert the image to JPEG format")
        return io.BytesIO(buffer)
    return None  # one of the frames has not arrived yet
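For context, here is roughly how the two frames are produced and received on the host at the moment. This is a minimal sketch rather than my exact code: the stream names, sockets, and queue sizes are illustrative, and the frames it reads are what end up in self.frameRgb and self.frameDisp above.

import depthai as dai

pipeline = dai.Pipeline()

camRgb = pipeline.create(dai.node.ColorCamera)
monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)

monoLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
monoRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)

# Align disparity to the RGB camera so the mask overlays the correct pixels
stereo.setDepthAlign(dai.CameraBoardSocket.CAM_A)

xoutRgb = pipeline.create(dai.node.XLinkOut)
xoutRgb.setStreamName("rgb")
camRgb.video.link(xoutRgb.input)

xoutDisp = pipeline.create(dai.node.XLinkOut)
xoutDisp.setStreamName("disp")
stereo.disparity.link(xoutDisp.input)

with dai.Device(pipeline) as device:
    qRgb = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    qDisp = device.getOutputQueue("disp", maxSize=4, blocking=False)
    while True:
        inRgb = qRgb.tryGet()
        inDisp = qDisp.tryGet()
        if inRgb is not None:
            frameRgb = inRgb.getCvFrame()   # becomes self.frameRgb
        if inDisp is not None:
            frameDisp = inDisp.getFrame()   # becomes self.frameDisp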
I would like to execute this code directly on the camera itself, specifically the OAK-D Pro PoE model. If that proves feasible, I plan to connect up to 8 cameras running this algorithm to a single Raspberry Pi.
Is it possible to run this processing directly on the camera?
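One direction I've come across that might be related: the StereoDepth node seems to support some on-device post-processing, including a range threshold filter. My understanding (based on the post-processing examples, with placeholder range values below) is that something like this sketch would zero out depth pixels outside a range on the device itself, although it operates on the depth output only and would not mask the RGB frame:

# Sketch: on-device range thresholding via StereoDepth post-processing.
# 'stereo' is the dai.node.StereoDepth instance from the pipeline above;
# the min/max values are placeholders in millimeters.
config = stereo.initialConfig.get()
config.postProcessing.thresholdFilter.minRange = 400    # keep depth >= 400 mm
config.postProcessing.thresholdFilter.maxRange = 5000   # keep depth <= 5000 mm
stereo.initialConfig.set(config)

Is this the closest I can get on-device, or is there a way to run the full RGB masking (e.g., via a Script node or a custom network) on the camera itself?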
Additionally, I've noticed occasional frame-synchronization issues: when frameDisp and frameRgb come from different capture instants, the mask eliminates the wrong pixels. How can I ensure the two frames are properly synchronized so the mask always lines up with the matching RGB frame?
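What I've tried so far is pairing messages on the host by their sequence numbers, roughly as in the sketch below. This assumes both streams run at the same FPS so the sequence numbers line up, and that getSequenceNum() is the right accessor; I'm not certain this is the intended approach:

# Sketch: host-side pairing of RGB and disparity frames by sequence number.
# qRgb and qDisp are the output queues from the pipeline sketch above;
# both streams are assumed to run at the same FPS.
pendingRgb = {}
pendingDisp = {}

def addFrame(msg, pending):
    pending[msg.getSequenceNum()] = msg

def tryPair():
    # Return the oldest (rgb, disp) pair with matching sequence numbers
    common = sorted(set(pendingRgb) & set(pendingDisp))
    if not common:
        return None
    seq = common[0]
    # Drop older unmatched frames so the buffers stay bounded
    for pending in (pendingRgb, pendingDisp):
        for old in [s for s in pending if s < seq]:
            del pending[old]
    return pendingRgb.pop(seq), pendingDisp.pop(seq)

I've also seen mention of a Sync node in more recent DepthAI releases; would that (or on-device syncing in general) be the recommended way to handle this, especially with 8 cameras on one host?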
Thank you for your assistance.