Hi, I'm working on a project where I aim to optimize processing efficiency by offloading certain computational tasks directly onto the camera. This approach would relieve the host system (in this case, a Raspberry Pi) from handling these tasks, thereby freeing up CPU resources for other operations.

import io

import cv2
import numpy as np

# Method of the camera-handling class; self.frameRgb and self.frameDisp are
# set by the frame callbacks
def capture_image(self, threshold):
    # Blend only when both frames have been received
    if self.frameRgb is not None and self.frameDisp is not None:

        # Resize frameDisp to match the resolution of frameRgb
        frameDispResized = cv2.resize(self.frameDisp, (self.frameRgb.shape[1], self.frameRgb.shape[0]))

        # Binary mask: True where disparity is at or above 'threshold'
        # (frameDisp is single-channel)
        mask = frameDispResized >= threshold

        # Expand the mask to 3 channels to match frameRgb
        mask3 = np.repeat(mask[:, :, np.newaxis], repeats=3, axis=2)

        # Keep RGB pixels where the mask is True, zero out the rest
        frameRgbMasked = np.where(mask3, self.frameRgb, np.zeros_like(self.frameRgb))

        # Encode the masked image as a JPEG byte buffer
        is_success, buffer = cv2.imencode(".jpg", frameRgbMasked)
        if not is_success:
            raise ValueError("Failed to encode the image as JPEG")

        return io.BytesIO(buffer)
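As a side note, the np.repeat step can be avoided: a (H, W, 1) mask broadcasts across the color channels and produces the same result without materializing the 3-channel copy. A minimal sketch on synthetic data (array shapes are illustrative):

```python
import numpy as np

# Synthetic stand-ins for frameRgb (H x W x 3) and the resized disparity (H x W)
frame_rgb = np.arange(2 * 3 * 3, dtype=np.uint8).reshape(2, 3, 3)
frame_disp = np.array([[10, 200, 50],
                       [255, 0, 128]], dtype=np.uint8)

threshold = 100
mask = frame_disp >= threshold  # (H, W) boolean mask

# mask[:, :, None] has shape (H, W, 1) and broadcasts across the 3 channels,
# so no np.repeat copy is needed
masked = np.where(mask[:, :, None], frame_rgb, 0)

# Same output as the explicit np.repeat version
mask3 = np.repeat(mask[:, :, np.newaxis], repeats=3, axis=2)
assert np.array_equal(masked, np.where(mask3, frame_rgb, np.zeros_like(frame_rgb)))
```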

I would like to execute this code directly on the camera itself, specifically on the OAK-D Pro PoE model. If feasible, I plan to connect up to 8 cameras to a single Raspberry Pi using this algorithm.

Is it possible to run this processing directly on the camera?

Additionally, I've noticed occasional issues with frame synchronization, leading to the erroneous elimination of pixels. How can I ensure that frameDisp and frameRgb are perfectly synchronized to achieve flawless results?

Thank you for your assistance.

    Hi @Alberto
    It's possible, I think, but it should really be done on the RPi, since the OAK's CPU is pretty weak and should only be used for very simple tasks.

    Alberto: "Additionally, I've noticed occasional issues with frame synchronization, leading to the erroneous elimination of pixels. How can I ensure that frameDisp and frameRgb are perfectly synchronized to achieve flawless results?"

    https://docs.luxonis.com/projects/api/en/latest/components/nodes/sync_node/
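For reference, the linked Sync node pairs messages by timestamp on-device. A sketch of how it might be wired for the RGB + disparity case above, assuming depthai >= 2.24 and a connected OAK device (stream names like "rgb"/"disp" are illustrative):

```python
# Pipeline-definition sketch; requires an attached OAK device to actually run
from datetime import timedelta
import depthai as dai

pipeline = dai.Pipeline()

camRgb = pipeline.create(dai.node.ColorCamera)
monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)
sync = pipeline.create(dai.node.Sync)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("synced")

monoLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
monoRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)

# Only emit groups whose timestamps differ by at most 50 ms
sync.setSyncThreshold(timedelta(milliseconds=50))
camRgb.video.link(sync.inputs["rgb"])
stereo.disparity.link(sync.inputs["disp"])
sync.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("synced", maxSize=4, blocking=False)
    while True:
        msgGrp = q.get()  # dai.MessageGroup with one message per input
        for name, msg in msgGrp:
            frame = msg.getCvFrame()
            # frames in the same group come from (near-)simultaneous captures
```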

    Thanks,
    Jaka

    Hi @jakaskerl ,

    Thank you for responding so quickly. Based on your experience with OAK, do you think what I described in my initial query would be feasible on the OAK CPU if I lower the frame rate to 5 FPS? How could I try it, i.e. how can I run the above algorithm on the camera's own CPU?

    Hi @Alberto
    Likely not, since the code does a lot of copying of large arrays, which that CPU is terrible at.
    You should be able to attempt it with the Script node, but keep in mind you can't use numpy or any fancy libraries there.
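To make the constraint concrete: without numpy, the thresholding becomes a pure-Python per-pixel loop over raw frame bytes. A host-runnable sketch of that work, with small synthetic buffers standing in for real frames:

```python
# Illustrative pure-Python masking, the kind of per-pixel loop the Script
# node's no-numpy restriction forces; data here is synthetic
width, height = 4, 2
disp = bytes([10, 200, 50, 255, 0, 128, 99, 100])  # 1 byte per pixel
rgb = bytes(range(width * height * 3))             # 3 bytes per pixel
threshold = 100

masked = bytearray(len(rgb))  # starts zeroed
for i in range(width * height):
    if disp[i] >= threshold:
        masked[3 * i : 3 * i + 3] = rgb[3 * i : 3 * i + 3]
# Pixels below threshold stay black; an O(W*H) interpreted loop like this is
# what makes full-resolution masking impractical on the OAK's embedded CPU
```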

    Thanks,
    Jaka

    7 days later

    Hi @jakaskerl ,

    I receive this error when using synchronized nodes: module 'depthai.node' has no attribute 'Sync', which leads me to believe that the library has changed and that the Sync functionality may no longer exist.

    Solved: I updated to depthai 2.24.0.0, and dai.node.Sync is now available.