Hi all,

I just purchased an OAK-D Pro and I really like it. Object detection and neural networks with a single RGB frame as input work like a charm.

Now I ran into this problem: I want to implement a neural network to estimate the heart rate of a person. For this, the networks usually need the last 128-180 frames of a webcam as input. Can you give me some advice on how I can buffer or save 180 consecutive frames before piping them to the neural network node? The closest thing I found on the forum was the 3-image concat example.

Best,

Khan

    KhanCh This example might help:

    https://docs.luxonis.com/projects/api/en/latest/samples/MobileNet/video_mobilenet/#video-mobilenetssd

    and I just did a little test (I thought it was an interesting thing to practice, since I am learning Python). Maybe this will be useful? (Probably not, but who knows):

    from depthai_sdk import OakCamera
    from depthai_sdk.classes import FramePacket
    import threading
    import time
    
    frameStore = []
    framesWanted = 180
    frameIter = 0
    
    with OakCamera() as oak:
        color = oak.create_camera('color')
        color.set_num_frames_pool(framesWanted)
    
        def checkFrames():
            # Watches for a full buffer, hands it off, then resets.
            global frameIter, frameStore
            while True:
                if frameIter >= framesWanted:
                    print(f"collected {len(frameStore)} frames")
                    frameStore = []  # drop the old frames so memory doesn't grow forever
                    frameIter = 0
                time.sleep(0.01)     # avoid a busy-wait burning a CPU core
    
        def doFrameStore(packet: FramePacket):
            # Called once per frame arriving from the camera.
            global frameIter
            frameStore.append(packet.frame)
            frameIter += 1
    
        t = threading.Thread(target=checkFrames, daemon=True)
        t.start()
        oak.callback(color.out.main, callback=doFrameStore)
        oak.start(blocking=True)
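
    A simpler host-side variant (just a sketch using the standard library and numpy; the camera parts are left out, so names like `push_frame` are my own invention, not DepthAI API) would be a fixed-size ring buffer with collections.deque, which automatically drops the oldest frame once it holds 180:

```python
from collections import deque
import numpy as np

FRAMES_WANTED = 180  # number of consecutive frames the model needs

# A deque with maxlen acts as a ring buffer: once full, appending
# a new frame silently discards the oldest one.
frame_buffer = deque(maxlen=FRAMES_WANTED)

def push_frame(frame):
    """Would be called from the OakCamera callback with packet.frame."""
    frame_buffer.append(frame)

def get_clip():
    """Return the last 180 frames stacked into one array, or None if not full yet."""
    if len(frame_buffer) < FRAMES_WANTED:
        return None
    return np.stack(frame_buffer)  # shape: (180, H, W, C)

# Quick demo with dummy frames instead of camera output:
for _ in range(200):
    push_frame(np.zeros((64, 64, 3), dtype=np.uint8))

clip = get_clip()
print(clip.shape)  # (180, 64, 64, 3)
```

    This avoids the thread and the manual reset: the buffer is always "the last 180 frames", and you can pull a clip out whenever you want to run inference.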

      Hi vital,

      thanks a lot for the advice. As far as I can see, the 180 image frames are stored locally on the host machine in the frameStore variable and not on the OakCamera, or am I missing something? So technically I always take the latest image frame and store it locally, but I never address the frames_pool?

      And is it possible to use the frames_pool as input for an NN on the OAK directly, without going through the host machine? Basically:

      On the OAK-D: buffer 180 frames in the frames_pool => use them as input for a custom blob model

      Best

        KhanCh It was just an experiment I did; I don't think it is particularly useful for your case, but I wanted to share it in case it was. I am not at all a DepthAI expert -- sorry if there was a misunderstanding.

          Hi vital ,

          no worries! For branching and transporting the data to the host system, that's good. I also think it should be possible to buffer all 180 frames on the host machine and then use them as input for the OAK camera again, as shown in https://docs.luxonis.com/projects/api/en/latest/samples/MobileNet/video_mobilenet/#video-mobilenetssd . So schematically you do:

          Oak-D => frames => buffer on host machine => processing on host machine => oak-d => nn.input => AI-processing.

          Basically, to use the host machine as external storage.
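
          If you go that route, the host-side step before sending the clip back to the device is packing the 180 buffered frames into whatever tensor layout your custom blob was exported with. A sketch of that packing in numpy (the (C, T, H, W) planar layout, 36x36 input size, and [0, 1] normalization here are assumptions; many rPPG models use something like this, but your blob may differ):

```python
import numpy as np

# Stand-in for 180 buffered BGR frames collected on the host.
frames = [np.random.randint(0, 256, (36, 36, 3), dtype=np.uint8)
          for _ in range(180)]

# Stack to (T, H, W, C), then move channels first to get a planar
# (C, T, H, W) layout; whether your blob wants this order (or a flat
# buffer, or a different normalization) depends entirely on how it
# was exported.
clip = np.stack(frames)                      # (180, 36, 36, 3)
tensor = clip.transpose(3, 0, 1, 2)          # (3, 180, 36, 36)
tensor = tensor.astype(np.float32) / 255.0   # typical [0, 1] normalization

print(tensor.shape)
```

          The resulting buffer would then be sent to the device over an XLinkIn queue and linked to the NN's input, as in the video_mobilenet example; the exact send call depends on your depthai version.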

          I just wondered why I couldn't find anything in the documentation about accessing image frames from the frames_pool? Does anyone know how to do that?

            vital / KhanCh , for the heart-rate estimation, perhaps you could reuse some of Cortic's logic? Demo here (at 4:14):

            Relevant code from their repo:

            Quickly glancing over the code, they run face detection on the OAK, then send frames to the host, where they run the heart-rate estimation.