jakaskerl Are you suggesting this as a way to post-process the data? As in, I can find out how many frames I should delete/shift based on the time difference?
Or is there a more systematic way to get the frames to match before saving them in the first place?

    Hi Tsjarly
    As I understand it, there is a problem with the recording portion: the timestamps of the saved RGB frames do not match the timestamps of the depth frames, despite recording the same number of frames?

    That could be solved by checking the timestamps and deleting the non-matching frames.
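    A minimal host-side sketch of that matching (a hypothetical helper, not part of the DepthAI API): it pairs each RGB timestamp with the nearest depth timestamp and drops any frame whose best match is further away than a tolerance.

```python
def match_frames(rgb_ts, depth_ts, tol_ms=15.0):
    """Return (rgb_index, depth_index) pairs whose timestamps differ by
    at most tol_ms. Both inputs are sorted lists of timestamps in ms."""
    pairs = []
    j = 0
    for i, t in enumerate(rgb_ts):
        # advance j while the next depth timestamp is at least as close
        while j + 1 < len(depth_ts) and abs(depth_ts[j + 1] - t) <= abs(depth_ts[j] - t):
            j += 1
        if abs(depth_ts[j] - t) <= tol_ms:
            pairs.append((i, j))
    return pairs
```

    Note the matching is greedy, so in principle one depth frame could be paired with two RGB frames; at roughly equal frame rates that is rare, but worth checking before trimming.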

    Thanks,
    Jaka

    yes, indeed!
    So, for example, if I record 200 frames of data, my video will be 200 frames long, and I will have 200 separate depth maps.
    If I deleted the mismatched depth frames, my video would be longer than the depth 'video', so I would also have to trim the video at the end if I wanted to keep an equal number of frames.

    If this is the way to do it, I will try to figure it out, but I was wondering whether there is already some function that syncs or aligns the queues more internally.

      Tsjarly
      I think the next DepthAI API version implements a new way of syncing (a sync node or something similar).

      Thanks,
      Jaka

      perfect! if you could add a link here to the documentation/example at the time it will be released, that would be amazing 🙂 thank you

      Hi @jakaskerl,

      I wasn't sure whether the new API has already been released or not.
      I tried implementing the timestamp approach, where it only starts saving the depth frames once the time difference between RGB and depth is at its lowest (i.e., before the time difference starts increasing again). This difference seems to be around 15 ms at most across runs. However, in the resulting video there still occasionally seems to be an offset of 1 or 2 frames, sometimes too late, sometimes too early.
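      For what it's worth, one way to quantify such a residual offset after recording (an illustrative helper, not part of the DepthAI API) is to search for the integer frame shift that minimizes the mean absolute timestamp difference between the two streams:

```python
def best_shift(rgb_ts, depth_ts, max_shift=5):
    """Return the integer frame shift of the depth stream, relative to
    the RGB stream, that minimizes the mean absolute timestamp
    difference. Timestamps are sorted lists in milliseconds."""
    def cost(shift):
        if shift >= 0:
            pairs = list(zip(rgb_ts, depth_ts[shift:]))
        else:
            pairs = list(zip(rgb_ts[-shift:], depth_ts))
        diffs = [abs(a - b) for a, b in pairs]
        return sum(diffs) / len(diffs) if diffs else float("inf")

    return min(range(-max_shift, max_shift + 1), key=cost)
```

      A nonzero result would confirm the 1-2 frame offset and tell you how many frames to drop from the start of one of the streams.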

      Anyhow, I was wondering whether the API is available yet, and otherwise whether it would be possible to combine these two example scripts: this one to sync camera streams and this one to sync different cameras.
      Would this theoretically be able to output 3 cameras, rgb and depth, (so 6 streams) all in sync?

        Hi Tsjarly
        API is out as of today: luxonis/depthai-python/releases/tag/v2.24.0.0

        Examples can be found under examples/sync.
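        As a rough sketch of what those examples do (untested here; it needs a connected OAK device, and the input names "rgb"/"depth" and the 50 ms threshold are just illustrative choices): RGB and depth are linked into a Sync node, which emits grouped messages whose timestamps fall within the threshold.

```python
import depthai as dai
from datetime import timedelta

pipeline = dai.Pipeline()

camRgb = pipeline.create(dai.node.ColorCamera)
monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)
sync = pipeline.create(dai.node.Sync)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("xout")

monoLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
monoRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)

# frames whose timestamps differ by less than this are grouped together
sync.setSyncThreshold(timedelta(milliseconds=50))

camRgb.video.link(sync.inputs["rgb"])
stereo.depth.link(sync.inputs["depth"])
sync.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("xout", maxSize=4, blocking=False)
    while True:
        grp = q.get()  # a MessageGroup: one message per linked input
        for name, msg in grp:
            print(name, msg.getTimestamp())
```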

        Tsjarly Would this theoretically be able to output 3 cameras, rgb and depth, (so 6 streams) all in sync?

        In theory, yes. Each device would sync its own cameras, and the host would then sync the devices.
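        The host-side part of that could be sketched roughly like this (an illustrative helper, not DepthAI API): given one already device-synced timestamp stream per device, greedily group one frame per device whenever all timestamps fall within a tolerance, otherwise drop the oldest frame and retry.

```python
def group_across_devices(streams, tol_ms=15.0):
    """streams: one sorted timestamp list (ms) per device.
    Returns tuples of one frame index per device whose timestamps
    all lie within tol_ms of each other."""
    if not streams:
        return []
    groups = []
    idx = [0] * len(streams)
    while all(i < len(s) for i, s in zip(idx, streams)):
        ts = [s[i] for i, s in zip(idx, streams)]
        if max(ts) - min(ts) <= tol_ms:
            groups.append(tuple(idx))
            idx = [i + 1 for i in idx]
        else:
            # drop the oldest frame and try again
            idx[ts.index(min(ts))] += 1
    return groups
```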

        Thanks,
        Jaka

        Perfect!
        So this one should solve the syncing between RGB and depth then.
        And is the older one still valid for syncing multiple cameras, or can this new sync node also be used to sync different cameras?

          Hi Tsjarly

          Tsjarly So this one should solve the syncing between RGB and depth then.

          Correct.

          Tsjarly And is the older one still valid for syncing multiple cameras, or can this new sync node also be used to sync different cameras?

          You will still have to use the old example. The current state of the Sync node does not allow cross-device syncing. It might be added in the future, though.

          Thanks,
          Jaka

            jakaskerl coolio! I will look into getting it to work next week 🙂 thanks again