zdwiel

  • Jun 4, 2022
  • Joined Jun 3, 2022
  • @zdwiel I am very interested in your results. What FOV value did you find? I need to know this for my own case, where I am using 4K resolution.

  • @zdwiel
    1) Software sync for the camera modules within one device (RGB, left, right) happens automatically. The mechanism is explained here:
    https://docs.luxonis.com/projects/hardware/en/latest/pages/guides/sync_frames.html?#software-soft-sync

    Software “soft” sync

    Through firmware sync, we’re monitoring for drift and aligning the capture timestamps of the cameras, which are taken at the MIPI Start-of-Frame event. The Left/Right global shutter cameras are driven by the same clock, started by a broadcast write on I2C, so no drift will happen over time, even when running freely without a sync. With the above functionality it would also be possible to configure FSIN as an output on one sensor and an input to the other sensor. The RGB rolling shutter sensor has a slight difference in clocking/frame time, so when we detect a small drift, we modify the frame time (number of lines) of the next frame by a small amount to compensate.

    All 3 cameras are soft-synced by default using the above method, as long as they are configured with the same FPS (default is 30).
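
    For reference, a minimal sketch of that default path, simply configuring the same FPS on the RGB and both mono cameras (the 30 FPS value and the variable names here are illustrative, not something prescribed by the post):

        import depthai as dai

        pipeline = dai.Pipeline()

        # RGB rolling-shutter camera
        cam_rgb = pipeline.create(dai.node.ColorCamera)
        cam_rgb.setBoardSocket(dai.CameraBoardSocket.RGB)
        cam_rgb.setFps(30)

        # Left/right global-shutter mono cameras
        mono_left = pipeline.create(dai.node.MonoCamera)
        mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
        mono_left.setFps(30)

        mono_right = pipeline.create(dai.node.MonoCamera)
        mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
        mono_right.setFps(30)

        # With matching FPS, the firmware soft-syncs all three captures
        # automatically; no extra host-side configuration is needed.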

    For multiple devices we currently don't have a way to do soft sync; host-multiple-OAK-sync.py just matches frames on the host as closely as possible based on their timestamps (a sketch of this host-side matching follows the list below).
    But we have been thinking about implementing a soft-sync mechanism for multiple devices connected to the same host: a time-sync mechanism already runs in the background, so all devices are aware of the host time. Two options from there:

    • each device would try to align the capture times of its RGB/left/right cameras to host-time instants that are multiples of the frame period derived from the configured FPS. For example, at 20 FPS the period is 50 ms, and the devices would try to align captures at host timestamps like 1000.20 s, 1000.25 s, 1000.30 s, ...
    • the host could monitor the frame timestamps from each device and issue frame-time adjustment commands (+/- a small time interval) for cameras that are drifting. Each adjustment would be applied once: one frame-time is corrected, then the camera returns to its configured frame-time based on the FPS.
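
    Until such a mechanism exists, host-side matching is the fallback. A minimal sketch of pairing frames from two devices by timestamp, in the spirit of host-multiple-OAK-sync.py (the deque buffers, 5 ms tolerance, and function name are illustrative assumptions, not the actual script):

        from collections import deque

        def try_pair(buf_a, buf_b, thresh_ms=5.0):
            """Pop and return the oldest pair of ImgFrames (one from each
            device) whose timestamps differ by less than thresh_ms, or None.

            getTimestamp() returns host-synced time, so timestamps from
            different devices are comparable thanks to the background
            time-sync mechanism mentioned above.
            """
            while buf_a and buf_b:
                a, b = buf_a[0], buf_b[0]
                diff_ms = (a.getTimestamp() - b.getTimestamp()).total_seconds() * 1000
                if abs(diff_ms) < thresh_ms:
                    return buf_a.popleft(), buf_b.popleft()
                # Discard whichever frame is older and keep searching
                if diff_ms < 0:
                    buf_a.popleft()
                else:
                    buf_b.popleft()
            return None

        # Usage: append frames from each device's output queue into the
        # buffers, then call try_pair() after every newly received frame.
        buf_a, buf_b = deque(), deque()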

    2) As you said, capturing a stopwatch shown on a monitor isn't a good way to measure how tight the sync is; it was added just to show that the captured frames are actually in sync and not many frames apart. For each camera we capture a timestamp on the device and attach it as metadata to the ImgFrame object. You can check the timestamp diff between cameras as shown here (a small host-side sketch that prints such diffs follows the table below):
    https://github.com/luxonis/depthai-experiments/blob/master/gen2-syncing/README.md#output-with-logging-enabled

       Seq  Left_tstamp  RGB-Left  Right-Left  Dropped
       num    [seconds]  diff[ms]    diff[ms]  delta
         0     0.055592    -0.785       0.017
         1     0.088907    -0.771       0.011
         2     0.122222    -0.758       0.009
         3     0.155537    -0.745       0.010
         4     0.188852    -0.340       0.011
         5     0.222167    -0.326       0.011
         6     0.255482     0.063       0.010
         7     0.288796     0.078       0.010
         8     0.322111     0.257       0.010
         9     0.355426     0.270       0.009
        10     0.388741     0.246       0.010
        11     0.422056     0.260       0.011
        12     0.455371     0.146       0.009
        13     0.488686     0.160       0.009
        14     0.522000     0.046       0.010
        15     0.555315     0.061       0.010
        16     0.588644    -0.015       0.010
        17     0.621945     0.020       0.011
        18     0.655267    -0.011       0.008
        19     0.688575     0.019       0.010
        20     0.721890     0.018       0.009
        ...
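
    For completeness, a minimal sketch that prints per-frame diffs from the device timestamps. It naively pairs frames via blocking reads on equal-FPS streams; the linked experiment matches frames more carefully, and the stream names here are illustrative:

        import depthai as dai

        pipeline = dai.Pipeline()
        cam_rgb = pipeline.create(dai.node.ColorCamera)
        cam_rgb.setFps(30)
        mono_left = pipeline.create(dai.node.MonoCamera)
        mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
        mono_left.setFps(30)
        mono_right = pipeline.create(dai.node.MonoCamera)
        mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
        mono_right.setFps(30)

        # One XLinkOut per camera so the host can read the ImgFrames
        for name, out in (("rgb", cam_rgb.isp), ("left", mono_left.out), ("right", mono_right.out)):
            xout = pipeline.create(dai.node.XLinkOut)
            xout.setStreamName(name)
            out.link(xout.input)

        with dai.Device(pipeline) as device:
            qs = {n: device.getOutputQueue(n, maxSize=4, blocking=True) for n in ("rgb", "left", "right")}
            while True:
                f = {n: q.get() for n, q in qs.items()}  # one frame per camera
                t_left = f["left"].getTimestamp()
                rgb_left = (f["rgb"].getTimestamp() - t_left).total_seconds() * 1000
                right_left = (f["right"].getTimestamp() - t_left).total_seconds() * 1000
                print(f"seq {f['left'].getSequenceNum():4d}"
                      f"  RGB-Left {rgb_left:+7.3f} ms  Right-Left {right_left:+7.3f} ms")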