Hey everyone, I'm experiencing significant jitter when capturing IMU data on my OAK-D.

I followed this GitHub issue in an attempt to resolve the problem: https://github.com/luxonis/depthai-python/issues/774. I've also commented on the issue.

  1. I ran the Python script provided by @Gomzinator (with default values - 200 Hz IMU) and noted that some deltas were as large as 8 ms (the jitter is definitely large).

  2. I went on to check and update the firmware on my OAK-D. The current firmware was 3.9.7, and it updated successfully to 3.9.9. Rerunning the script did not alleviate the issue.

  3. I also checked the DepthAI version installed on my system; it was 2.27, and I updated to version 2.28. Rerunning the script still did not alleviate the issue.

  4. Trying different USB ports on my system hasn't resulted in a change either.

  5. I noticed my RAM usage was unusually high, so I restarted my PC and reran the script ~ still no change.

  6. I decided to check out and install the development branch of DepthAI ~ still experiencing the issue.

Attached is a scatter plot showing my deltas:

    ShivaRamoudith
    Possible frequencies:
    Accelerometer: 15Hz, 31Hz, 62Hz, 125Hz, 250Hz, 500Hz
    Gyroscope: 25Hz, 33Hz, 50Hz, 100Hz, 200Hz, 400Hz

    Setting it to 200Hz will set the accelerometer to 250Hz.

    Thanks,
    Jaka

    Unfortunately, I had overlooked this difference between the stable frequencies, and it has been affecting my script.
    Thank you for bringing it to my attention!
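
    In code, that means picking a rate from each sensor's own list rather than one shared rate ~ a minimal sketch against the DepthAI v2 API, using the frequencies Jaka listed above:

    import depthai as dai

    pipeline = dai.Pipeline()
    imu = pipeline.create(dai.node.IMU)
    # Use rates from each sensor's supported list so the firmware doesn't remap them
    imu.enableIMUSensor(dai.IMUSensor.ACCELEROMETER_RAW, 125)  # 15/31/62/125/250/500 Hz
    imu.enableIMUSensor(dai.IMUSensor.GYROSCOPE_RAW, 100)      # 25/33/50/100/200/400 Hz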

    7 days later

    @jakaskerl
    I just need help confirming that everything is working correctly. I recorded accelerometer and gyroscope data separately and created graphs of the timestamp deltas.

    Attached are two graphs showing the timestamp deltas for the accelerometer (125 Hz) and gyroscope (100 Hz) respectively. Data was captured for 90 seconds for each sensor.

    I do notice that there are some significant spikes with the deltas for the accelerometer.

    Accelerometer: [timestamp-delta graph attached]

    Gyroscope: [timestamp-delta graph attached]
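
    For reference, a minimal sketch (assuming the DepthAI v2 API, with imuQueue set up as in the scripts in this thread) of collecting per-report device timestamps for one sensor:

    deltas_ms = []
    last = None
    while len(deltas_ms) < 1000:
        imuData = imuQueue.get()
        for packet in imuData.packets:
            # Device-side timestamp of this accelerometer report
            t = packet.acceleroMeter.getTimestampDevice().total_seconds()
            if last is not None:
                deltas_ms.append((t - last) * 1000)
            last = t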

      ShivaRamoudith
      Please share an MRE (minimal reproducible example).

      The small jitter (0.35 std) is expected because of the BNO interrupt inconsistencies.

      Thanks,
      Jaka

      ShivaRamoudith
      Remove

      if cv2.waitKey(1) == ord('q'):
          break

      it's too slow.

      Thanks,
      Jaka

      I understand your logic in flagging this; however, I reran my script and there is no significant difference in the deviation.

        Hi ShivaRamoudith

        Try this:

        import cv2
        import depthai as dai
        import matplotlib.pyplot as plt
        import numpy as np
        
        # Create pipeline
        pipeline = dai.Pipeline()
        
        # Define sources and outputs
        imu = pipeline.create(dai.node.IMU)
        xlinkOut = pipeline.create(dai.node.XLinkOut)
        pipeline.setXLinkChunkSize(0)
        
        xlinkOut.setStreamName("imu")
        
        # Enable GYROSCOPE_RAW at a 100 Hz rate
        imu.enableIMUSensor(dai.IMUSensor.GYROSCOPE_RAW, 100)
        
        # Set batch report threshold and max batch reports
        imu.setBatchReportThreshold(1)
        imu.setMaxBatchReports(10)
        
        # Link plugins IMU -> XLINK
        imu.out.link(xlinkOut.input)
        
        # Initialize list to store time deltas
        time_deltas = []
        
        # Pipeline is defined, now we can connect to the device
        with dai.Device(pipeline) as device:
        
            # Output queue for imu bulk packets
            imuQueue = device.getOutputQueue(name="imu", maxSize=50, blocking=False)  # Reduce maxSize for more frequent updates
            lastTimestamp = None
        
            while len(time_deltas) < 1000:
                imuData: dai.IMUData = imuQueue.get()  # blocking call, will wait until new data has arrived
                currentTimestamp = imuData.getTimestamp().total_seconds()
            
                if lastTimestamp is not None:
                    delta = (currentTimestamp - lastTimestamp)  # Calculate time delta
                    time_deltas.append(delta)
        
                lastTimestamp = currentTimestamp

        # Calculate mean and standard deviation
        time_deltas_ms = [x * 1000 for x in time_deltas]
        mean_delta = np.mean(time_deltas_ms)
        std_delta = np.std(time_deltas_ms)
        
        # Create scatter plot
        plt.figure(figsize=(10, 6))
        plt.scatter(range(len(time_deltas_ms)), time_deltas_ms, c='blue', label='Time Deltas', s=10)
        plt.axhline(mean_delta, color='red', linestyle='--', label=f'Mean = {mean_delta:.2f} ms')
        plt.text(0, 0, f"Mean: {mean_delta:.2f} ms\nStd: {std_delta:.2f} ms", fontsize=12, bbox=dict(facecolor='white', alpha=0.5))
        plt.xlabel('Measurement Index')
        plt.ylabel('Time Delta (ms)')
        plt.title('Scatter Plot of Time Deltas Between Consecutive IMU Packets')
        plt.legend()
        plt.show()

        Thanks,
        Jaka

        Thank you for the script, Jaka.

        Unfortunately, as I increase the number of data points captured with your script, I'm getting pretty much the same results as with my own script.

          ShivaRamoudith
          Pretty much meaning what? The variance is expected due to how the IMU works. If you get timestamps that are multiples of the base time difference, it means the host-side loop is unable to keep up with the IMU frequency.

          Thanks,
          Jaka

          I thought that your code was going to produce something different, hence my question.

          I'll keep your point (on the multiples of base time difference) in mind when doing further investigation.
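
          As a quick check for that pattern, something like this (my own sketch; the helper name is made up) could estimate how many packets were skipped:

          import numpy as np

          def estimate_missed_packets(deltas_ms, rate_hz=100):
              period_ms = 1000.0 / rate_hz
              # A delta of ~2x or ~3x the nominal period suggests 1 or 2 missed packets
              multiples = np.round(np.asarray(deltas_ms) / period_ms)
              return int(np.sum(np.maximum(multiples - 1, 0)))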

          Thank you for assisting me Jaka! 🙂

          I’ve given more thought to what you mentioned, Jaka: if we’re experiencing large deltas, it suggests that the host side (PC) isn’t able to keep up with the IMU frequency.

          However, aren’t the timestamps generated directly on the OAK-D? When we call getDeviceTimestamp(), we should receive the timestamp corresponding to when a data packet was created on the OAK-D. This means we’re using timestamps from the OAK instead of the host, so we shouldn’t see such large deltas (in my case, it’s nearly double the average delta).
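
          For what it's worth, both clocks can be read side by side on the same report (a sketch; both getters exist on IMU reports in DepthAI v2):

          imuData = imuQueue.get()
          for packet in imuData.packets:
              host_s = packet.gyroscope.getTimestamp().total_seconds()          # synced to host clock
              device_s = packet.gyroscope.getTimestampDevice().total_seconds()  # raw device clock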

            ShivaRamoudith
            That would be correct, but IMU packets are discarded if they are not read in time. You can see this by increasing the queue size to e.g. 100; it will then take longer for the measurements to become unstable.

            Thanks,
            Jaka

            I tried increasing the queue size to the following values: [30, 60, 90, 200, 500, 1000, 5000, 10000], and I am still getting the same number of outliers (5 in total) every single time I run the script.
            I also notice that the points in time at which the outliers occur are pretty much the same.
            Note that I am also capturing a total of 10,000 data points, and I even set the blocking value to True to prevent data loss.

            @jakaskerl Could you provide insight as to why I am getting the same 5 outliers at almost the same point in time across all queue sizes?

              ShivaRamoudith
              Even with 10,000 data points I am not getting any issues. It's likely that some link in the pipeline is incapable of keeping up with the data rate, potentially dropping packets. You can try setting all queues to blocking and setMaxBatchReports to a higher number so the IMU itself doesn't drop packets if they are not read fast enough.
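
              Concretely, something like this (a sketch; the values are illustrative):

              imu.setBatchReportThreshold(1)
              imu.setMaxBatchReports(20)  # let the IMU node buffer more reports on-device
              # ...and on the host side, a blocking queue so packets are never dropped:
              imuQueue = device.getOutputQueue(name="imu", maxSize=100, blocking=True)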

              Thanks,
              Jaka