• Measure the Latency of a NeuralNetwork Node

Hi,

I have several questions regarding latency.

I have my custom model converted to a blob file.

  1. Is there any way to measure latency in a NeuralNetwork node?

  2. I also tried to measure it by using the input and output messages of a NeuralNetwork node, as you can see below.


    but the timestamps of the input, output, and passthrough messages returned by getTimestamp() are all the same. Why is that?
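
    Roughly, what I am doing looks like the sketch below (simplified: create_pipeline and the queue names are placeholders for my actual code, and I am assuming dai.Clock.now() can be compared with the message timestamps):

    import depthai as dai

    pipeline = create_pipeline()  # NeuralNetwork node, with its out and passthrough linked to XLinkOut nodes
    with dai.Device(pipeline) as device:
        q_out = device.getOutputQueue("nn_out", maxSize=4, blocking=False)
        q_pass = device.getOutputQueue("nn_passthrough", maxSize=4, blocking=False)
        while True:
            nn_msg = q_out.get()
            pass_msg = q_pass.get()
            # These two timestamps always come out identical, so the difference is ~0:
            print("out - passthrough:", nn_msg.getTimestamp() - pass_msg.getTimestamp())
            # Capture-to-host latency can still be estimated on the host side:
            print("to host:", (dai.Clock.now() - nn_msg.getTimestamp()).total_seconds() * 1000, "ms")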


    Thank you

    • Best Answer set by erik

    Hi Jungduri,
    If you enable trace debugging you can see the latency of operations; here is an example of the log output:

    [184430102127631200] [1.3] [3.884] [DetectionNetwork(1)] [trace] NeuralNetwork inference took '56.251972' ms.
    [184430102127631200] [1.3] [3.887] [system] [trace] EV:0,S:0,IDS:10,IDD:13,TSS:3,TSN:887495335
    [184430102127631200] [1.3] [3.887] [system] [trace] EV:0,S:1,IDS:10,IDD:13,TSS:3,TSN:887539441
    [184430102127631200] [1.3] [3.884] [system] [trace] EV:1,S:0,IDS:9,IDD:0,TSS:3,TSN:884797408
    [184430102127631200] [1.3] [3.884] [system] [trace] EV:1,S:1,IDS:9,IDD:0,TSS:3,TSN:884834724
    [184430102127631200] [1.3] [3.886] [DetectionNetwork(1)] [trace] DetectionParser took '0.027416' ms.
    [184430102127631200] [1.3] [3.887] [system] [trace] EV:1,S:1,IDS:13,IDD:0,TSS:3,TSN:887861777
    [184430102127631200] [1.3] [3.889] [system] [trace] EV:0,S:0,IDS:10,IDD:13,TSS:3,TSN:889062871
    [184430102127631200] [1.3] [3.889] [system] [trace] EV:0,S:1,IDS:10,IDD:13,TSS:3,TSN:889118768
    [184430102127631200] [1.3] [3.891] [system] [trace] EV:1,S:1,IDS:14,IDD:0,TSS:3,TSN:891560231
    [184430102127631200] [1.3] [3.886] [DetectionNetwork(1)] [trace] NeuralNetwork inference took '57.269817' ms.
    [184430102127631200] [1.3] [3.886] [system] [trace] EV:1,S:0,IDS:9,IDD:0,TSS:3,TSN:886867631
    [184430102127631200] [1.3] [3.886] [system] [trace] EV:1,S:1,IDS:9,IDD:0,TSS:3,TSN:886903565
    [184430102127631200] [1.3] [3.894] [system] [trace] EV:0,S:0,IDS:5,IDD:12,TSS:3,TSN:894595561
    [184430102127631200] [1.3] [3.894] [system] [trace] EV:0,S:1,IDS:5,IDD:12,TSS:3,TSN:894662758
    [184430102127631200] [1.3] [3.894] [system] [trace] EV:1,S:1,IDS:12,IDD:0,TSS:3,TSN:894774412
    [184430102127631200] [1.3] [3.894] [system] [trace] EV:1,S:0,IDS:13,IDD:0,TSS:3,TSN:894880010
    [184430102127631200] [1.3] [3.888] [DetectionNetwork(1)] [trace] DetectionParser took '0.026914' ms.

    So here we can see that the inference itself takes about 57 ms, and parsing of the results (as I'm using MobileNetDetectionNetwork) takes about 27 µs. I hope this helps!
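
    For reference, one way to turn this trace output on at runtime is roughly the sketch below (the DEPTHAI_LEVEL environment-variable approach mentioned further down in this thread works as well):

    import depthai as dai

    pipeline = dai.Pipeline()
    # ... build your nodes here ...

    with dai.Device(pipeline) as device:
        # Forward device log messages at TRACE verbosity and print them on the host
        device.setLogLevel(dai.LogLevel.TRACE)
        device.setLogOutputLevel(dai.LogLevel.TRACE)
        # ... normal queue handling ...
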
    Thanks, Erik

    
    import depthai as dai

    def main(cfg):
        # (Visualizer, create_pipeline and PostprocessSingle are defined elsewhere in my code)
        visualizer = Visualizer()
        pipeline = create_pipeline(cfg)
        postprocess = PostprocessSingle()

        device_info = dai.DeviceInfo("192.168.0.100")
        with dai.Device(pipeline, device_info) as device:
            # Try to get TRACE-level log output from the device
            device.setLogLevel(dai.LogLevel.TRACE)
            device.setLogOutputLevel(dai.LogLevel.TRACE)
            device.startPipeline()
            # q_rgb = device.getInputQueue("custom_image")
            q_rgb = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)
            q_pose2d_out = device.getOutputQueue(
                name="pose2d_out", maxSize=4, blocking=False
            )
            idx = 0
            while True:
                rgb = q_rgb.get().getCvFrame()
                pose_data = q_pose2d_out.get()
                ....

    It shows "[system] [trace]" and "[system] [info]" messages, but nothing about "dai.node.NeuralNetwork", I'm afraid 🙁
    Do I need to add more code when I create this node?

      Hi Jungduri,
      Maybe try enabling trace via the terminal; I use PowerShell, so I ran $env:DEPTHAI_LEVEL='trace' to get it working.
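
      Setting it from Python before the depthai import should be roughly equivalent, e.g. (a sketch, based on the debugging-level docs linked later in this thread):

      import os
      os.environ["DEPTHAI_LEVEL"] = "trace"  # has to be set before depthai is imported

      import depthai as dai
      # ... build the pipeline and run the script as usual ...
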
      Thanks, Erik

      4 months later

      Unfortunately, the given solution does not work for me.

      I'm still getting only the [system] [trace] information but nothing from [DetectionNetwork]. In my pipeline I am using NeuralNetwork, Script, and stereo-depth-related nodes.

      I've tried to activate trace mode via the terminal (PowerShell) and also via a Python script, as described in https://docs.luxonis.com/projects/api/en/latest/tutorials/debugging/#depthai-debugging-level

      Am I missing something here?

        Hi werkilan,
        What is the command you are using? Also, could you post the full log for roughly the first 15 s after boot? It might give us more information about what is going on.

        Thanks,
        Jaka

        Hi @jakaskerl

        Sorry for the late reply.

        I activated the trace log via the terminal ($env:DEPTHAI_LEVEL='trace') and executed my script for around 15 s. Since there are a lot of prints, I dumped them all into a .txt file, but it looks like I am not allowed to upload it. So here are a few parts of it:

        In this case the script basically uses the MobileNetSpatialDetectionNetwork node with spatial detection and sends the results via XLink to the host. I did not find anything similar to "DetectionNetwork" and its inference time, for example.
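
        Roughly, the relevant part of my pipeline looks like the sketch below (heavily simplified; the blob path and parameters are placeholders for my actual setup):

        import depthai as dai

        pipeline = dai.Pipeline()

        cam = pipeline.create(dai.node.ColorCamera)
        cam.setPreviewSize(300, 300)
        cam.setInterleaved(False)

        mono_left = pipeline.create(dai.node.MonoCamera)
        mono_right = pipeline.create(dai.node.MonoCamera)
        mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
        mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

        stereo = pipeline.create(dai.node.StereoDepth)
        mono_left.out.link(stereo.left)
        mono_right.out.link(stereo.right)

        nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
        nn.setBlobPath("model.blob")
        nn.setConfidenceThreshold(0.5)
        cam.preview.link(nn.input)
        stereo.depth.link(nn.inputDepth)

        xout = pipeline.create(dai.node.XLinkOut)
        xout.setStreamName("detections")
        nn.out.link(xout.input)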

        It would be awesome if we could get some tool for debugging the pipeline, the time taken by each node, etc.

        Thanks,
        werkilan

          Hi werkilan,
          We do have the Pipeline Graph tool, which shows the movement of messages; it does not yet have processing-time information integrated (that can be seen by setting the debug level to trace).