MobileNetDetection Node synch question
Sorry about that... here is the pipeline setup and the threads used. The problem with detections and passthroughs not appearing to be synchronized happens with both MobileNets.
pipeline = depthai.Pipeline()
cam_rgb = pipeline.create(depthai.node.ColorCamera) # create color camera object
cam_rgb.setPreviewSize(576, 576) # set camera preview size
cam_rgb.setInterleaved(False)
cam_rgb.initialControl.setManualFocus(110)
det_nn = pipeline.createMobileNetDetectionNetwork() # create tattoo detection mobilenet network
det_nn.setBlobPath("C:\\Luxonis\\DETECT_BLOBS\\Detect_2_17_2022.blob") # configure path to blob
det_nn.setConfidenceThreshold(0.5) # set confidence threshold
det_nn.input.setQueueSize(1)
det_nn.input.setBlocking(False)
manipRgb = pipeline.createImageManip()
rgbRr = depthai.RotatedRect()
rgbRr.center.x, rgbRr.center.y = cam_rgb.getPreviewWidth() // 2, cam_rgb.getPreviewHeight() // 2
rgbRr.size.width, rgbRr.size.height = cam_rgb.getPreviewHeight(), cam_rgb.getPreviewWidth()
rgbRr.angle = 90
manipRgb.initialConfig.setCropRotatedRect(rgbRr, False)
cam_rgb.preview.link(manipRgb.inputImage)
manip = pipeline.createImageManip()
manip.initialConfig.setResize(300, 300)
manip.initialConfig.setFrameType(depthai.RawImgFrame.Type.RGB888p)
manipRgb.out.link(manip.inputImage)
manip.out.link(det_nn.input)
cam_xout = pipeline.createXLinkOut()
cam_xout.setStreamName("cam_out")
manipRgb.out.link(cam_xout.input)
rec_nn = pipeline.createMobileNetDetectionNetwork() # create tattoo ocr mobilenet network
rec_nn.setBlobPath("C:\\Luxonis\\READ_BLOBS\\read_2_16_2022.blob") # configure path to blob
rec_nn.setConfidenceThreshold(0.4) # set confidence threshold
rec_nn.input.setQueueSize(1)
rec_nn.input.setBlocking(False)
rec_xin = pipeline.createXLinkIn()
rec_xin.setStreamName("rec_in")
rec_xin.out.link(rec_nn.input)
det_nn_xout = pipeline.createXLinkOut()
det_nn_xout.setStreamName("det_nn")
det_nn.out.link(det_nn_xout.input)
det_pass = pipeline.createXLinkOut()
det_pass.setStreamName("det_pass")
det_nn.passthrough.link(det_pass.input)
rec_xout = pipeline.createXLinkOut()
rec_xout.setStreamName("rec_nn")
rec_nn.out.link(rec_xout.input)
rec_pass = pipeline.createXLinkOut()
rec_pass.setStreamName("rec_pass")
rec_nn.passthrough.link(rec_pass.input)
def detect_thread(det_queue, det_pass, rec_queue):
    global tattoo_detections, tat_last_seq, tat_last_img
    while running:
        try:
            in_det = det_queue.get().detections
            in_pass = det_pass.get()
            orig_frame = frame_seq_map.get(in_pass.getSequenceNum(), None)
            tat_last_img = orig_frame
            if orig_frame is None:
                continue
            tat_last_seq = in_pass.getSequenceNum()
            tattoo_detections = in_det
            for detection in tattoo_detections:
                bbox = frameNorm(orig_frame, (detection.xmin, detection.ymin, detection.xmax, detection.ymax))
                cropped_frame = orig_frame[bbox[1] - offset:bbox[3] + offset, bbox[0] - offset:bbox[2] + offset]
                shape = cropped_frame.shape
                if shape[0] > 0 and shape[1] > 0:
                    tstamp = time.monotonic()
                    img = depthai.ImgFrame()
                    img.setTimestamp(tstamp)
                    img.setType(depthai.RawImgFrame.Type.BGR888p)
                    img.setData(to_planar(cropped_frame, (300, 300)))
                    img.setWidth(300)
                    img.setHeight(300)
                    rec_queue.send(img)
            fps.tick('detect')
        except RuntimeError:
            continue
def rec_thread(q_rec, q_pass):
    global rec_results, decoded_text
    while running:
        try:
            # Get detections from queue of cropped frames from tattoo detection nn
            rec_data = q_rec.get().detections
            # Get the passthrough message once, then read both the frame and the
            # sequence number from it (calling q_pass.get() twice would consume
            # two different messages and break the frame/sequence pairing)
            in_pass = q_pass.get()
            rec_frame = in_pass.getCvFrame()
            seq = in_pass.getSequenceNum()
            char_detections = [detection for detection in rec_data]
        except RuntimeError:
            continue
        decoded_text = ''
        # Draw each character detection with its label and confidence on the frame
        for detection in rec_data:
            bbox = frameNorm(rec_frame, (detection.xmin, detection.ymin, detection.xmax, detection.ymax))
            cv2.rectangle(rec_frame, (bbox[0], bbox[1]), (bbox[2], bbox[3]), (205, 0, 0), 2)
            cv2.putText(rec_frame, '{} ({}%)'.format(labels[detection.label], int(detection.confidence * 100)), (bbox[0] - 10, bbox[1] - 20), cv2.FONT_HERSHEY_TRIPLEX, 0.4, (205, 0, 0))
        # Create result image to stack
        rec_results = [(cv2.resize(rec_frame, (300, 300)), decoded_text)] + rec_results[:9]
        fps.tick('OCR')
with depthai.Device(pipeline) as device:
    cam_out = device.getOutputQueue("cam_out", 1, True)
    rec_in = device.getInputQueue("rec_in")
    det_nn = device.getOutputQueue("det_nn", 1, False)
    det_pass = device.getOutputQueue("det_pass", 1, False)
    rec_nn = device.getOutputQueue("rec_nn", 1, False)
    rec_pass = device.getOutputQueue("rec_pass", 1, False)
    det_t = threading.Thread(target=detect_thread, args=(det_nn, det_pass, rec_in))
    det_t.start()
    rec_t = threading.Thread(target=rec_thread, args=(rec_nn, rec_pass))
    rec_t.start()
Hello TylerD, from my understanding of the code, you first run object detection (MobileNet), then crop the image on the host, send the crop back to the device, and run another object detection model on it. Instead of sending images/NN results back to the host for cropping, I would suggest using a Script node (to generate the ImageManipConfigs used for cropping); see the demo here.
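The core of that demo approach, as a minimal sketch (the stream and variable names below are illustrative, not taken from your pipeline):

script = pipeline.create(depthai.node.Script)
det_nn.out.link(script.inputs['dets'])        # detections from the first NN
cam_rgb.preview.link(script.inputs['frame'])  # full frames to crop from
script.setScript("""
while True:
    frame = node.io['frame'].get()
    dets = node.io['dets'].get().detections
    for det in dets:
        cfg = ImageManipConfig()
        cfg.setCropRect(det.xmin, det.ymin, det.xmax, det.ymax)
        cfg.setResize(300, 300)
        node.io['cfg_out'].send(cfg)
        node.io['img_out'].send(frame)
""")
crop = pipeline.create(depthai.node.ImageManip)
crop.inputConfig.setWaitForMessage(True)
script.outputs['cfg_out'].link(crop.inputConfig)
script.outputs['img_out'].link(crop.inputImage)
crop.out.link(rec_nn.input)  # cropped ROIs feed the second NN directly on-device

This keeps the whole detect-crop-recognize loop on the device, so the host round-trip (and its latency) disappears.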
Thanks, Erik
Understood, I'll give that a try.
When using the Script node, it freezes. I used the existing warn diagnostic messages from the demo. The following is the output from the warn diagnostics and the error message, in the order they occurred.
[Script(4)] [warning] Detection rect: 0.34814453125, 0.529296875, 0.45703125, 0.611328125
[Script(4)] [warning] 1 from nn_in: 0.34814453125, 0.529296875, 0.45703125, 0.611328125
[ImageManip(5)] [error] Processing failed, potentially unsupported config.
cam_rgb = pipeline.create(depthai.node.ColorCamera)
cam_rgb.setPreviewSize(300, 300)
cam_rgb.setInterleaved(False)
det_nn = pipeline.createMobileNetDetectionNetwork()
det_nn.setBlobPath("C:\\Luxonis\\DETECT_BLOBS\\Detect_2_17_2022.blob")
det_nn.setConfidenceThreshold(0.5)
det_nn.input.setQueueSize(1)
det_nn.input.setBlocking(False)
cam_rgb.preview.link(det_nn.input)
image_manip_script = pipeline.create(depthai.node.Script)
det_nn.out.link(image_manip_script.inputs['nn_in'])
cam_rgb.preview.link(image_manip_script.inputs['frame'])
image_manip_script.setScript("""
import time
def limit_roi(det):
    if det.xmin <= 0: det.xmin = 0.001
    if det.ymin <= 0: det.ymin = 0.001
    if det.xmax >= 1: det.xmax = 0.999
    if det.ymax >= 1: det.ymax = 0.999
while True:
    frame = node.io['frame'].get()
    tat_dets = node.io['nn_in'].get().detections
    node.warn(f"Tats detected: {len(tat_dets)}")
    for det in tat_dets:
        limit_roi(det)
        node.warn(f"Detection rect: {det.xmin}, {det.ymin}, {det.xmax}, {det.ymax}")
        cfg = ImageManipConfig()
        cfg.setCropRect(det.xmin, det.ymin, det.xmax, det.ymax)
        cfg.setResize(300, 300)
        cfg.setKeepAspectRatio(False)
        node.io['manip_cfg'].send(cfg)
        node.io['manip_img'].send(frame)
        node.warn(f"1 from nn_in: {det.xmin}, {det.ymin}, {det.xmax}, {det.ymax}")
""")
manip_crop = pipeline.create(depthai.node.ImageManip)
image_manip_script.outputs['manip_img'].link(manip_crop.inputImage)
image_manip_script.outputs['manip_cfg'].link(manip_crop.inputConfig)
manip_crop.initialConfig.setResize(300, 300)
manip_crop.inputConfig.setWaitForMessage(True)
ocr_nn = pipeline.createMobileNetDetectionNetwork()
ocr_nn.setBlobPath("C:\\Luxonis\\READ_BLOBS\\read_2_16_2022.blob")
ocr_nn.setConfidenceThreshold(0.4)
ocr_nn.input.setQueueSize(1)
ocr_nn.input.setBlocking(False)
manip_crop.out.link(ocr_nn.input)
Could you try setting both Script inputs to blocking=False and queueSize=1?
image_manip_script.inputs['nn_in'].setBlocking(False)
image_manip_script.inputs['nn_in'].setQueueSize(1)
image_manip_script.inputs['frame'].setBlocking(False)
image_manip_script.inputs['frame'].setQueueSize(1)
Thanks, Erik
Sorry for the delay... I made the recommended changes. I also found I had to replace the .get() and .get().detections calls with tryGet() and check that the detections were not None before proceeding with the ImageManip. I still get the "Processing failed, potentially unsupported config" error, but it doesn't cause a problem anymore. Setting setKeepAspectRatio(True) stops the error from occurring, but has a negative effect on the second network's performance. The cropped image is being correctly sent to the second MobileNet and it is returning good results. I have a few more things to add, but will be doing more field testing soon. Thanks for the support (and the great product)!!!
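For reference, the tryGet() pattern described above might look roughly like this inside the Script node (a sketch of the described change, not the exact code used; limit_roi is the clamping helper from the script above):

while True:
    frame = node.io['frame'].tryGet()   # returns None when no message is waiting
    nn_msg = node.io['nn_in'].tryGet()
    if frame is None or nn_msg is None:
        continue                        # nothing to process yet; don't block
    for det in nn_msg.detections:
        limit_roi(det)
        cfg = ImageManipConfig()
        cfg.setCropRect(det.xmin, det.ymin, det.xmax, det.ymax)
        cfg.setResize(300, 300)
        cfg.setKeepAspectRatio(False)
        node.io['manip_cfg'].send(cfg)
        node.io['manip_img'].send(frame)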
Erik, why was this the recommendation here?
I am having an issue with my setup where a lot of frames are being dropped, I assume because some node takes a while and frames get replaced. So when I try to grab synced messages on the host, sometimes up to 4 sequence numbers are skipped.
I wanted to update to setQueueSize(10) and setBlocking(True) on all my nodes.
- Is there some queue size limit I should be considering on the device?
- It seems like NNs don't have setQueueSize; they have setNumPoolFrames. Is this equivalent?
- Do NN nodes have an equivalent of setBlocking?
Do all device nodes default to setQueueSize(3) and setBlocking(False)?
Hi AdamPolak,
- No, RAM is the limitation.
- No; the pool belongs to the output, while the queue belongs to the input. See the docs here: https://docs.luxonis.com/projects/api/en/latest/components/device/
- Yes:
nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.input.setBlocking(False)
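To make the output/input distinction concrete, a minimal sketch (the pool size and blob path are placeholders, not recommendations):

import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)

nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath("model.blob")   # placeholder path
nn.setNumPoolFrames(4)         # output side: how many frames the node's output pool holds

nn.input.setQueueSize(1)       # input side: how many messages may wait on this input
nn.input.setBlocking(False)    # input side: drop the oldest message instead of stalling the sender

cam.preview.link(nn.input)

So setNumPoolFrames is not an equivalent of setQueueSize; queue size and blocking behaviour are configured on the receiving node's input.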
Hi AdamPolak
AdamPolak Do all device nodes default to setQueueSize(3) and setBlocking(False)?
https://docs.luxonis.com/projects/api/en/latest/components/device/#blocking-behaviour
Though I have found that on some nodes (e.g. ObjectTracker) the default is non-blocking.
Thanks,
Jaka