Sorry for the delay. We are working on producing the MRE over the next few days and will send it over. We consider our code proprietary, so I will send it via email.
Aawetzel
- Aug 29, 2024
Looks like there were no crash dumps saved, unfortunately.
@erik We have not. I'll see if we can check that out today. Here is the actual output from our logs, in case it helps:
Aug 5 09:52:09 lpunit0006 Application.out[29929]: [184430104139E9F400] [10.10.1.8] [594.596] [system] [critical] Fatal error. Please report to developers. Log: 'stereoShave' '487'
Aug 5 09:53:41 lpunit0006 Application.out[29929]: [184430104139E9F400] [10.10.1.8] [1722866021.841] [host] [warning] Monitor thread (device: 184430104139E9F400 [10.10.1.8]) - ping was missed, closing the device connection
Aug 5 09:53:43 lpunit0006 Application.out[29929]: Exception (Communication exception - possible device error/misconfiguration. Original message 'Couldn't read data from stream: 'cam' (X_LINK_ERROR)') in EOA Camera thread. Trying again...
Looks like the R4M2E3 boards were fine, but the R4M2E4 is the one with the problem. They were all running the same bootloader and depthai version. We just updated the bootloader as well and that seemed to diminish the issue but didn't fix it.
Also, with some testing we've found that it resets much more often with larger models than with smaller ones. With a nano detection model it was happening around once an hour, and much more frequently with a medium one.
We have been using the OAK-D Pro PoE in production systems for a bit over a year now, and recently some of the devices we've ordered have been dropping offline and then quickly reconnecting. We use the same pipeline and code to connect to these cameras as with two other OAK-D Pro PoEs in the same system, and this seems to be happening only with this new batch. Were any changes made to the firmware or hardware in the latest batches that would require an update on our side?
That makes a lot of sense and fixed it. I was unable to get the frame syncing to work, so I figured I'd get the simplest possible model working with two-stage object detection and then work my way back up.
I’ve been attempting to get this two-stage neural network model working, and I have been simplifying it, but nothing works. It appears to get 4 frames in and then hangs. I have tried the loopback approach and other sync approaches found in various online examples and experiments. Would you mind looking at the simplified pipeline I am running and letting me know if anything is broken, or if it is simply hanging because the second network is too slow?
import os

import cv2
import depthai as dai
import numpy as np


def frame_norm(img, bounding_box):
    # Map normalized [0, 1] bbox coordinates to pixel coordinates.
    norm_vals = np.full(len(bounding_box), img.shape[0])
    norm_vals[::2] = img.shape[1]
    return (np.clip(np.array(bounding_box), 0, 1) * norm_vals).astype(int)


PATH = os.path.abspath(os.getcwd())
anchors = [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0,
           116.0, 90.0, 156.0, 198.0, 373.0, 326.0]
masks = {"side40": [0, 1, 2], "side20": [3, 4, 5], "side10": [6, 7, 8]}

# Create Pipeline
pipeline = dai.Pipeline()

# Create Cam
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(1920, 1080)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
cam.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)
cam.setInterleaved(False)
cam.setFps(10)

# Image Manip
manip = pipeline.create(dai.node.ImageManip)
manip.initialConfig.setResize(320, 320)
manip.setMaxOutputFrameSize(320 * 320 * 3)
manip.initialConfig.setKeepAspectRatio(False)

# Create mouse Neural Network
mouse_nn = pipeline.create(dai.node.YoloDetectionNetwork)
mouse_nn.setBlob(os.path.join(PATH, 'mouse.blob'))
mouse_nn.setConfidenceThreshold(.7)
mouse_nn.setNumClasses(1)
mouse_nn.setCoordinateSize(4)
mouse_nn.setAnchors(anchors)
mouse_nn.setAnchorMasks(masks)

# Create scroll_wheel Neural Network
scroll_wheel_nn = pipeline.create(dai.node.YoloDetectionNetwork)
scroll_wheel_nn.setBlob(os.path.join(PATH, 'scroll.blob'))
scroll_wheel_nn.setConfidenceThreshold(.7)
scroll_wheel_nn.setNumClasses(1)
scroll_wheel_nn.setCoordinateSize(4)
scroll_wheel_nn.setAnchors(anchors)
scroll_wheel_nn.setAnchorMasks(masks)

# Create Script
script = pipeline.create(dai.node.Script)
script.setScript("""
def limit_roi(det):
    # Clamp the detection ROI so the crop stays inside the frame.
    if det.xmin <= 0: det.xmin = 0.001
    if det.ymin <= 0: det.ymin = 0.001
    if det.xmax >= 1: det.xmax = 0.999
    if det.ymax >= 1: det.ymax = 0.999

while True:
    mouse_det = node.io['mouse'].get().detections
    for det in mouse_det:
        limit_roi(det)
        cfg = ImageManipConfig()
        cfg.setCropRect(det.xmin, det.ymin, det.xmax, det.ymax)
        cfg.setResize(320, 320)
        cfg.setKeepAspectRatio(False)
        node.io['manip_cfg'].send(cfg)
""")

# Create scroll_wheel Manip
scroll_wheel_manip = pipeline.create(dai.node.ImageManip)
scroll_wheel_manip.initialConfig.setResize(320, 320)
scroll_wheel_manip.inputConfig.setWaitForMessage(True)

# XLinkOut streams
cam_x = pipeline.create(dai.node.XLinkOut)
cam_x.setStreamName('cam')
scroll_wheel_x = pipeline.create(dai.node.XLinkOut)
scroll_wheel_x.setStreamName('scroll_wheel')

# Link
cam.preview.link(cam_x.input)
scroll_wheel_nn.out.link(scroll_wheel_x.input)
cam.preview.link(manip.inputImage)
cam.preview.link(scroll_wheel_manip.inputImage)
manip.out.link(mouse_nn.input)  # manip to mouse_nn input
scroll_wheel_manip.out.link(scroll_wheel_nn.input)
mouse_nn.out.link(script.inputs['mouse'])
script.outputs['manip_cfg'].link(scroll_wheel_manip.inputConfig)  # sends manip_cfg to scroll_wheel manip

with dai.Device(pipeline) as device:
    cam_q = device.getOutputQueue('cam', maxSize=4, blocking=False)
    scroll_wheel_q = device.getOutputQueue('scroll_wheel', maxSize=4, blocking=False)
    mouse_color = (0, 255, 0)
    scroll_wheel_color = (255, 0, 255)
    while True:
        frame = cam_q.get().getCvFrame()
        scroll_wheel_det = scroll_wheel_q.get().detections
        for detection in scroll_wheel_det:
            bbox = frame_norm(frame, (detection.xmin, detection.ymin, detection.xmax, detection.ymax))
            print(f'BBOX: {bbox}')

I am currently working on deploying a simple object detection model trained with YOLOv7. I exported the model as .onnx and then converted it to .blob. While running the pipeline on our OAK-D Pro Wide, the detections from the neural network queue do not line up with the expected values: I am getting confidences up to 3.3, labels of 0 and 1 when there is only one label, and negative x and y values.
Example Output:
Label: 0
Confidence: 1.1123046875
Xmin: -8.9375
Ymin: 0.10968017578125
Xmax: -1.23828125
Ymax: 1.1015625

Makes sense. The current setup is a 100 Mbps PoE switch with only the computer and camera connected. At 100 Mbps, is that delay expected?
I have the Pro W and the Pro W PoE. Using the available demo, the Type-C camera works flawlessly and shows video in real time. The PoE model, on the other hand, has a 1-2 second delay and a frame rate of around 13 fps.
Is this a common/known issue or could this be an issue with my hardware setup?
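For context, a rough back-of-envelope estimate (a sketch, assuming the demo streams uncompressed 1080p RGB preview frames, 3 bytes per pixel) shows why a 100 Mbps link would introduce delay:

```python
# Rough bandwidth estimate for streaming raw 1080p RGB frames over Ethernet.
# Assumes uncompressed preview frames at 3 bytes per pixel.
width, height, bytes_per_pixel = 1920, 1080, 3
fps = 13  # the frame rate observed on the PoE model

bytes_per_frame = width * height * bytes_per_pixel  # 6,220,800 bytes per frame
mbps = bytes_per_frame * fps * 8 / 1e6              # megabits per second

print(f"{mbps:.0f} Mbps required")  # ~647 Mbps, far beyond a 100 Mbps link
```

Even a couple of uncompressed frames per second would saturate a 100 Mbps link, so under these assumptions either a gigabit switch or an encoded stream (e.g. MJPEG/H.264 via a VideoEncoder node) is usually needed for smooth PoE video.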