Hi guys
We are trying to accomplish the following and are running into a couple of problems.
The problem we are trying to solve is as follows:
We only want to send frames from the camera to the host when something is detected by the MobileNet node. The decision of whether to send a frame or not has to happen wholly on the camera, since we will eventually be sending frames over SPI.
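(For reference, in the final version we expect to replace the XLinkOut in the code below with a SPIOut node; here is a minimal sketch of what we assume that wiring would look like, with a placeholder stream name and bus ID:)

spiOut = pipeline.create(dai.node.SPIOut)
spiOut.setStreamName("stillFrames")            # placeholder stream name
spiOut.setBusId(0)                             # SPI bus ID on the device - placeholder
script.outputs['stream'].link(spiOut.input)    # same script output that currently feeds the XLinkOut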
To date we have the following working:
- We are using the RGB cam and passing the preview stream to the MobileNet node
- Detections from the MobileNet node are being passed to a script node
- The script node iterates the detections and decides if something of interest was detected
If something of interest is detected, we would like the script node to then grab a still frame to send to the host (over SPI in the final version).
Our problems are as follows:
- We can't attach two pipelines to a single camera
- Trying to get a still from the pipeline that sends the preview stream to the MobileNet doesn't seem to return any data; the get call blocks and never returns
- Using the MobileNet's passthrough to pass the image frames to the script node causes the script to block when calling get on the detections. We don't understand why it locks up, so we have attached the code below.
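One thing we have not tried yet is explicitly configuring the queueing behaviour of the Script node inputs; we assume they default to blocking with a deeper queue. If that turns out to matter, we imagine the configuration would look roughly like this (untested on our side):

script.inputs['str'].setBlocking(False)     # don't stall the NN output if the script is busy
script.inputs['str'].setQueueSize(1)        # only keep the newest detections message
script.inputs['frames'].setBlocking(False)  # likewise for the passthrough frames
script.inputs['frames'].setQueueSize(1)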
So our questions are as follows:
- Is there a way to attach multiple pipelines to a single camera?
- Is there a way to send the preview stream to one node but allow another to take stills? (The sketch after these questions shows the direction we started exploring.)
- Is there another approach to this we should be looking at?
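For context, the direction we started exploring (still commented out in the attached code) is to have the script node drive the camera's still output via CameraControl, similar to the camera-control Script example in the depthai-python repo. This is an untested sketch, not something we have working:

camRgb.still.link(script.inputs['frames'])        # feed true stills (not the NN passthrough) to the script
script.outputs['ctrl'].link(camRgb.inputControl)  # let the script ask the camera for a still

script.setScript("""
labelMap = ["bike", "vehicle", "person"]
while True:
    dets = node.io['str'].get()
    if any(labelMap[d.label] == 'person' for d in dets.detections):
        ctrl = CameraControl()
        ctrl.setCaptureStill(True)        # request a single still frame
        node.io['ctrl'].send(ctrl)
        frame = node.io['frames'].get()   # wait for the requested still
        node.io['stream'].send(frame)     # forward to XLinkOut (SPIOut later)
""")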
Here is our code:
import depthai as dai
import time
from pathlib import Path
pipeline = dai.Pipeline()
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setPreviewSize(512, 512)
camRgb.setInterleaved(False)
camRgb.setFps(15)
# Define a neural network that will make predictions based on the source frames
nnPath = str(Path('C:/Users/MelikaSoleimaniAST/esp-idf/depthai-python-examples/depthai-python/examples/models/person-vehicle-bike-detection-crossroad-1016.blob').resolve())
nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setConfidenceThreshold(0.5)
nn.setBlobPath(nnPath)
nn.setNumInferenceThreads(2)
nn.input.setBlocking(False)
# Send camera feed to the model to create detections
camRgb.preview.link(nn.input)
#nn.out.link(nnOut.input)
#this should no longer be used - at the moment
#xIn = pipeline.create(dai.node.XLinkIn)
#xIn.setStreamName("str")
#Send output from detections to the input of the script node
script = pipeline.create(dai.node.Script)
nn.out.link(script.inputs['str'])
#This sends the NN passthrough frames (the same preview frames, not true stills) into the script node
#still_cam = pipeline.create(dai.node.ColorCamera)
#still_cam.still.link(script.inputs['frames'])
nn.passthrough.link(script.inputs['frames'])
#script.inputs['frames'].link(nn.passthrough)
script.setScript("""
import time
labelMap = ["bike", "vehicle", "person"]
#ctrl = CameraControl()
#ctrl.setCaptureStill(True)
#node.io['ctrl'].send(ctrl)
while True:
    node.warn('looping')
    data = node.io['str'].get()  # blocks until a detections message arrives
    node.warn('called get')
    if data is not None:
        node.warn('data is something')
        detections = data.detections
        for detection in detections:
            node.warn('we detected: ' + labelMap[detection.label])
            if labelMap[detection.label] == 'person':
                node.warn('1')
                frame = node.io['frames'].get()  # grab the matching passthrough frame
                node.warn('2')
                node.io['stream'].send(frame)
                node.warn('3')
    else:
        node.warn('Data is none')
    # Slow down so we don't hot loop - need to tune this for the detections
    time.sleep(0.1)
    #We don't need to update any of the controls?
    #node.io['ctrl'].send(ctrl)
""")
#Send still frames over xlink
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName('stream')
script.outputs['stream'].link(xout.input)
# Send instructions to camera from script object
#script.outputs['ctrl'].link(still_cam.inputControl)
with dai.Device(pipeline) as device:
    # Output queue used to receive the frames the script node forwards over XLink
    #qScript_out = device.getOutputQueue("ret_str")
    qStream = device.getOutputQueue("stream")
    #device.startPipeline()  # not needed - the pipeline starts when passed to the Device constructor
    while not device.isClosed():
        data = qStream.get()
        time.sleep(1)