erik Hello Erik, I hope you are doing great. Thanks for your help. I managed to work around the problem I was facing: for now I removed the normalization code from the model training and retrained my model, so I no longer need a separate neural network node for normalization.
I have one other problem, which I also asked about earlier in this thread. I have three patches extracted from the same camera frame using three ImageManip nodes, and I need to pass them through the same neural network one by one. You suggested using a Script node; could you please explain in a bit more detail how to do that? How can I use a Script node between the ImageManip nodes and the network node?
I have pasted my code below for your reference. At the moment I am only passing one image patch (the manip_time output) through the network. I have left the other two ImageManip nodes uncommented in the code so that you can see what I want to do. You might notice that those two nodes have no output connections; I added them only so you can understand what I am trying to achieve.
```python
import cv2
import depthai as dai
import numpy as np

# Normalized crop coordinates (xmin, ymin, xmax, ymax) for the three patches
time_bb_cord = [0.3617, 0.6175, 0.6414, 0.7887]
local_bb_cord = [0.2750, 0.28, 0.4365, 0.44375]
visit_bb_cord = [0.5846, 0.27625, 0.7435, 0.445]

def Manip_Frame(pipeline, region_bb_cord):
    # Crop the region of interest and resize it to the network's input size
    manip = pipeline.create(dai.node.ImageManip)
    manip.initialConfig.setCropRect(region_bb_cord[0], region_bb_cord[1],
                                    region_bb_cord[2], region_bb_cord[3])
    manip.setKeepAspectRatio(False)
    manip.initialConfig.setResize(100, 32)
    return manip

def NN_node(pipeline, path):
    nn = pipeline.create(dai.node.NeuralNetwork)
    nn.setBlobPath(path)
    return nn

pipeline = dai.Pipeline()
model_path = 'crnn_99_soft_no_norm.blob'

cam = pipeline.create(dai.node.MonoCamera)
cam.setBoardSocket(dai.CameraBoardSocket.RIGHT)
cam.setFps(6)
cam.setResolution(dai.MonoCameraProperties.SensorResolution.THE_800_P)

manip_time = Manip_Frame(pipeline, time_bb_cord)
cam.out.link(manip_time.inputImage)
manip_local = Manip_Frame(pipeline, local_bb_cord)
cam.out.link(manip_local.inputImage)
manip_visit = Manip_Frame(pipeline, visit_bb_cord)
cam.out.link(manip_visit.inputImage)

recognition_nn = NN_node(pipeline, model_path)
manip_time.out.link(recognition_nn.input)  # only the 'time' patch is linked for now

nn_Out = pipeline.create(dai.node.XLinkOut)
nn_Out.setStreamName("rec_time")
recognition_nn.out.link(nn_Out.input)

with dai.Device(pipeline) as device:
    nn_queue = device.getOutputQueue(name="rec_time", maxSize=4, blocking=False)
    while True:
        nn_out = nn_queue.get()
        if nn_out is not None:
            outname = nn_out.getAllLayerNames()[0]
            data = nn_out.getLayerFp16(outname)
            # Argmax over the 13 classes at each of the 24 timesteps
            raw_preds = []
            for i in range(24):
                log_probs = data[i*13:i*13+13]
                raw_preds.append(log_probs.index(max(log_probs)))
            # Collapse repeated labels, then drop the blank class (0)
            results = []
            previous = None
            for l in raw_preds:
                if l != previous:
                    results.append(l)
                previous = l
            results = [l for l in results if l != 0]
            print(results)
        else:
            break
```
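To show what I am hoping for, here is my rough guess at the Script node part based on the docs; the stream names `time`, `local`, `visit`, and `to_nn` are just placeholders I made up. Is this roughly what you meant? This fragment would replace my current `manip_time.out.link(recognition_nn.input)` line:

```python
# Rough sketch (my guess): the Script node pulls one frame from each
# ImageManip in a fixed order and forwards them all to the single
# NeuralNetwork input, one by one.
script = pipeline.create(dai.node.Script)
manip_time.out.link(script.inputs['time'])
manip_local.out.link(script.inputs['local'])
manip_visit.out.link(script.inputs['visit'])

script.setScript("""
while True:
    # Fixed order, so results should arrive on the host in the same order
    for name in ['time', 'local', 'visit']:
        frame = node.io[name].get()
        node.io['to_nn'].send(frame)
""")

script.outputs['to_nn'].link(recognition_nn.input)
```

If that is right, I guess on the host side I would just count results in groups of three to know which patch each one belongs to?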
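As an aside, the decoding at the end of my receive loop is just a greedy CTC decode (per-timestep argmax, collapse consecutive repeats, drop the blank class 0). Pulled out as a standalone function (naming is my own), it would be:

```python
def ctc_greedy_decode(data, num_classes=13, blank=0):
    """Greedy CTC decode of a flat list of per-timestep class scores.

    Takes the argmax class at each timestep, collapses consecutive
    repeated labels, and drops the blank class.
    """
    # Split the flat output into one score list per timestep
    steps = [data[i:i + num_classes] for i in range(0, len(data), num_classes)]
    raw_preds = [step.index(max(step)) for step in steps]
    results, previous = [], None
    for label in raw_preds:
        # Keep a label only when it differs from the previous one
        # and is not the blank class
        if label != previous and label != blank:
            results.append(label)
        previous = label
    return results
```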