• DepthAI-v2
  • Neural Network Node (multiple inputs to one node)

Hello msee19018 ,
What you need is to sync the frames coming from multiple inputs (the multiple ImageManip nodes tiling a frame) and then forward them. The Script node would read all frames and sync them based on their sequence numbers. Once it has all 3 subframes/tiles from the same frame, it forwards them to the NN node (3 Script outputs, since the NN node has 3 inputs). A frame-forwarding/demux example is here. An example of how to sync multiple frames based on sequence number can be found here.
Thanks, Erik
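A minimal sketch of the sync/demux idea described above (assumptions: the input names 'tile0'..'tile2' and output names 'out0'..'out2' are illustrative, the linking of the ImageManip and NeuralNetwork nodes is omitted, and this is not code from the linked examples):

import depthai as dai

pipeline = dai.Pipeline()
script = pipeline.create(dai.node.Script)
# In a full pipeline, each ImageManip output would be linked into
# script.inputs['tile0'..'tile2'], and script.outputs['out0'..'out2'] would be
# linked into the 3 inputs of the NeuralNetwork node.
script.setScript("""
msgs = {}  # sequence number -> {input name: frame}

while True:
    for i in range(3):
        name = 'tile' + str(i)
        frame = node.io[name].tryGet()
        if frame is None:
            continue
        seq = frame.getSequenceNum()
        if seq not in msgs:
            msgs[seq] = {}
        msgs[seq][name] = frame
        if len(msgs[seq]) == 3:  # all 3 tiles of the same frame have arrived
            synced = msgs.pop(seq)
            for j in range(3):
                node.io['out' + str(j)].send(synced['tile' + str(j)])
""")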

    erik Erik, I got everything from your last reply except the part where you said the NN can have 3 inputs. How can that be done? Doesn't the NeuralNetwork node only have one input, as described in the docs? I am confused here.

    • erik replied to this.

      Hello msee19018 , I apologize for the lacking docs, we will update them ASAP. You can have multiple inputs; please refer to this post for more information.
      Thanks, Erik
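A hedged sketch of what linking several inputs to a single NeuralNetwork node can look like (assumptions: nn.inputs is the named-input map referred to above, the blob path is a placeholder, and each input name must match an input tensor name of a model compiled with 3 inputs):

import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)

nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("multi_input_model.blob")  # placeholder for a 3-input blob

# One ImageManip per tile, each cropping a different region of the same frame.
crops = [(0.0, 0.0, 0.33, 1.0), (0.33, 0.0, 0.66, 1.0), (0.66, 0.0, 1.0, 1.0)]
for i, (xmin, ymin, xmax, ymax) in enumerate(crops):
    manip = pipeline.create(dai.node.ImageManip)
    manip.initialConfig.setCropRect(xmin, ymin, xmax, ymax)
    cam.preview.link(manip.inputImage)
    # nn.inputs is a map of named inputs; each name should match a model input tensor.
    manip.out.link(nn.inputs['tile' + str(i)])

In practice, as described in the first reply, a Script node would sit between the ImageManips and the NN to sync the three tiles by sequence number before forwarding them.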

        erik One more thing: I need a high-quality grayscale image. How can I convert the ColorCamera image to grayscale without using any neural network? Are there any functions inside the DepthAI API that can do this?

        • erik replied to this.

          erik Thanks for that suggestion. I have run into a new problem. I am trying to use a Script node for some calculations on the output of a NeuralNetwork node. The problem is that it runs correctly once and then gets stuck; maybe the pipeline freezes or something like that. I have been unable to solve it for some time now. While debugging I thought maybe some other node was causing the problem, so I removed the nodes one by one and checked again, but it still gets stuck. Finally I was left with just the camera node and the Script node, and even that gets stuck. I don't know of any debugging tool for DepthAI; please tell me if there is one, or how I can get this fixed. I have pasted my code below with the console output; please suggest a solution.

          import depthai as dai
          import numpy as np
          
          time_bb_cord=[0.3633,0.62,0.6297,0.79125]
          local_bb_cord=[0.275,0.28375,0.4367,0.45125]
          visit_bb_cord=[0.5867,0.27875,0.7468,0.44875]
          
          def Manip_Frame(pipeline,region_bb_cord):
          	manip_time = pipeline.create(dai.node.ImageManip)
          	manip_time.initialConfig.setCropRect(region_bb_cord[0],region_bb_cord[1],region_bb_cord[2],region_bb_cord[3])
          	manip_time.setKeepAspectRatio(False)
          	manip_time.initialConfig.setResize(100,32)
          	return manip_time
          def NN_node(pipeline,path):
          	nn = pipeline.create(dai.node.NeuralNetwork)
          	nn.setBlobPath(path)
          	return nn
          
          pipeline = dai.Pipeline()
          model_path = 'crnn_99_soft_no_norm.blob'
          
          cam = pipeline.create(dai.node.MonoCamera)
          cam.setBoardSocket(dai.CameraBoardSocket.RIGHT)
          cam.setFps(6)
          cam.setResolution(dai.MonoCameraProperties.SensorResolution.THE_800_P)
          
          manip_time=Manip_Frame(pipeline,time_bb_cord)
          cam.out.link(manip_time.inputImage)
          
          recognition_time=NN_node(pipeline,model_path)
          manip_time.out.link(recognition_time.input)
          
          
          script=pipeline.create(dai.node.Script)
          script.inputs['time_pred'].setBlocking(False)
          script.inputs['time_pred'].setQueueSize(1)
          recognition_time.out.link(script.inputs['time_pred'])
          script.setScript("""
          import marshal
          
          time_=node.io['time_pred'].get()
          outname=time_.getAllLayerNames()[0]
          data=time_.getLayerFp16(outname)
          raw_preds=[]
          for i in range(24):
          	log_probs=data[i*13:i*13+13]
          	raw_preds.append(log_probs.index(max(log_probs)))
          	results=[]
          previous=None
          for l in raw_preds:
          	if l != previous:
          		results.append(l)
          		previous = l
          results = [l for l in results if l != 0]
          
          x_serial = marshal.dumps(results)
          b = Buffer(len(x_serial))
          b.setData(x_serial)
          node.io['f_time'].send(b)
          
          # node.warn(time)
          	""")
          
          nn_Out = pipeline.create(dai.node.XLinkOut)
          nn_Out.setStreamName("rec_time")
          script.outputs['f_time'].link(nn_Out.input)
          
          with dai.Device(pipeline) as device:	
          	nn_queue = device.getOutputQueue(name="rec_time", maxSize=4, blocking=False)
          	while True:
          		nn_out = nn_queue.get()
          		if nn_out is not None:
          			print(nn_out.getData())

          The last run on the console shows this problem; you can see it is stuck.

          • erik replied to this.

            Hello msee19018 ,
            it's running only once because you don't have a while True: loop in your Script node. Is the error screenshot related to the code above?
            Thanks, Erik
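For reference, a sketch of the fix Erik describes, applied to the Script body posted above (the only substantive change is the while True: wrapper; the decoding logic is unchanged):

script.setScript("""
import marshal

while True:
    time_ = node.io['time_pred'].get()  # blocks until the next NN result arrives
    outname = time_.getAllLayerNames()[0]
    data = time_.getLayerFp16(outname)

    # Greedy decoding: pick the most likely class for each of the 24 time steps.
    raw_preds = []
    for i in range(24):
        log_probs = data[i*13:i*13+13]
        raw_preds.append(log_probs.index(max(log_probs)))

    # Collapse repeated labels, then drop label 0 (the blank).
    results = []
    previous = None
    for l in raw_preds:
        if l != previous:
            results.append(l)
            previous = l
    results = [l for l in results if l != 0]

    x_serial = marshal.dumps(results)
    b = Buffer(len(x_serial))
    b.setData(x_serial)
    node.io['f_time'].send(b)
""")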

              msee19018 does it work now as expected, after adding the while True: loop to the Script node?

                erik Thanks Erik, yes, that is working now. I thought the Script node would automatically get called in the pipeline whenever input is available to it.

                erik Hi Erik, I have run into an error; I have pasted its screenshot below. Can you please tell me what is causing it?

                • erik replied to this.

                  Hello msee19018 , I have no idea, could you share minimal reproducible code for this error?
                  Thanks, Erik

                    erik Hi Erik, I want to process images from the color camera before sending them to the neural network. I just need to multiply some values with each channel of the RGB image. I am unable to get the image in an array or list format, and getCvFrame() and getFrame() do not work in the Script node. I think I first need to convert the (1080, 1920*3) image to a byte array or list; after that it should be easy. I have the following code, but it gives an error; I have attached a screenshot of the error for your reference.

                    import cv2
                    import numpy as np
                    import depthai as dai
                    import os
                    from time import monotonic
                    time_bb_cord=[0.3617,0.6175,0.6414,0.7887]
                    local_bb_cord=[0.2750,0.28,0.4365,0.44375]
                    visit_bb_cord=[0.5846,0.27625,0.7435,0.445]
                    def Manip_Frame(pipeline,region_bb_cord):
                    	manip_time = pipeline.create(dai.node.ImageManip)
                    	manip_time.initialConfig.setCropRect(region_bb_cord[0],region_bb_cord[1],region_bb_cord[2],region_bb_cord[3])
                    	manip_time.setKeepAspectRatio(False)
                    	manip_time.initialConfig.setResize(100,32)
                    	return manip_time
                    
                    pipeline = dai.Pipeline()
                    cam = pipeline.create(dai.node.ColorCamera)
                    cam.setBoardSocket(dai.CameraBoardSocket.RGB)
                    cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
                    cam.setInterleaved(False)
                    cam.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)
                    
                    gray_scale_script=pipeline.create(dai.node.Script)
                    cam.video.link(gray_scale_script.inputs['video_link'])
                    gray_scale_script.setScript("""
                    b=Buffer(2074680)
                    while True:
                    	frame=node.io['video_link'].get()
                    	array=list(frame.getData())
                    	b.setData(array[0:2074680]) 
                    	node.warn(str(b[0]))
                    	node.warn(str(array[0]))
                    	# node.warn(str(imgGray[0,0]))
                    	# b=imgFrame(2,074,680)
                    	# b.setData(imgGray)
                    	# b.setHeight(1080)
                    	# b.setWidth(1920)
                    	node.io['script_out'].send(b)
                    	""")
                    
                    camout = pipeline.create(dai.node.XLinkOut)
                    camout.setStreamName("right_cam")
                    gray_scale_script.outputs['script_out'].link(camout.input)
                    with dai.Device(pipeline) as device:
                    	right_cam = device.getOutputQueue(name="right_cam", maxSize=4, blocking=False)
                    	while True:
                    		right_frame=right_cam.get()
                    
                    		if right_frame is not None:
                    			print(right_frame.getCvFrame().shape)
                    		else:
                    			break
                    
                    		if cv2.waitKey(1) == ord('q'):
                    			break
                    
                    
                    • erik replied to this.

                      Hello msee19018 , using the Script node for such a task isn't advisable, as it would likely take a few seconds per frame (to copy the bytes). Why don't you just send the frame directly to the NeuralNetwork node and have the model itself multiply the values?
                      Thanks, Erik
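One way to follow this advice (my assumption, not code from this thread) is to bake the per-channel multiplication into a tiny custom model, export it to ONNX, and compile it to a blob (for example with blobconverter), so the NeuralNetwork node does the scaling on-device:

import torch
import torch.nn as nn

class ScaleChannels(nn.Module):
    """Multiplies each channel of an NCHW RGB frame by a fixed weight."""
    def __init__(self, scales=(0.299, 0.587, 0.114)):  # example weights only
        super().__init__()
        self.register_buffer("scales", torch.tensor(scales).view(1, 3, 1, 1))

    def forward(self, x):
        return x * self.scales

model = ScaleChannels()
dummy = torch.zeros(1, 3, 1080, 1920)  # matches the 1080p ColorCamera frame
torch.onnx.export(model, dummy, "scale_channels.onnx", opset_version=11)
# scale_channels.onnx can then be compiled to a .blob and loaded with
# nn.setBlobPath(), so the multiplication happens on the device instead of
# copying bytes inside a Script node.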

                        20 days later

                        erik Hi Erik, I am almost done with my project; I have just one last task to accomplish. After all the processing I have a simple string message output from a Script node, and I want to update a local webserver with this string. Any ideas how I can do that? I am using the Flask API in Python. I have pasted a sample code below. If there is a sample already built for this, can you refer me to it? I think my program flow is not correct.

                        from flask import Flask, render_template
                        import depthai as dai
                        import time
                        scoreupdate='initializing pipeline'
                        app = Flask(__name__)
                        @app.route("/")
                        def hello():
                        	templateData = {
                        		'title' : 'HELLO!',
                        		'data': scoreupdate
                        	}
                        	return render_template('index.html', **templateData)
                        app.run(host='0.0.0.0', port=80, debug=True)
                        scoreupdate='initializing  pipeline'
                        pipeline = dai.Pipeline()
                        cam = pipeline.create(dai.node.MonoCamera)
                        cam.setBoardSocket(dai.CameraBoardSocket.RIGHT)
                        cam.setResolution(dai.MonoCameraProperties.SensorResolution.THE_800_P)
                        
                        script=pipeline.create(dai.node.Script)
                        cam.out.link(script.inputs['video_link'])
                        script.setScript("""
                        while True:
                        	frame=node.io['video_link'].get()
                        	if frame is not None:
                        		h=frame.getHeight()
                        		w=frame.getWidth()
                        		text='Frame Height = '+str(h)+' Frame Width = '+str(w)
                        		b=Buffer(40)
                        		b.setData(text.encode('ascii'))
                        
                        		node.io['script_out'].send(b)
                        	""")
                        
                        camout = pipeline.create(dai.node.XLinkOut)
                        camout.setStreamName("right_cam")
                        script.outputs['script_out'].link(camout.input)
                        
                        with dai.Device(pipeline) as device:
                        	right_cam = device.getOutputQueue(name="right_cam", maxSize=4, blocking=False)
                        	while True:
                        		right_frame=right_cam.get().getData()
                        		if right_frame is not None:
                        			scoreupdate=right_frame.tostring().decode('UTF-8')
                        		else:
                        			break
                        • erik replied to this.

                          Hello msee19018 ,
                          I would suggest checking this example. From your code it looks like you are encoding the text with ASCII (1 byte/char) and decoding it with UTF-8 (2 bytes/char), so the string you receive on the host probably looks like random characters.
                          Thanks, Erik
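A minimal sketch of one way to restructure the host side (assumptions: running Flask in a background thread and using port 8080 are my choices, not from this thread; the pipeline mirrors the code above, and the text is decoded with the same codec it was encoded with):

import threading
import depthai as dai
from flask import Flask

app = Flask(__name__)
scoreupdate = 'initializing pipeline'

@app.route("/")
def hello():
    return scoreupdate

# Run Flask in a daemon thread so app.run() does not block the DepthAI loop below.
threading.Thread(target=lambda: app.run(host='0.0.0.0', port=8080), daemon=True).start()

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.MonoCamera)
cam.setBoardSocket(dai.CameraBoardSocket.RIGHT)

script = pipeline.create(dai.node.Script)
cam.out.link(script.inputs['video_link'])
script.setScript("""
while True:
    frame = node.io['video_link'].get()
    text = 'Frame Height = ' + str(frame.getHeight()) + ' Frame Width = ' + str(frame.getWidth())
    data = text.encode('ascii')
    b = Buffer(len(data))  # size the buffer to the message instead of a fixed 40 bytes
    b.setData(data)
    node.io['script_out'].send(b)
""")

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("right_cam")
script.outputs['script_out'].link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="right_cam", maxSize=4, blocking=False)
    while True:
        buf = q.get()  # dai.Buffer sent by the Script node
        # Decode with the same codec the Script node used to encode (ASCII here).
        scoreupdate = bytes(buf.getData()).decode('ascii')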