FrancisGuindon

  • Aug 17, 2023
  • Hello,

    Context: I have a camera that inspects planks as they pass by, and I want each plank to get a unique ID. Why? Because if the planks stop moving and there are two of them in frame, YoloDetectionNetwork keeps flip-flopping between those two planks. Overall, I want each object to be detected once, then ignored. This is somewhat difficult because all the planks look the same.

    I've been trying to use the many tracker types that ObjectTracker offers, to no avail.
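    For reference, here is a sketch of one configuration I tried (node names are illustrative, and I swapped tracker types between runs):

    tracker = pipeline.create(dai.node.ObjectTracker)
    tracker.setTrackerType(dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM)  # also tried the SHORT_TERM variants
    tracker.setTrackerIdAssignmentPolicy(dai.TrackerIdAssignmentPolicy.UNIQUE_ID)
    detectionNetwork.out.link(tracker.inputDetections)             # detections drive the tracker
    detectionNetwork.passthrough.link(tracker.inputTrackerFrame)
    detectionNetwork.passthrough.link(tracker.inputDetectionFrame)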

    Is there a solution to this problem?
    Thanks and have a nice day!

    • nathan45654

      My bad, I misread and thought, from the title, that I could help.

      You could try this: run ipconfig several times after launching your Python script and watch what happens to the IP. Does it appear only after it gives you an XLINKDEVICENOTFOUND error? Sometimes before? It could be that the device tries to fetch an IP through DHCP/IPv4LL and fails to do so in time.
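      If you want to automate the watching, a quick sketch (Windows-only, since it shells out to ipconfig):

      import subprocess, time

      # Poll ipconfig for a while and eyeball when/if the camera's subnet shows up
      for _ in range(10):
          out = subprocess.run(["ipconfig"], capture_output=True, text=True).stdout
          print(out)
          time.sleep(1)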

      If that is the behaviour, I don't know much about the Windows side of things; you could try pinging jakaskerl. I assume you already set a static IP on the network interface in Windows, since you mentioned the device has a static IP address.

      Anyway, that's the best I can personally give you. Good luck.

      • nathan45654

        Edit /etc/dhcpcd.conf (e.g. with sudo nano) and add these two lines after your interface declaration (in my case, interface eth0): "nodhcp" and "nolink". The reason your camera times out is that it tries to get a link-local IP address but fails to do so in time. With that disabled, you get the IP right off the bat and there is no timeout.
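        For clarity, the resulting block in /etc/dhcpcd.conf looks like this (interface name assumed to be eth0, as in my setup):

        interface eth0
        nodhcp
        nolink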

        @jakaskerl This solution is something I figured out on my own. I couldn't find help on the Luxonis Discord back in the day, and I googled this issue for days, to no avail. Tagging you for future reference, in case someone asks a similar question.

        • jakaskerl

          Hello,

          I used Luxonis' conversion tool to convert my .pt to a blob; I haven't used blobconverter in my code or from the CLI.

          I have fixed my issue by using ColorCamera's preview output instead of video + ImageManip. The aspect ratio was the main reason I thought I needed ImageManip (I wanted the full 16:9 image fit into 320x320), until I realized that you can set the preview's keep-aspect-ratio flag to false.
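          A minimal sketch of the change, assuming a 320x320 network input (node names illustrative):

          cam = pipeline.create(dai.node.ColorCamera)
          cam.setPreviewSize(320, 320)
          cam.setPreviewKeepAspectRatio(False)  # stretch the full 16:9 frame into 320x320
          cam.preview.link(detNet.input)        # feed the preview straight to the detection network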

          Anyway, thank you for your time.

          • Hello,

            I've been getting this error:

            [DetectionNetwork(4)] [error] Input tensor 'images' (0) exceeds available data range. Data size (153600B), tensor offset (0), size (307200B) - skipping inference

            The only thing I can tell is that the expected tensor size is exactly double the actual data size: 320 × 320 × 3 = 307200 B would be a planar BGR frame, while 320 × 320 × 1.5 = 153600 B matches an NV12 frame, which makes me suspect a frame-format mismatch.

            What in this code is causing the error?

            import depthai as dai
            import cv2
            
            # Model paths
            plank_model_path = 'plank.blob'
            label_model_path = 'label.blob'
            
            pipeline = dai.Pipeline()
            
            # Define a source - color camera
            cam = pipeline.create(dai.node.ColorCamera)
            cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
            cam.setInterleaved(False)
            cam.setBoardSocket(dai.CameraBoardSocket.RGB)
            
            # Create outputs
            xout_rgb = pipeline.create(dai.node.XLinkOut)
            xout_rgb.setStreamName("rgb")
            
            # Create ImageManip node for cropping
            manip = pipeline.create(dai.node.ImageManip)
            manip.initialConfig.setResizeThumbnail(320, 320)
            manip.initialConfig.setKeepAspectRatio(False)
            manip.setMaxOutputFrameSize(320*320 * 3)
            
            # Camera control / input
            controlIn = pipeline.create(dai.node.XLinkIn)
            controlIn.setStreamName('control')
            
            plankDet = pipeline.create(dai.node.YoloDetectionNetwork)
            plankDet.setBlobPath(plank_model_path)
            plankDet.setConfidenceThreshold(0.5)
            plankDet.input.setBlocking(False)
            
            labelDet = pipeline.create(dai.node.YoloDetectionNetwork)
            labelDet.setBlobPath(label_model_path)  # second network; not linked to anything yet
            
            cam.video.link(manip.inputImage)
            manip.out.link(xout_rgb.input)
            controlIn.out.link(cam.inputControl)
            manip.out.link(plankDet.input)
            
            with dai.Device(pipeline) as device:
            
                # Output queues will be used to get the rgb frames and NN data from the outputs defined above
                q_rgb = device.getOutputQueue(xout_rgb.getStreamName(), maxSize=4, blocking=False)
                q_ctrl = device.getInputQueue(controlIn.getStreamName(), maxSize=4, blocking=False)
            
                ctrl = dai.CameraControl()
                ctrl.setManualExposure(2500, 1300)
                ctrl.setManualFocus(103)
                q_ctrl.send(ctrl)
            
            
                while True:
                    in_rgb = q_rgb.get()
            
                    # If 'q' is pressed on the keyboard, exit this loop
                    if cv2.waitKey(1) == ord('q'):
                        break
            
                    cv2.imshow("RGB", in_rgb.getCvFrame())
            
                # Clean up
                cv2.destroyAllWindows()
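            If it helps narrow things down, my current guess is that ImageManip is passing the camera's NV12 video frames through unchanged, and the change I plan to test is forcing its output frame type (assuming BGR888p is the planar format the network expects):

            # Hypothetical fix: make ImageManip emit planar BGR so the frame
            # size matches the 307200 B the network asks for.
            manip.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888p)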
            • Hello,

              I am using a Raspberry Pi with DepthAI to detect passing planks. I tried to build a YoloV7 model in the past, but realized it could be too heavy for this simple task given the limited compute resources I have (an RPi).

              I have tried running the YoloV4-tiny with custom data tutorial. However, the tutorial uses TensorFlow 1.x, which is no longer usable on Google Colab, so I tried running it with TensorFlow 2.x. Everything ran smoothly until training, where it gave me a cuDNN error:

              cuDNN status Error in: file: ./src/convolutional_kernels.cu function: forward_convolutional_layer_gpu() line: 543
              cuDNN Error: CUDNN_STATUS_BAD_PARAM
              Darknet error location: ./src/convolutional_kernels.cu, forward_convolutional_layer_gpu(), line #543
              cuDNN Error: CUDNN_STATUS_BAD_PARAM: Success

              I could make YoloV7 work, but it feels unnecessary for what I need the detection to do. Still, if it is the only way forward given the new TensorFlow requirements, it'll have to do.

              Therefore, here are my three questions:

              1. Is it possible to still use the yolov4-tiny tutorial even with the new TensorFlow requirements?
              2. Will there be an updated version, or a yoloV5-6-7-tiny tutorial?
              3. Unrelated, but can I use 300x300 images for training? The tutorial says the resolution has to be divisible by 32, and 300 is not (300 / 32 = 9.375), so I used 320x320 (32 × 10) instead to make it work, but I am unsure whether that is correct.

              Thanks a lot and have a nice day!

              • jakaskerl

                Thank you so much! Everything makes a lot more sense.

                My last remaining question is about the voc.yaml file it creates. It contains the paths for the training images and labels, but those point to the images/labels that the notebook downloads, and train.py doesn't seem to read from the custom dataset folder at all (only from the images the notebook provides). Should I change the paths inside voc.yaml so they point to where my custom dataset is?
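                For concreteness, this is the kind of edit I have in mind (paths and class name are hypothetical, following the usual YOLOv7 data-yaml layout):

                train: /content/dataset/images/train  # hypothetical custom path
                val: /content/dataset/images/val      # hypothetical custom path
                nc: 1                                 # single class
                names: ['plank']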

                If you don't have the answer, I'll experiment a bit on my own. I simply feel that the old tutorial was clearer about how custom data was used, and more user-friendly.

                Thanks for your time and have a nice day!
                -Francis

                • jakaskerl

                  Hello!

                  I apologize for my lack of precision earlier. I was, in fact, talking about the notebook you sent; I've been using it with custom data. In the dataset-structure graph it shows, you're expected to put custom images alongside their respective labels. Am I supposed to create those folders in the root "content" folder, or in the yolov7 folder that the notebook creates?

                  Concerning the custom model: I am pretty new to machine learning, and I'm simply looking to create a model that detects passing wood planks. Won't the extra classes (faces, books, etc.) still be evaluated, resulting in more CPU utilization? I'm looking to use the model on a Raspberry Pi, and I don't have many resources in terms of CPU and RAM.

                  Thanks a lot for the help and have a nice day!

                  • Hello,

                    I've been using the old/deprecated DepthAI tutorial, but it is no longer usable, so I have switched to the YOLOV7 one. I have some questions about it:

                    1. Where am I supposed to put the dataset? In the tutorial, the line above says "yolov7". Am I supposed to put the dataset folder inside "yolov7", or in the "content"/root folder? (My current guess is sketched below.)
                    2. Is it possible to not use a pre-trained version? If so, what would be the steps? I only want to create a model for simple plank (wood) detection; I don't need it to recognize anything else.
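                    For question 1, here is the layout I am currently assuming (folder names hypothetical):

                    /content
                    ├── yolov7/            # the repo the notebook clones
                    └── dataset/
                        ├── images/
                        │   ├── train/
                        │   └── val/
                        └── labels/
                            ├── train/
                            └── val/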

                    Have a nice day!