DrewBarfield

  • Jan 18, 2022
  • Joined Jan 14, 2021
  • 0 best answers
  • I think installing depthai_experiments causes something similar too. (Or maybe the requirements for a single experiment.)

    Here's the output trying to run depthai_hand_tracker after installing depthai_experiments:

    XXXXX@XXXXX:/workdrive/Repositories/oak-d-lite/depthai_hand_tracker$ python3 demo.py -e -g --lm_model lite
    Palm detection blob     : /workdrive/Repositories/oak-d-lite/depthai_hand_tracker/models/palm_detection_sh4.blob
    Landmark blob           : /workdrive/Repositories/oak-d-lite/depthai_hand_tracker/models/hand_landmark_lite_sh4.blob
    PD post processing blob : /workdrive/Repositories/oak-d-lite/depthai_hand_tracker/custom_models/PDPostProcessing_top2_sh1.blob
    Sensor resolution: (1920, 1080)
    Internal camera FPS set to: 36
    Internal camera image size: 1152 x 648 - pad_h: 252
    Creating pipeline...
    Creating Color Camera...
    Creating Palm Detection pre processing image manip...
    Creating Palm Detection Neural Network...
    Creating Palm Detection post processing Neural Network...
    Creating Hand Landmark pre processing image manip...
    Creating Hand Landmark Neural Network (2 threads)...
    Pipeline created.
    [184430101116701200] [7.841] [NeuralNetwork(4)] [warning] Network compiled for 4 shaves, maximum available 13, compiling for 6 shaves likely will yield in better performance
    Pipeline started - USB speed: HIGH
    [184430101116701200] [8.073] [NeuralNetwork(8)] [warning] Network compiled for 4 shaves, maximum available 13, compiling for 6 shaves likely will yield in better performance
    [184430101116701200] [8.085] [NeuralNetwork(4)] [warning] The issued warnings are orientative, based on optimal settings for a single network, if multiple networks are running in parallel the optimal settings may vary
    [184430101116701200] [8.085] [system] [error] Attempted to start Color camera - NOT detected!
    [184430101116701200] [8.085] [NeuralNetwork(8)] [warning] The issued warnings are orientative, based on optimal settings for a single network, if multiple networks are running in parallel the optimal settings may vary
    ^CTraceback (most recent call last):
      File "demo.py", line 85, in <module>
        frame, hands, bag = tracker.next_frame()
      File "/workdrive/Repositories/oak-d-lite/depthai_hand_tracker/HandTrackerEdge.py", line 462, in next_frame
        in_video = self.q_video.get()
    KeyboardInterrupt

    ...and running the depthai_demo.py:

    XXXXX@XXXXX:/workdrive/Repositories/oak-d-lite/depthai$ python3 depthai_demo.py 
    Using depthai module from:  /home/drew/.local/lib/python3.8/site-packages/depthai.cpython-38-x86_64-linux-gnu.so
    Depthai version installed:  2.11.1.0
    Version mismatch between installed depthai lib and the required one by the script. 
                    Required:  2.13.3.0
                    Installed: 2.11.1.0
                    Run: python3 install_requirements.py 

    Disconnecting the device for about 30 seconds and reinstalling the requirements for the depthai repo resolved my problem:
    python3 -m pip install -r requirements.txt

  • I've heard of Project North Star. I didn't realize that it was still ongoing. I'll definitely check them out again.

    Thanks @erik !

  • After further consideration, I'd go with two OAK cameras instead of two normal and one OAK. That would give the headset 8 TOPS.

    Also, I think the HD displays should 'wrap around' to include peripheral vision.

    If I had a 3D printer I'd start hacking on this.

  • I have a need for a simple AR headset that has two cameras (for normal stereo vision) and two associated HD displays all wired together with a Raspberry Pi (or similar SBC). The idea of course being that OpenCV/DepthAI code can be written to manipulate the headset display based on camera input.

    (Maybe there should be two normal cameras and one OAK camera in the middle?)

    In-painting a video on a wall (over an ArUco marker) would be trivial, and such a headset would be very convenient and useful.

    Maybe this could be Luxonis' next $1M Kickstarter project? 😀

  • I'll be completing the "OpenCV for Beginners" course shortly and would like to suggest/request a course that focuses on OAK. More specifically: creating NNs from scratch, training, optimizations, creating pipelines, converting existing NNs to be OAK compatible, and deploying and maintaining multiple OAK devices throughout a home or business.

    Sounds to me like a win-win for everyone.

  • What if the "pole" was horizontal as in the case of a balcony handrail?

    The VESA angle is interesting. There are a lot of options. My only concern would be outdoor/weatherproof options.

    • Oh, yeah. I've been meaning to ask: Do acrylic domes have any negative effect on these cameras? Any chance they could increase false positives, reflections, etc.?

      Amazon link

    • Looks good. What's the red thing in the middle? Reminds me of an optical connection (that would be really cool).

      • For my use case I would only need the camera to be normal to the wall, with no need to turn. I'll be mounting them at a height of approximately 5 ft.

      • @Brandon / @erik,

        I hope this makes some sense. The image did not come out exactly as planned. However, it should convey the basic idea. The back plate screws to a wall stud. A hole must be drilled into the wall for the female Ethernet connector and cable. The camera sits on the male connector and a screw can be threaded into the ISO 1222:2010 camera threaded bore.

      • Thank you @Brandon.

        I think there is a market for a plastic or aluminum wall bracket that can hold either camera by its Ethernet port, provided the port's solder joints are strong enough to support the camera's weight.

        This could really simplify installation.

        I'll try to make a drawing of what I'm proposing.

      • @Brandon, do you know the dimensions and mass/weight of the OAK-D-POE and OAK-1-POE (with cases)?

        I've looked at depthai-hardware, but I don't see that information there.

        Maybe these properties haven't been finalized yet?

        Thank you!

        • I have my work cut out for me. I'll post what I find. I only need two outdoor cameras, though that should probably be more. I would like all of the cables to be waterproof and shielded, even indoors. I really need to get a building plan and estimate cable lengths first...

        • How can a team deploy and maintain a fleet (20+) of OAK-D devices throughout a facility? Would they all necessarily require a dedicated host, or could they be corralled into a single server? Could this communication be Ethernet based?

          Thank you!

          • How can I display the output (a preview or video) as an inline video in a Jupyter Notebook? The 4K RGB MobileNetSSD sample works just fine with the external windows, but I want to access JupyterLab across the network. In order for this to work properly, I'll need the video to display in the notebook. Otherwise the notebook server will be showing a window I can't see or close.

            I'm also wondering if there are other best practices for DepthAI in JupyterLab. For example, you should always explicitly clean up your resources, or your script will hang and crash the kernel. See the last 3 lines:

            import cv2
            import depthai as dai

            pipeline = dai.Pipeline()

            camRgb = pipeline.createColorCamera()
            camRgb.setPreviewSize(300, 300)
            camRgb.setBoardSocket(dai.CameraBoardSocket.RGB)
            camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
            camRgb.setInterleaved(False)
            camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)

            xoutRgb = pipeline.createXLinkOut()
            xoutRgb.setStreamName("rgb")
            camRgb.preview.link(xoutRgb.input)

            # Device(pipeline) starts the pipeline; a separate startPipeline() call is deprecated
            with dai.Device(pipeline) as device:
                qRgb = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)
                print("Press 'q' to quit")
                while True:
                    inRgb = qRgb.get()
                    cv2.imshow("bgr", inRgb.getCvFrame())
                    if cv2.waitKey(1) == ord('q'):
                        break
                # explicit cleanup - without this the kernel can hang
                device.close()

            cv2.destroyAllWindows()
            del pipeline
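
            For reference, here's roughly the kind of inline display I have in mind. This is just a sketch, not a known-good recipe: it assumes numpy and Pillow are available, and frame_to_jpeg_bytes is a hypothetical helper I made up to stand in for whatever the recommended approach turns out to be.

            ```python
            import io

            import numpy as np
            from PIL import Image

            def frame_to_jpeg_bytes(frame: np.ndarray) -> bytes:
                """Encode a BGR frame (as returned by getCvFrame()) to JPEG bytes."""
                # OpenCV frames are BGR; Pillow expects RGB, so reverse the channel axis
                rgb = np.ascontiguousarray(frame[:, :, ::-1])
                buf = io.BytesIO()
                Image.fromarray(rgb).save(buf, format="JPEG")
                return buf.getvalue()

            # In a notebook cell, the display loop would then be something like:
            #   from IPython.display import Image as IPImage, display, clear_output
            #   clear_output(wait=True)
            #   display(IPImage(data=frame_to_jpeg_bytes(inRgb.getCvFrame())))
            ```

            The idea being that each frame is JPEG-encoded and redrawn in the cell output, replacing the cv2.imshow window entirely.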

            Any tips are greatly appreciated!
            -Drew