R
ryan

  • Oct 10, 2021
  • Joined Mar 17, 2021
  • 0 best answers
  • Thank you Brandon for following up so quickly and so thoroughly.

I did see the 4-point transform, which is why I kinda 'deep down in the cockles of my heart' knew that the functionality must be there somewhere and that there is a hole in my knowledge 🙂

    I also did see setWarpTransformMatrix3x3 in the API, and my thought was that it would/might/should do what I needed, based on my limited understanding of the algorithms doing the actual work behind the OpenCV fisheye functions that I currently use. But as I was at the limits of my knowledge, I wanted to check before I invested serious learning time into an approach that might actually be a dead end.
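    For anyone following along: the 4-point transform and a 3x3 warp matrix are two views of the same thing — four point correspondences determine a 3x3 perspective (homography) matrix. This is a minimal numpy sketch of that relationship, not the DepthAI implementation; the function names are my own:

    ```python
    import numpy as np

    def homography_from_points(src, dst):
        """Solve for the 3x3 perspective matrix H mapping src -> dst.

        src, dst: four (x, y) point pairs. Fixes H[2,2] = 1 and solves the
        resulting 8x8 linear system -- the same relationship that links a
        4-point warp specification to a 3x3 warp matrix.
        """
        A, b = [], []
        for (x, y), (u, v) in zip(src, dst):
            A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
            A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
            b.extend([u, v])
        h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
        return np.append(h, 1.0).reshape(3, 3)

    def warp_point(H, x, y):
        """Apply H to one point, including the perspective divide."""
        u, v, w = H @ np.array([x, y, 1.0])
        return u / w, v / w

    # Map the unit square onto a skewed quadrilateral.
    src = [(0, 0), (1, 0), (1, 1), (0, 1)]
    dst = [(0, 0), (2, 0.2), (2.2, 1.8), (0.1, 2)]
    H = homography_from_points(src, dst)
    print(warp_point(H, 1, 1))  # should land (up to float error) on dst[2]
    ```

    So a 4-point API and a setWarpTransformMatrix3x3-style API should be interchangeable for this kind of dewarp, with the matrix form being the more general of the two.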

    This is critically important to me, and so armed with the knowledge that there is a path there for me to pursue, I will follow it as best I can.

    If you are able to get the output to 4K, this would be fantastic. (I am already doing 1080 with the RPi4, and it really needs to be higher. I was on the verge of trying to solve this by getting to 2K by stitching two frames or some such.)

    To be honest, this will be challenging for me. I will appreciate anything that reduces the slope of my curve.

    I will start on this today, though I will disappear down a hole for a few days (hopefully!) while I work to get my head around it properly. I will re-emerge with results, questions, or both, I expect. When I do, would your Discord server be the more appropriate forum for this communication?

    Thank you again. I very much appreciate how good your support is. I am sure you guys must be very busy.
    Ryan

    • Hello,

      One critical requirement of my use case is the ability to create video of a large area (a football pitch) for coaches and players to later look at.

      I can currently do this with RPi4, but not well! Encoding is limited to 1080, the CPU struggles badly to dewarp the fisheye, and so processing times are terrible.
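      For context on where that CPU time goes: a typical software dewarp precomputes a per-pixel lookup table once from the lens model (e.g. OpenCV's cv2.fisheye.initUndistortRectifyMap), then remaps every frame through it with cv2.remap. This toy numpy sketch shows just the remap stage; a simple horizontal flip stands in for a real lens map so the example stays self-contained:

      ```python
      import numpy as np

      # Build the (map_x, map_y) lookup tables ONCE. A real pipeline would
      # derive them from the calibrated fisheye model; here a horizontal
      # flip is a stand-in so the example needs no camera calibration.
      H, W = 4, 6
      ys, xs = np.mgrid[0:H, 0:W]
      map_x = (W - 1) - xs   # source x for each output pixel
      map_y = ys             # source y for each output pixel

      def remap(frame, map_x, map_y):
          """Nearest-neighbour remap: output[i, j] = frame[map_y[i, j], map_x[i, j]]."""
          return frame[map_y, map_x]

      frame = np.arange(H * W).reshape(H, W)
      dewarped = remap(frame, map_x, map_y)  # cheap per frame; maps are reused
      ```

      Even with the maps precomputed, doing this per pixel per frame on the Pi's CPU is what makes software dewarp slow, which is why hardware warp on the device is attractive.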

      I have purchased the BW1098FFC Body Kit and adapter, and am using an RPi4 and HQ camera with a 180 degree M12 fisheye lens from Arducam.

      The 4K video encoding is awesome, and I can see that one of the key features of DepthAI is Warp/Dewarp.

      Looking at the Python API, I was hoping to see something familiar and akin to the fisheye functions that I currently use with OpenCV, but I don’t see any.

      It would seem to me that either I have a hole in my knowledge that I need to fill (very likely; I do not have expertise in this space), or DepthAI does not have this capability.

      This is a critical element to the success of my project with this hardware, so I am hoping that DepthAI does support this capability, and I just need to be pointed in the right direction and get in some learning…

      Thanks in advance,
      Ryan

      (P.S. I have also read through docs, forum, git examples code and discord and not found this information - so apologies upfront if this is already covered somewhere)

      • Jason. Apologies. I was mistaken. I take detailed notes, and should have checked them before responding. (I must have been thinking of something else that resulted in "Illegal instruction".)

        My error message was:

        (lux_only) pi@oak1b:~/depthai-python/examples $ python3 01_rgb_preview.py 
        Traceback (most recent call last):
          File "01_rgb_preview.py", line 23, in <module>
            with dai.Device(pipeline) as device:
        RuntimeError: Failed to find device (ma2480), error message: X_LINK_DEVICE_NOT_FOUND

        Most likely due to trying to run Gen2 examples with Gen1 depthai, I would think.

        My venv/bin is:

        (lux_only) pi@oak1b:~/.virtualenvs/lux_only/bin $ ls -al | grep python
        lrwxrwxrwx 1 pi pi   16 Apr  1 11:34 python -> /usr/bin/python3
        lrwxrwxrwx 1 pi pi    6 Apr  1 11:34 python3 -> python
        lrwxrwxrwx 1 pi pi    6 Apr  1 11:34 python3.7 -> python

        And in the hope that it might be helpful, I will include my entire install notes below:

        Downloaded 32 bit via Raspberry Pi Imager
        Set time/loc/passwd/connected to WLAN/ran updates / rebooted
        Renamed host to 'oak1b'
        
        >Preferences>Raspberry Pi Configuration
            Renamed to 'oak1b'
            Enabled camera, VNC, SSH
            Increased GPU to 128
            rebooted / checked settings
        
        set static wifi ip / disabled IPv6 / rebooted
        
        connected to vnc 
        sudo rpi-eeprom-update checked
        right click on Task Bar and Add/Remove Panel items.
            added CPU temp
            added CPU usage
        
        sudo apt update
        sudo apt full-upgrade
        sudo apt install python3 python3-pip
        
        sudo apt install build-essential cmake pkg-config
        sudo apt install libatlas-base-dev gfortran
        sudo apt install libhdf5-serial-dev hdf5-tools
        mkdir python-envs && cd python-envs
        
        pip3 install virtualenvwrapper
        
        sudo nano ~/.bashrc
        
        export WORKON_HOME=$HOME/.virtualenvs
        export PROJECT_HOME=./python-envs
        export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
        export VIRTUALENVWRAPPER_VIRTUALENV=~/.local/bin/virtualenv
        source ~/.local/bin/virtualenvwrapper.sh
        export VIRTUALENVWRAPPER_ENV_BIN_DIR=bin
        
        exit
        
        mkvirtualenv lux_only
        
        [my notes are not clear at this point, but essentially: within the lux_only env, I followed the basic steps of dependencies.sh and install_reqs; then I wasn't sure which depthai to install, and ended up trying to install Gen1, and I think the snapshot (though maybe I am mistaken and am remembering a previous occasion...);
        but the end result was that I ended up running: pip uninstall depthai]
        
        //then my notes start again...
        
        python3 install_requirements.py
        ...Installing collected packages: depthai
        Successfully installed depthai-2.1.0.0

        After that I was successfully able to execute the scripts in the examples folder.

        I hope this is helpful, and apologies for my initial incorrect response.

        • Hi Jason,

          This might help... but I too had a problem when I tried to explicitly install the snapshot version (my install also reported 'Successfully installed depthai-2.1.0.0') and so I just went back to the default install instructions and the issue automagically went away...

          (pip uninstall depthai)
          sudo curl -fL http://docs.luxonis.com/_static/install_dependencies.sh | bash
          python3 install_requirements.py
          git clone https://github.com/luxonis/depthai-python.git

          I have since been able to successfully run the example scripts.

          I am running RPi4 using VNC (and also sometimes directly via bluetooth keyboard and HDMI). I too am using virtual environments.

          FWIW - interestingly - VNC so far works perfectly... but I have experienced 'weird issues' when trying to do the same work using my bluetooth keyboard and HDMI.

          Hope this is helpful.
          Cheers

          • Hey Brandon,

            Thank you for your prompt and very helpful reply.

            I appreciate the heads-up regarding the newer version that is on the way. I will definitely be getting one (probably more 🙂 ) of those.

            However, as best I can tell, your current solution will fit my needs, and so I am happy with that, and will proceed with purchase.

            Thank you also for telling me about the Arducam M12 IMX477 - I didn't know about these.

            If someone had told me in 2017 that by 2021 I could retail-purchase hardware that could run object detection at 30 fps on a low-power (fanless!) system with 3 high-quality cameras, at a total size not much bigger than a deck of cards (say with a Pi Zero), I would have wondered what they had been smoking!

            Thanks again and best wishes for your ongoing success,
            Ryan

            • Hello,
              I love the work that you guys are doing and am genuinely excited about the journey that I am on learning how to use my OAKs.

              I was an OAK backer and purchased 2xOak1 and 2xOakD.

              I am particularly keen to combine RPi HQ camera and the OAK1.

              My understanding is that this is 'in progress' with #136:
              DepthAI Pipeline Builder Gen2 #136 https://github.com/luxonis/depthai/issues/136

              Looking through the Luxonis website I saw the BW1098FFC Body Kit:
              https://shop.luxonis.com/collections/all/products/usb3c-ffc-body

              My impression is that this is essentially the BW1098FFC without the cameras?
              DepthAI: USB3C with Modular Cameras https://shop.luxonis.com/collections/all/products/depthai-usb3-edition

              My understanding is that I could connect the 'BW1098FFC Body Kit' to the HQ Cam via the (alpha status) IMX477 adapter, and that doing this would give me features advertised on the adapter's shop page: https://shop.luxonis.com/products/rpi-hq-camera-imx477-adapter-kit

              Question 1 : Is my understanding correct?

              I am just wanting to be clear, as the text on that page only refers to connecting the adapter with the DepthAI FFC Edition/USB3C with Modular Cameras. Perhaps there is some subtle difference (or an obvious one that I am just missing! 🙂 )

              Question 2 : Assuming that my understanding is correct, and I connect up the BW1098FFC Body Kit, HQ Cam and IMX477 adapter, is my understanding also correct for the following:

              • a) I could just connect the USB-C cable to my RPi4 as normal and use it just as I currently do with my OAK1? (as appears to be the case in your demo video)
              • b) that the new HQ Camera essentially acts as a 'drop-in' hardware replacement? For example, the OAK test scripts don't need to be modified; you can connect the adapter and camera, run the demo script, and you are up and running?

              Question 3 : Are all of the items in stock?

              Thank you again for the work you are doing and I look forward to hearing from you.