GergelySzabolcs

  • Joined Sep 19, 2020
  • erik Thanks for pointing me in the right direction! I tried the branch and was able to create an ImageManip node to resize the depth frame down to something that will fit within USB2 bandwidth. Now I can have all three of the things in the thread title!

    I also just read that StereoDepth nodes have a setOutputSize method that resizes the depth frame, but only when doing RGB-depth alignment. It almost seems to have been made specifically for my kind of use case, though I missed it at first. You can even use it with the main branch.
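    The bandwidth arithmetic behind this is easy to sanity-check. A minimal sketch of why a resized depth stream fits where a full-size one doesn't (the resolutions, FPS, and the ~35 MB/s practical USB2 throughput figure are my own illustrative assumptions, not numbers from this thread):

    ```python
    # Rough check of why resizing the depth frame (via ImageManip or
    # StereoDepth's setOutputSize) helps on USB2. Depth frames are
    # 16-bit, so 2 bytes per pixel.
    BYTES_PER_PIXEL = 2
    USB2_PRACTICAL_BPS = 35e6  # ~35 MB/s usable on USB2 (assumption)

    def stream_bandwidth(width, height, fps, bytes_per_pixel=BYTES_PER_PIXEL):
        """Raw (uncompressed) bandwidth of one stream, in bytes per second."""
        return width * height * bytes_per_pixel * fps

    full = stream_bandwidth(1280, 800, 30)   # depth aligned to a large RGB frame
    small = stream_bandwidth(640, 400, 30)   # after resizing the depth output

    print(f"full-res depth : {full / 1e6:.1f} MB/s")   # ~61.4 MB/s, over budget
    print(f"resized depth  : {small / 1e6:.1f} MB/s")  # ~15.4 MB/s, fits easily
    ```

    Even with the RGB preview and neural-network output streams added on top, the resized depth leaves headroom under the USB2 budget, which is what makes all three streams in the thread title possible at once.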

  • I am a complete novice/amateur at computer vision, machine learning, OpenCV, PyCharm, GitHub, etc., and still a novice Python programmer. But I was a professional programmer and electronics engineer for a long time, so I can recognize great hardware and software products and great technical support when I see it. I purchased my OAK-D during its Kickstarter campaign, but only started working with it in April.

    So far the OAK-D has done everything promised. I think it is a fabulous piece of technology. DepthAI is also great. Reasonably easy to use and it performs really well on both my MacBook and my Raspberry Pi. And the examples ... the breadth of examples is just stunning!

    The Luxonis support team, however, deserves special praise. I've run into all the standard things one finds with new products/technologies: lack of knowledge on the part of the user, missing or inadequate documentation, code bugs. In every instance I've encountered, one or more members of the team stepped up quickly to address the situation and produce a resolution, sometimes while having to, in effect, hold my hand (remember: novice/amateur). They seem to work 24/7. Very, very impressive!

    My compliments to Luxonis all around! Makes me look forward to the delivery of my OAK-D Lite.

    • GergelySzabolcs thanks a lot. I kinda figured out most of this stuff by experimenting. Great insight and explanations. Thanks again!

      Brandon, I believe that if that stuff (which GergelySzabolcs described above) were described in the help and how-to manuals in the Docs section, my life, and possibly the lives of other beginners, would be significantly easier. Yes, all the info is available online, but it is scattered, and none of these systems (TF, OpenVINO) is newbie-friendly. It actually took me an awful lot of time to realise OpenVINO can add pre-processing layers, although now it sounds so logical and makes so much sense...
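      For context on what those folded-in pre-processing layers actually do: they bake per-channel normalization into the converted model, so the host no longer has to do it. A minimal sketch of the equivalent arithmetic, using the common ImageNet mean/scale values purely as an illustration (the `mo` flags shown in the comment are the Model Optimizer options that embed this; treat the exact values as placeholders):

      ```python
      # The folded pre-processing computes, per channel:
      #   normalized = (pixel - mean) / scale
      # Embedded at conversion time with Model Optimizer flags such as:
      #   mo --input_model model.onnx \
      #      --mean_values [123.675,116.28,103.53] \
      #      --scale_values [58.395,57.12,57.375]

      MEAN = (123.675, 116.28, 103.53)   # per-channel mean (RGB), illustrative
      SCALE = (58.395, 57.12, 57.375)    # per-channel scale (RGB), illustrative

      def normalize_pixel(rgb):
          """Apply the same per-channel normalization the embedded layers would."""
          return tuple((value - m) / s for value, m, s in zip(rgb, MEAN, SCALE))

      # A pixel equal to the mean normalizes to (0, 0, 0).
      print(normalize_pixel(MEAN))
      ```

      Once the layers are embedded, you can feed the network raw 0-255 frames straight from the camera, which is exactly why this is so convenient on-device.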

      • Hi everyone,

        So I've always wanted to start a company doing something that matters. And I'm finally doing it!

        After being mentored for several years by Robert Pera at Ubiquiti Networks (UBNT), I think I'm now in a life context where I'm most likely to succeed at such a thing (and that was actually the goal and the agreement when I joined Robert at Ubiquiti to run the UniFi team). I highly recommend reading his blog, by the way, here. It's what initially drew me to work at Ubiquiti, and what prompted me to fly out to a Ubiquiti bowling night to pitch Robert (who wasn't there, actually) on the ideas I had for him and Ubiquiti. Anyways, that eventually worked out, I had an absolutely fantastic time working at Ubiquiti, running/scaling UniFi - and now I'm off to try to accomplish my life goal.

        As I mentioned in another post, while I was getting up to speed on machine learning and computer vision (thanks to a colleague at Ubiquiti, actually, who's now a professor of machine learning at UCSD), several of my friends were hit from behind by texters-and-drivers while bike commuting. These impacts severely reduced their quality of life in the short term and permanently diminished it in the long term. (And since then, I've learned about too many more, including the friend of a contributor to this project/product, who was hit and killed from behind by a distracted driver.)

        The machine learning and computer vision techniques I'd been studying had to be leverageable to help stop this carnage, I figured. So at the beginning I furiously sought out ways to apply them, and hit tons of dead ends in terms of size, weight, power, and cost. I cancelled the idea (Google Project X style) 4 independent times, until eventually arriving at an approach that is viable on all of those fronts and, most importantly, works well.

        See here if you haven't seen the demo of the initial prototype working. It's been so cold (and icy, and slippery) here in Colorado that going outside to shoot more footage has been either dangerous, or just plain bone-chillingly-preventative.

        In working towards this end goal, we realized that the PCB we'll be making as an intermediate step would probably add a lot of value to other engineers. And we have to make it anyways, so we figured we might as well do so because:

        1. Maybe doing so will help fund the long-term life-saving goal.
        2. Maybe we'll also get community support around the platform (which we'll open-source as much as we can), which itself may benefit the end goal.
        3. The final product is pretty complex. This allows us to make something not-so-complex and get it dialed in and shipping before making the final thing. Risk 'burn-down', one might say. (And hopefully it's as useful to others as we think it'll be.)

        And we're looking for anyone who's interested in helping.

        Want to help?

        Best,
        Brandon

        Here's me tuning in the first prototype: