Hi,
I'm wondering whether an OAK camera is appropriate for my use case, or whether there might be better options. And if an OAK camera is a good fit for what I want to do, which models would be best?
I need both a depth camera and 4 regular cameras for a rover-like platform.
More specifically, I'm part of a university team participating in URC and CIRC.
For the depth camera, I have the following requirements:
- Can be used without too much difficulty on an NVIDIA Jetson (specifically, at the moment we're using the dev board; I believe it's a Jetson Orin Nano, though I'm not 100% sure on that).
- First-party ROS integration, or at least something that's fairly easy to integrate with ROS.
- Being able to get a mesh or point cloud would be a major bonus.
- We're currently using a USB connection; however, I'd be open to something like GMSL2 if there's a clear way to connect it to a Jetson dev board.
- From my understanding of the benefits/downsides of the different depth-camera technologies, I believe we want a stereo system; a combination of stereo + IR would be a bonus.
- Budget is roughly $600 USD, which could possibly be stretched slightly if a really good option is presented. (We were previously using a ZED 2, but it broke, so we're considering new options.)
For the 4 other cameras, I'd ideally like to use the same model for all 4. Each camera should cost at most $100 USD, preferably closer to $80.
Here are some requirements I have for those:
- A USB connection would be best. If there's a camera that isn't USB-based but offers significant benefits or price savings, I'd be open to using something else.
- An enclosure is not necessary, as we can 3D print our own.
- Small footprint/form factor. We're currently using Arducams but are looking to replace them. (I believe we're using the EVK modules? Although my understanding is that they're prototypes donated to us a few years back, so they may differ from the production version.)
Here are some requirements that apply to everything:
- 1080p is the minimum resolution; higher is better. Higher quality for the depth camera takes priority over higher resolution for the other cameras.
- 30 fps is the minimum frame rate. If a choice between the two is necessary, higher resolution is preferred over a higher frame rate (i.e. 4K@30fps > 1080p@60fps).
- On-camera hardware encoding (H.264, optionally H.265 or AV1) is desired, ideally at higher quality than what the Jetson's hardware encoder can offer, though this can be compromised on if absolutely necessary.
- Cameras need to perform reasonably well in a wide range of environments. They may be used during the day in the desert, with a lot of light reflecting off the sand, as well as in very low-light conditions at midnight. (We may be using a light, but the ability to perform well without one would be a very nice bonus.) Ideally we'd be able to handle both environments without any hardware changes, though adding something like a light filter would be acceptable.
- I would really like the ability to do OpenCV processing on the cameras themselves; more specifically, detection of ArUco tags, and possibly some additional things (like general object detection) in the future. It doesn't actually need to be OpenCV-based, as long as we can do ArUco tag detection on the camera and porting our detection code isn't too difficult.
My reason for wanting this is that we're currently doing ArUco tag detection on the camera feeds by running them through a ROS node that reads from an image topic published by the usb_camera node. This interferes with how we stream the camera feeds wirelessly using GStreamer, because only one process can stream from a V4L2 video device in Linux. So instead we have GStreamer read from the image topic, which seems to cap the frame rate at something like 10 fps even when I attempt to increase it. Because of this we currently run ArUco detection on only a single camera and take that frame-rate hit for it, but being able to run it on all the cameras would be amazing, and doing it on-camera would simplify things quite a bit, I feel. (A rough sketch of our current host-side setup is below.)
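For context, here's a minimal sketch of roughly what our host-side detection node looks like (simplified; the topic name /image_raw and the DICT_4X4_50 dictionary are placeholders, not our exact configuration):

```python
# Rough sketch of the host-side setup described above: a ROS 2 node that
# subscribes to the usb_camera image topic and runs OpenCV ArUco detection.
# The topic name and tag dictionary are placeholders.
import cv2
import rclpy
from cv_bridge import CvBridge
from rclpy.node import Node
from sensor_msgs.msg import Image


class ArucoDetector(Node):
    def __init__(self):
        super().__init__('aruco_detector')
        self.bridge = CvBridge()
        # OpenCV >= 4.7 detector API; older versions call
        # cv2.aruco.detectMarkers(gray, dictionary) directly.
        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        self.detector = cv2.aruco.ArucoDetector(
            dictionary, cv2.aruco.DetectorParameters())
        self.create_subscription(Image, '/image_raw', self.on_image, 10)

    def on_image(self, msg):
        # Convert the ROS image to an OpenCV frame and detect markers.
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _rejected = self.detector.detectMarkers(gray)
        if ids is not None:
            self.get_logger().info(f'Tags in view: {ids.flatten().tolist()}')


def main():
    rclpy.init()
    rclpy.spin(ArucoDetector())


if __name__ == '__main__':
    main()
```

Running one of these per camera on the Jetson is what we'd like to avoid, which is why doing the detection on-camera is attractive.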
I plan to use these cameras with ROS for autonomous navigation using Nav2 (this will run on-rover), as well as for streaming the feeds over a wireless connection.
Also, if you have a lot of experience with general hardware, there are a few other components, like GPS modules and an IMU, that I'm looking for suggestions on. I won't go into it here as it's off-topic, but if you'd like to provide suggestions you can look at the fedi post I made about it. If you don't have a fedi account, then it's probably (hopefully) fine to post them here.