Up until recently I have been working with the more traditional Oak cameras that have a central RGB camera and two monochrome cameras on either side. I have built several pipelines for this type of setup and have a decent understanding of how to configure them. I'm now moving to the Oak-D LR, and this model differs a bit from the rest of the lineup, so I would like to understand its intricacies before deciding what the pipeline should look like. Some of these questions are specific to the Oak-D LR, and some are more general questions about things in depthai that I know how to implement but don't fully understand the reasoning behind.
Most importantly, on the Oak-D LR all of the sensors are RGB and the same size, 1920x1200. On an Oak-D I would use the Camera node for the central camera and then two MonoCamera nodes for the left and right sensors. Since the Long Range has all RGB sensors, I'm guessing I can't use the same MonoCamera nodes in the pipeline? Would I just swap the two of them out for Camera nodes?
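For context, this is roughly what I'm imagining for the left/right pair, just swapping the node type. It's only a sketch; I'm assuming CAM_B and CAM_C are the left and right sockets on the LR and that THE_1200_P is the correct resolution enum for these sensors.

```python
import depthai as dai

pipeline = dai.Pipeline()

# Left/right sensors on the LR are RGB, so swap MonoCamera for ColorCamera (or Camera?)
left = pipeline.create(dai.node.ColorCamera)
left.setBoardSocket(dai.CameraBoardSocket.CAM_B)   # assuming CAM_B is the left socket
left.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1200_P)

right = pipeline.create(dai.node.ColorCamera)
right.setBoardSocket(dai.CameraBoardSocket.CAM_C)  # assuming CAM_C is the right socket
right.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1200_P)
```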
What is the difference between the ColorCamera and Camera nodes? I know the Camera node is newer and seems to have a few more features, but is it meant to replace ColorCamera? Should we be migrating to the Camera node going forward? Are there any guidelines on when to use ColorCamera vs Camera? The StereoDepth node documentation states that it can accept MonoCamera or ColorCamera as inputs. Would the Camera node also be an acceptable input, even though it isn't mentioned in the docs?
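If ColorCamera is the right choice for the left/right pair, I assume the linking into StereoDepth would look something like the following, using the isp output where a MonoCamera would use out. Again, this is just my guess at the intended usage, building on the sketch above:

```python
stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)

# With MonoCamera I would link monoLeft.out -> stereo.left;
# with ColorCamera I'm guessing the isp output is the equivalent
left.isp.link(stereo.left)
right.isp.link(stereo.right)
```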
Lastly, the Oak-D LR sensors are 1920x1200 native, which is wider than the 1280-pixel width limit for stereo. Is the workaround for this just to downscale the 1200p stream? Should the downscaling be done with the Camera node, or would it be better to do it with ColorCamera by setting the ISP scale? Are there specific resolutions I should choose, relative to the native resolution, to avoid losing sharpness in the image?
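Concretely, I was thinking of a 2/3 ISP scale, which takes 1920x1200 down to 1280x800 and keeps the width right at the stereo limit. This assumes the setIspScale approach from the regular Oak-D ColorCamera examples carries over to the LR:

```python
# 1920x1200 * 2/3 = 1280x800, which fits within the 1280-pixel stereo width limit
left.setIspScale(2, 3)
right.setIspScale(2, 3)
```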