lovro
Thanks for the quick response!
I want to clarify that these artifacts do not show up in the StereoDepth node's frames; they show up in the RGB image from the camera that is not being used as an input to StereoDepth. For example, if the left and right cameras are being used for stereo, the artifacts show up in the RGB frames from the central camera. I get why there could be problems on the depth side of the pipeline if it was originally intended for grayscale and the RGB images aren't being handled correctly, but the central RGB camera isn't being used for anything in the StereoDepth node. I'm not well versed in how the pipeline backend works, so maybe you could shed a little light on how this problem would manifest.
The depth_preview_lr.py script doesn't replicate this issue because it's only outputting disparity. Do you know if there's another example script that includes a setup like the one I described above? I can try to modify the depth_preview_lr.py example (something like the sketch below), but I would prefer to use already existing code so that I can rule out a mistake on my end. That's why I was using the spectacularAI API, because it's already well fleshed out.
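For reference, here's roughly the kind of pipeline I mean. This is just a minimal sketch following the depth_preview_lr.py pattern, assuming the DepthAI v2 API; the camera sockets (CAM_B/CAM_C as the stereo pair, CAM_A as the central color camera) and stream names are assumptions on my part, not taken from my actual code:

```python
import depthai as dai
import cv2

pipeline = dai.Pipeline()

# Left/right color cameras feeding StereoDepth
# (socket assignment is an assumption, mirroring the depth_preview_lr.py example)
left = pipeline.create(dai.node.ColorCamera)
left.setBoardSocket(dai.CameraBoardSocket.CAM_B)

right = pipeline.create(dai.node.ColorCamera)
right.setBoardSocket(dai.CameraBoardSocket.CAM_C)

stereo = pipeline.create(dai.node.StereoDepth)
left.isp.link(stereo.left)
right.isp.link(stereo.right)

# Central color camera -- not connected to StereoDepth at all, only streamed out.
# This is the stream where I'm seeing the artifacts.
center = pipeline.create(dai.node.ColorCamera)
center.setBoardSocket(dai.CameraBoardSocket.CAM_A)

xout_rgb = pipeline.create(dai.node.XLinkOut)
xout_rgb.setStreamName("rgb")
center.isp.link(xout_rgb.input)

xout_disp = pipeline.create(dai.node.XLinkOut)
xout_disp.setStreamName("disparity")
stereo.disparity.link(xout_disp.input)

with dai.Device(pipeline) as device:
    q_rgb = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    q_disp = device.getOutputQueue("disparity", maxSize=4, blocking=False)
    while True:
        cv2.imshow("rgb", q_rgb.get().getCvFrame())        # where the artifacts would appear
        cv2.imshow("disparity", q_disp.get().getCvFrame())  # raw disparity, not normalized
        if cv2.waitKey(1) == ord("q"):
            break
```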
FYI, the recording/frame I posted from the spectacularAI API was captured with their recording code, so it's just a simple recording, not doing any of the mapping/VIO stuff. It has a "--color_stereo" flag that I was using, which as far as I can tell is specifically for devices that have color cameras in the stereo array.