It's clear you're willing to be helpful, and I appreciate that. I realize, however, that I didn't make my last question clear (I'm blaming it on a glass of wine). I'll try again (this time after coffee).
In the example rgb_depth_aligned.py I've set downscaleColor = True. Based on the code, the comments, and live results, the color camera is set to output 1080P (1920x1080), but the output gets downscaled to 1280x720 via camRgb.setIspScale(2, 3). Thus the displayed RGB frame has shape 1280x720.
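For reference, here's roughly the color-camera setup as I understand it (a minimal sketch paraphrasing the example, not quoting it exactly):

```python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera: sensor runs at 1080P, ISP downscales by 2/3 to 1280x720
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setIspScale(2, 3)  # 1920x1080 * (2/3) -> 1280x720
```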
The mono cameras are set to 400P resolution (640x400), and based on some scripts I've written, plus your confirmation, the expected disparity/depth output of the StereoDepth node would also be 400P. However, the statement stereo.setDepthAlign(dai.CameraBoardSocket.RGB) appears to automatically scale the disparity/depth to the resolution of the color/RGB camera (1280x720).
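The stereo side, continuing the sketch above (again paraphrasing; the XLinkOut at the end is mine, added so the disparity shape can be checked on the host):

```python
# Mono cameras at 400P (640x400)
monoLeft = pipeline.create(dai.node.MonoCamera)
monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)

monoRight = pipeline.create(dai.node.MonoCamera)
monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)

stereo = pipeline.create(dai.node.StereoDepth)
monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)

# The statement in question: aligning depth to the RGB camera appears
# to also rescale disparity/depth to the RGB (ISP-scaled) resolution
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)

# Expose disparity so its shape can be inspected on the host
xoutDisp = pipeline.create(dai.node.XLinkOut)
xoutDisp.setStreamName("disp")
stereo.disparity.link(xoutDisp.input)
```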
The question I meant to ask is: does stereo.setDepthAlign(dai.CameraBoardSocket.RGB) imply scaling as part of alignment? I think I confirmed that it does by commenting out that statement and checking the disparity shape (it was the expected 640x400).
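The check I ran looked roughly like this (the stream name "disp" and the queue variable are my own, from the sketch above, and may differ from the example's exact names):

```python
with dai.Device(pipeline) as device:
    qDisp = device.getOutputQueue(name="disp", maxSize=4, blocking=False)
    dispFrame = qDisp.get().getFrame()  # numpy array, shape is (height, width)
    print(dispFrame.shape)
    # With setDepthAlign(RGB) in place:   (720, 1280)
    # With that statement commented out:  (400, 640)
```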
I could find nothing describing "alignment" in any of the documentation, though I may have just missed it.
Once again, thanks for the help.