Thank you Jaka for the help!
As you predicted, running that loop put a significant load on the script node.
I was able to get something that kind of works. Passing the depth into the following Script node allows the output depth to be video encoded.
while True:
    depth = node.io['depth_in'].get()  # 1440x1080 RAW16 depth frame
    # Reinterpret the 16-bit depth as an 8-bit grayscale image of twice the
    # width, so the video encoder will accept it
    depth.setType(ImgFrame.Type.YUV400p)
    depth.setWidth(depth.getWidth() * 2)
    node.io['depth_out'].send(depth)
It can then be decoded and the original depth reconstructed on the host using
import cv2
import numpy as np

encoded_depth_data = depth_packet.getData()  # JPEG-encoded bytes from the device
depth_frame = cv2.imdecode(encoded_depth_data, cv2.IMREAD_GRAYSCALE)  # returns a 2880x1080 uint8
depth_frame = depth_frame.view(np.uint16).reshape(1080, 1440)  # reconstructs original 1440x1080 uint16
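In case it helps anyone, here's a minimal NumPy-only sketch showing that the byte reinterpretation itself is lossless (the array names and the random test data are just for illustration; the JPEG step is left out):

```python
import numpy as np

# Stand-in for a 1440x1080 (W x H) uint16 depth map from the device
rng = np.random.default_rng(0)
depth_u16 = rng.integers(0, 65535, size=(1080, 1440), dtype=np.uint16)

# Device side: reinterpret the bytes as an 8-bit image of twice the width
# (this is what the YUV400p + doubled-width trick amounts to)
as_u8 = depth_u16.view(np.uint8).reshape(1080, 2880)

# Host side: view the bytes back as uint16 and restore the original shape
restored = as_u8.view(np.uint16).reshape(1080, 1440)

assert np.array_equal(restored, depth_u16)  # exact round trip without JPEG in the path
```

With MJPEG in the middle the round trip is only approximate, since lossy compression perturbs the individual bytes.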
Seems hacky, but it works consistently and gives a ~50% reduction in bandwidth (with the default MJPEG quality of 97), which is nice 🙂 However, it adds extra encoding/decoding processing and introduces JPEG artifacts into the depth image, which is something to consider: since each 16-bit value is split across two 8-bit pixels, even small artifacts can noticeably corrupt the reconstructed depth values.
Thank you all!