Hi,
We're using a camera setup with multiple streams. One of the streams sends high-resolution images, which we compress with the camera's MJPEG video codec. The ROS pipeline does not use these images other than storing or uploading them as JPEGs.
So far we have been using this callback to get and publish the high resolution images:
def __get_and_publish_hi_res_image(self, frame: ImgFrame):
    image_data = frame.getData()
    image = cv2.imdecode(image_data, cv2.IMREAD_COLOR)
    header = …
    # imdecode with IMREAD_COLOR yields BGR, so 'bgr8' is the accurate
    # encoding here ('passthrough' would label the image as generic '8UC3').
    image_msg = self.__bridge.cv2_to_imgmsg(image, header=header, encoding='bgr8')
    self.__hi_res_image_pub.publish(image_msg)
This publishes the images as sensor_msgs/Image messages. But since we want to store or upload the images as JPEGs anyway, it makes more sense to publish them as sensor_msgs/CompressedImage messages. So we changed the callback to:
def __get_and_publish_hi_res_image(self, frame: ImgFrame):
    image_data = frame.getData()
    header = …
    image_msg = CompressedImage(header=header, format='jpeg', data=image_data)
    self.__hi_res_image_pub.publish(image_msg)
You'd expect the node to use less CPU with the latter callback, since there's no image decoding. But in fact it uses 25-30% more CPU. Does anyone have an idea how that can be?
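In case it helps, this is roughly how we plan to profile a single callback invocation to see where the extra CPU goes, using Python's built-in cProfile. It's a self-contained sketch: `fake_frame_data` stands in for the payload from `frame.getData()`, and the dummy `callback` is a placeholder for the real message-construction-and-publish body (both are assumptions for illustration, not our actual pipeline):

```python
import cProfile
import io
import pstats

# Stand-in for frame.getData(): a ~2 MB buffer, roughly the size of a
# hi-res JPEG (the size is an assumption for illustration).
fake_frame_data = bytes(2 * 1024 * 1024)

def callback(data: bytes) -> int:
    # Placeholder for the real callback body; when profiling the node,
    # replace this with the actual message construction + publish call.
    return len(data)

profiler = cProfile.Profile()
profiler.enable()
callback(fake_frame_data)
profiler.disable()

# Dump the ten most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

With the real callback dropped in, the per-function breakdown should show whether the time is spent in message construction, serialization, or the publish call itself.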
Thanks in advance