Hi,
I currently have a custom YOLOv8n model trained and deployed on the OAK camera, but I'd like to pre-process the incoming frames before feeding them to the neural network. My model's input is 640x640, and the original training data was 1440x640, barrel-distortion corrected, then resized and padded to fit the 640x640 input. Is there a way to apply the same pre-processing to the raw frames coming from the camera?
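In case a concrete sketch of the resize-and-pad (letterbox) step helps, here is a host-side version in plain NumPy; the function name and the 114 padding value are my own choices, not anything from the SDK. (On-device, I believe the ImageManip node can do a letterbox resize via `setResizeThumbnail`, and the barrel-distortion correction would normally be done beforehand with `cv2.undistort` and your calibration data.)

```python
import numpy as np

def letterbox(frame, size=640, pad_value=114):
    """Resize `frame` to fit inside a size x size square, padding the rest.

    Nearest-neighbour resizing keeps this sketch dependency-free; in
    practice you would use cv2.resize (and cv2.undistort for the barrel
    distortion) instead.
    """
    h, w = frame.shape[:2]
    scale = size / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour index maps for rows and columns
    ys = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = frame[ys][:, xs]
    # Centre the resized image on a padded square canvas
    out = np.full((size, size) + frame.shape[2:], pad_value, dtype=frame.dtype)
    top = (size - new_h) // 2
    left = (size - new_w) // 2
    out[top:top + new_h, left:left + new_w] = resized
    return out

# A 1440x640 (w x h) frame scales to 640x284 and is padded vertically
frame = np.zeros((640, 1440, 3), dtype=np.uint8)
square = letterbox(frame)  # shape (640, 640, 3)
```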
Second, I'd like to start testing inference accuracy against pre-existing images saved on the computer. Is there a way to "bypass" the camera feed and continually run the on-board neural network on saved images instead?
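For what it's worth, if I understand the DepthAI SDK correctly, `OakCamera` accepts a `replay` source instead of a live sensor, so frames from a video file or a folder of images are streamed to the device and run through the same on-board network. A minimal sketch, assuming a placeholder `test_images/` folder (check the SDK docs for your version's exact replay behaviour):

```python
from depthai_sdk import OakCamera

# 'test_images/' is a placeholder path to a folder of saved images;
# a video file path should work as well.
with OakCamera(replay='test_images/') as oak:
    color = oak.create_camera('color')
    nn = oak.create_nn('model/yolo.json', color, nn_type='yolo')
    # spatial=True is omitted here, since still images carry no depth data
    oak.visualize(nn, fps=True, visualizer='opencv')
    oak.start(blocking=True)
```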
Lastly, how would I send the output bounding-box coordinates, classes, and confidences to another Python script?
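One way I could imagine doing this is registering a callback on the NN output and forwarding each detection as a JSON line over a local socket, which any other Python script can read. The socket port, helper names, and the exact packet attributes are assumptions on my part (I believe the SDK's detection packets expose the underlying `dai.ImgDetection` objects with `label`, `confidence`, and normalized `xmin`/`ymin`/`xmax`/`ymax`):

```python
import json
import socket

def detection_to_json(label, confidence, bbox):
    """Pack one detection into a JSON line (bbox = normalized [xmin, ymin, xmax, ymax])."""
    return json.dumps({"label": label, "confidence": confidence, "bbox": bbox})

def send_detections(packet, sock):
    # packet.img_detections.detections is assumed to be a list of dai.ImgDetection
    for det in packet.img_detections.detections:
        line = detection_to_json(det.label, det.confidence,
                                 [det.xmin, det.ymin, det.xmax, det.ymax])
        sock.sendall((line + "\n").encode())

# Wiring it into the pipeline (requires a device; port 9500 is arbitrary):
# from depthai_sdk import OakCamera
# sock = socket.create_connection(("localhost", 9500))
# with OakCamera(args=args) as oak:
#     color = oak.create_camera('color')
#     nn = oak.create_nn(args['config'], color, nn_type='yolo')
#     oak.callback(nn, callback=lambda packet: send_detections(packet, sock))
#     oak.start(blocking=True)
```

The receiving script then just reads the socket line by line and `json.loads` each one.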
For reference, here is the sample code I'm using to drive the camera and neural network:
```python
from depthai_sdk import OakCamera, ArgsParser
import argparse

# parse arguments
parser = argparse.ArgumentParser()
parser.add_argument("-conf", "--config", help="Trained YOLO json config path", default='model/yolo.json', type=str)
args = ArgsParser.parseArgs(parser)

with OakCamera(args=args) as oak:
    color = oak.create_camera('color')
    nn = oak.create_nn(args['config'], color, nn_type='yolo', spatial=True)
    oak.visualize(nn, fps=True, scale=2/3, visualizer='opencv')
    oak.visualize(nn.out.passthrough, fps=True, visualizer='opencv')
    oak.start(blocking=True)
```
Thanks in advance!