Hi @RaveenaJadhav,
I did some testing, and it seems to me that cropping isn't the cause of the low FPS. I used the tiny YOLO example, which is very similar to your code, but instead of your barcode model I used tiny YOLO v4, and I also added your cropping logic.
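Roughly, the test pipeline looked like the sketch below. It's not my exact script, just a minimal sketch assuming the standard depthai Python API; the zoo model name, crop rectangle, and preview size are placeholders, not your actual values:

```python
import depthai as dai
import blobconverter  # only used here to fetch a tiny YOLO v4 blob for the test

pipeline = dai.Pipeline()

# Color camera - the sensor resolution is what I varied between 1080P and 4K
cam = pipeline.create(dai.node.ColorCamera)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_4_K)
cam.setPreviewSize(416, 416)  # what the NN (and XLinkOut) actually receive
cam.setInterleaved(False)

# Crop + resize, roughly mimicking your ImageManip logic (placeholder crop values)
manip = pipeline.create(dai.node.ImageManip)
manip.initialConfig.setCropRect(0.25, 0.25, 0.75, 0.75)
manip.initialConfig.setResize(416, 416)
cam.preview.link(manip.inputImage)

# Tiny YOLO v4 in place of your barcode model
nn = pipeline.create(dai.node.YoloDetectionNetwork)
nn.setBlobPath(blobconverter.from_zoo(name="yolo-v4-tiny-tf", shaves=6))
nn.setConfidenceThreshold(0.5)
nn.setNumClasses(80)
nn.setCoordinateSize(4)
nn.setAnchors([10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319])
nn.setAnchorMasks({"side26": [1, 2, 3], "side13": [3, 4, 5]})
nn.setIouThreshold(0.5)
manip.out.link(nn.input)

# Send detections back to the host
xout_nn = pipeline.create(dai.node.XLinkOut)
xout_nn.setStreamName("nn")
nn.out.link(xout_nn.input)
```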
Initial observations/tests:
- Setting the color camera sensor resolution to 1080P gives me around 30 FPS.
- Setting the color camera sensor resolution to 4K gives me around 19-20 FPS.
- Changing the crop and resize values didn't affect the end performance, and a difference wouldn't really make sense anyway, since both the neural network and XLinkOut receive the same resolution image (set by previewSize) in both cases.
So I did some debugging, and it looks like the FPS drop stems from resource management.
Note that the cmx/shaves values in the logs below seem to be switched.
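In case you want to reproduce these allocation logs on your side, they are printed at debug verbosity. The usual knobs are the DEPTHAI_LEVEL environment variable or the device log-level calls; a minimal sketch (exact output may vary between depthai versions):

```python
import depthai as dai

# Easiest route, no code changes - run the script with the env var:
#   DEPTHAI_LEVEL=debug python3 main.py
# This makes the library print the "allocated resources" lines at pipeline start.

# Alternatively, raise the device log levels from code:
device = dai.Device(pipeline)
device.setLogLevel(dai.LogLevel.DEBUG)        # verbosity of logs the device sends to the host
device.setLogOutputLevel(dai.LogLevel.DEBUG)  # verbosity of logs the host prints to stdout
```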
1080P
ColorCamera allocated resources: no shaves; cmx slices: [13-15]
ImageManip allocated resources: shaves: [15-15] no cmx slices
NeuralNetwork allocated resources: shaves: [0-12] cmx slices: [0-12]
Inference thread count: 2, number of shaves allocated per thread: 6
4K
ColorCamera allocated resources: no shaves; cmx slices: [10-15]
ImageManip allocated resources: shaves: [15-15] no cmx slices
NeuralNetwork allocated resources: shaves: [0-9] cmx slices: [0-9]
Inference thread count: 1, number of shaves allocated per thread: 6
As you can see, at 4K the camera uses 6 shaves (confirmed by the docs), which leaves the NeuralNetwork node with only 10 shaves available; for a model compiled for 6 or more shaves, that forces the node to run on a single inference thread.
On the other hand, at 1080P (3 shaves used by the camera), 13 shaves are available to the NeuralNetwork node, which is enough for it to run with 2 inference threads (2 threads * 6 shaves = 12 shaves), which in turn allows for higher FPS.
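If it's useful, you can also request the thread count explicitly instead of relying on the automatic allocation. This won't help if there aren't enough free shaves for the blob, it just makes the intent explicit (sketch, assuming `nn` is your NeuralNetwork/YoloDetectionNetwork node):

```python
# Ask the NN node for 2 inference threads; the blob still has to fit, i.e.
# 2 * (shaves the blob was compiled for) free shaves must be available.
nn.setNumInferenceThreads(2)
```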
So after compiling the model for 5 shaves (2 threads * 5 shaves = 10 shaves, which fits into the 10 shaves left free at 4K), the model now runs on 2 inference threads and the resulting framerate at 4K is 26 FPS.
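For completeness, this is roughly how the 5-shave blob can be produced with blobconverter. The zoo model name is the one I used for testing; for your barcode model you'd compile your own OpenVINO IR instead (the file paths below are placeholders):

```python
import blobconverter

# Tiny YOLO v4 from the model zoo, compiled for 5 shaves so that
# 2 inference threads (2 * 5 = 10 shaves) still fit next to the 4K camera.
blob_path = blobconverter.from_zoo(name="yolo-v4-tiny-tf", shaves=5)

# For your own model, compile the OpenVINO IR instead (placeholder paths):
# blob_path = blobconverter.from_openvino(
#     xml="barcode_model.xml", bin="barcode_model.bin", shaves=5
# )

nn.setBlobPath(blob_path)
```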
Hope this helps in your case as well 🙂
Jaka