Hi Kristoffer
You have a limited number of SHAVE cores and CMX slices, let's say 16 of each.
- The color camera uses CMX slices; how many depends on the resolution.
- The NN uses both, in matching numbers - it takes as many CMX slices as the number of shaves you compiled the model for.
- One is used by ImageManip.
- Stereo also uses some (not relevant in this case).
Because you raised the resolution to 4K, you are now using 6 CMX slices instead of 3. ImageManip uses 1 (shared with the ColorCamera, so it doesn't count extra).
16 - 6 = 10 CMX slices remain. Since the NN wants one CMX slice per shave, your total available shave count is 10.
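That budgeting can be sketched in a few lines of Python (the constant and helper names are mine, just for illustration):

```python
TOTAL_CMX = 16  # total CMX slices on the VPU

def available_nn_shaves(camera_cmx_slices: int) -> int:
    # The NN needs one CMX slice per shave, so whatever CMX is left
    # after the camera caps how many shaves inference can use.
    return TOTAL_CMX - camera_cmx_slices

print(available_nn_shaves(3))  # 1080P camera -> 13 shaves available
print(available_nn_shaves(6))  # 4K camera -> 10 shaves available
```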
Since you compiled the model for 6 shaves, you are at a loss: 4 of the 10 shaves go unused. The API sees this and warns you to lower the shave count to 5, which lets it run two separate inference threads and greatly speeds up inference: 2 inference threads * 5 shaves per thread = 10 shaves, so all shaves are used.
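A minimal sketch of that trade-off (the function is mine, and I'm assuming the device runs at most two inference threads, as the debug output suggests):

```python
def shave_utilization(available: int, shaves_per_thread: int):
    """How many inference threads fit and how many SHAVEs sit idle.

    Assumes at most two inference threads run on the device.
    """
    threads = min(2, available // shaves_per_thread)
    used = threads * shaves_per_thread
    return threads, available - used

# Model compiled for 6 shaves with 10 available (4K): 1 thread, 4 shaves idle.
print(shave_utilization(10, 6))  # (1, 4)
# Recompiled for 5 shaves: 2 threads, every shave used.
print(shave_utilization(10, 5))  # (2, 0)
```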
Running your scripts with DEPTHAI_LEVEL=DEBUG set will enable debug mode, in which you can see the following output (this one is tailored to your example):
1080P:
ColorCamera allocated resources: no shaves; cmx slices: [13-15]
ImageManip allocated resources: shaves: [15-15] no cmx slices
NeuralNetwork allocated resources: shaves: [0-12] cmx slices: [0-12]
Inference thread count: 2, number of shaves allocated per thread: 6
4K:
ColorCamera allocated resources: no shaves; cmx slices: [10-15]
ImageManip allocated resources: shaves: [15-15] no cmx slices
NeuralNetwork allocated resources: shaves: [0-9] cmx slices: [0-9]
Inference thread count: 1, number of shaves allocated per thread: 6
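The [a-b] ranges in those lines are inclusive, so the count is just b - a + 1. A quick helper (mine, for illustration) to read them:

```python
import re

def count_units(line: str) -> int:
    """Count resources in a DepthAI debug range like '[10-15]' (inclusive)."""
    lo, hi = map(int, re.search(r"\[(\d+)-(\d+)\]", line).groups())
    return hi - lo + 1

print(count_units("ColorCamera allocated resources: no shaves; cmx slices: [10-15]"))  # 6
print(count_units("NeuralNetwork allocated resources: shaves: [0-9] cmx slices: [0-9]"))  # 10
```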
Hope this helps,
Jaka