How do I install the latest version of blobconverter?
I am trying the command python3 -m pip install blobconverter as suggested here [https://github.com/luxonis/blobconverter/tree/master/cli].
But it only installs up to version 2021.4, and I see no support for version 2022.1 in the __init__.py file after downloading the package. Can you please help me with it?
Thanks.
Blob conversion issue while custom-training a model with YOLOv7
Hi MonalisaAchalla,
You specify the version you want to use by passing "-v" or "--version". Example:
python3 -m blobconverter --zoo-name face-detection-retail-0004 --shaves 6 -v 2022.1
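If you are calling blobconverter from Python instead of the CLI, the version can be passed the same way. A minimal sketch following the blobconverter README, reusing the example model name and shave count from the command above:

import blobconverter

# Download the zoo model and compile it for the requested OpenVINO version
blob_path = blobconverter.from_zoo(
    name="face-detection-retail-0004",
    shaves=6,
    version="2022.1",
)
print(blob_path)  # path to the cached .blob file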
Hope this helps,
Jaka
Thank you for your reply. I have set the OpenVINO version to 2022_1.
But somehow my blobconverter always defaults to 2021.4. I installed version 1.3.0 of the blobconverter package from git, and there is no support for the 2022_1 version there. I think that is causing the issue, and I was wondering how else to manually install a blobconverter that supports 2022.1.
Hi MonalisaAchalla,
I see what you mean; we will update the PyPI package with the new version. For now, you can use
python3 -m pip install git+https://github.com/luxonis/blobconverter.git#egg=blobconverter\&subdirectory=cli --ignore-installed
and see if it works.
Let me know if this solves the problem.
Jaka
Hello Jaka,
It worked, thanks a lot.
Also, I realised that in depthai_sdk the oak_camera code forces the OpenVINO version to 2021.4; I had to go in there and change it. It then worked.
Hi MonalisaAchalla,
I am also facing the same issue while putting the blob file into the code. Can you please tell me how you did it?
Hi ThasnimolVSam22d007,
You should be able to force the OpenVINO version with nn.forced_openvino_version = '2022.1'; that is, if you are using the SDK. Otherwise, use setOpenVINOVersion(dai.OpenVINO.VERSION_2022_1).
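For context, a minimal sketch of the API route with a plain depthai pipeline. The blob path is a placeholder, and the enum spelling follows the line above; it may differ slightly between depthai releases:

import depthai as dai

pipeline = dai.Pipeline()
# Force the pipeline-wide OpenVINO version, as suggested above
pipeline.setOpenVINOVersion(dai.OpenVINO.VERSION_2022_1)

nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath('path/to/model.blob')  # placeholder: your 2022.1 blob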
Hope this helps,
Jaka
But I am getting this error. Can you please tell me the size of your single image, the size of your image dataset, the trained input size, the number of shaves for the blob, and so on? I am having reshaping issues.
Hi ThasnimolVSam22d007
I would need some more context about the model you are using and what the expected output is.
I think the model you are using outputs an array of size 6, which probably holds the bounding boxes/labels. You are trying to parse that output as an image.
Maybe you could also add your code so I can check. But please do it in a separate thread. I'll link them together if we come to the same conclusion.
Thanks,
Jaka
I am doing face detection / general object detection with a custom model, trained on images captured from the OAK-D (size 1440*1080, the normal size). I later tried different image sizes like 416 and 640, then converted to ONNX and to a blob with 6 shaves (using the tool), but I can't understand where the error is happening; while implementing it in code, this reshaping error shows up.
Do you want to see the training part or the deployment? For training I am using YOLOv8, and for deployment this code: https://github.com/luxonis/depthai-experiments/blob/master/gen2-yolo/yolox/main.py
I tried to adapt that code, but I could not work out where my custom model should be plugged into it.
Hi ThasnimolVSam22d007
Sorry, I thought the logs said 6 (instead of 0), so I thought you were getting results. Could you add the pipeline as well so I can see what operations you are running?
Also, correct me if I'm wrong: you are trying to create a YOLOv8-based model with our Colab notebooks, and you are running the blob inside a normal NN node (like in the experiments code)? Since you are using YOLO, I believe you could use the YoloDetectionNetwork node instead, so you don't have to manually parse the results. Code here.
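Something along these lines; a rough sketch only, where the blob path, input size, class count and thresholds are placeholders that have to match your custom model:

import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(416, 416)  # must match the blob's input size
cam.setInterleaved(False)

yolo = pipeline.create(dai.node.YoloDetectionNetwork)
yolo.setBlobPath('path/to/custom_yolo.blob')  # placeholder blob path
yolo.setNumClasses(1)          # e.g. 1 for a single "face" class
yolo.setCoordinateSize(4)
yolo.setConfidenceThreshold(0.5)
yolo.setIouThreshold(0.5)
cam.preview.link(yolo.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
yolo.out.link(xout.input)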
Thanks,
Jaka
For the example code you mentioned, how are you generating the blob and doing the training?
Because the problem is that whatever blob I generate does not fit into that code, even though I am following the same Colab and that blob converter.
Hi ThasnimolVSam22d007
Not all models work out of the box. Different models use different input and output configurations, which are preserved during blob creation. YOLO networks usually have a similar output structure, which the YoloDetectionNetwork node utilizes to parse the results automatically. For the YOLO node, the output is automatically ImgDetections, while for the standard NN node the result is whatever the model outputs and has to be parsed correctly before displaying; see the sketch below.
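To illustrate the difference on the host side, here is a sketch of reading the YOLO node's output, assuming a pipeline like the one sketched earlier with an XLinkOut stream named "detections":

import depthai as dai

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        msg = q.get()  # dai.ImgDetections produced by the Yolo node
        for det in msg.detections:
            # Already-decoded detections with normalized coordinates;
            # no manual reshaping of the raw output tensor is needed
            print(det.label, det.confidence, det.xmin, det.ymin, det.xmax, det.ymax)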
Could you share the name of the model you are using and the pipeline?
Thanks,
Jaka