Heyo, I'm not sure if this forum is the right place to ask questions like this; if not, let me know and I'll delete this post. I'm working on a sorting robot that needs to sort Dutch fryable snacks, and I'm running into some difficulties. I've trained a YOLOv5 model, which works very well for detecting which snacks are where, but in order to actually sort a snack with a robot arm, I need its location and how many degrees it is rotated.
Because a YOLOv5 bounding box is always axis-aligned (rectangular), I needed a different way of detecting the orientation of the snacks. For that I used basic OpenCV functions to filter out the background and create contours of the remaining snacks. The problem with this is that if snacks overlap, or are directly against each other, it detects two snacks as one big snack.
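To give an idea of what I mean, here's a rough sketch of that pipeline (the color thresholds, kernel size, and area cutoff are just placeholders, not my real values):

```python
import cv2
import numpy as np

img = cv2.imread("snacks.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# 1. Filter out the background with a color threshold (placeholder range).
mask = cv2.inRange(hsv, (0, 40, 40), (180, 255, 255))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

# 2. Find contours of whatever is left.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# 3. Fit a rotated rectangle to each contour to get position + angle.
for cnt in contours:
    if cv2.contourArea(cnt) < 500:  # skip small noise blobs
        continue
    (cx, cy), (w, h), angle = cv2.minAreaRect(cnt)
    print(f"snack at ({cx:.0f}, {cy:.0f}), rotated {angle:.1f} degrees")
```

This works fine for isolated snacks, but two touching snacks end up as a single contour, so I get one rotated rectangle spanning both of them.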
I'm trying to find a better way to get the orientation of each snack, and I was wondering if any of you have tips or know of an approach that handles this.
PS: If this is too vague, I've also made a PDF with pictures and more context.