Ggifvgk
- Oct 5, 2024
- Joined Nov 9, 2023
- 0 best answers
pedro-UCA Thank you very much for the code!
But I get an error after running the program:
output0 = np.reshape(in_nn_yolo.getLayerFp16("output0"), newshape=([1, 116, 8400]))
ValueError: cannot reshape array of size 0 into shape (1,116,8400)
The blob model I use is the official yolov8n-seg.blob from YOLO. When I run the program, I comment out the `if __name__ == '__main__':` block in YOLOSeg.py. Beyond that, I don't know what the problem is.
I've converted the YOLOv8 instance segmentation model into a blob, but I don't know how to extract the neural network's output data in order to segment the RGB image. Has anyone implemented this?
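A size of 0 from `getLayerFp16("output0")` usually means the layer name does not match the blob, or no inference result arrived for that frame. A minimal sketch (pure NumPy, no DepthAI required) of guarding the reshape — `reshape_output` is just an illustrative helper name, and the 1×116×8400 shape is the one from the error above:

```python
import numpy as np

# Expected flat size for the "output0" layer of a YOLOv8n-seg blob:
# 116 values per anchor (4 box + 80 class + 32 mask coefficients) x 8400 anchors.
EXPECTED_SHAPE = (1, 116, 8400)

def reshape_output(flat_data, shape=EXPECTED_SHAPE):
    """Validate the flat buffer length before reshaping.

    A length of 0 means the layer name is wrong for this blob, or the
    queue delivered a frame with no inference result attached.
    """
    arr = np.asarray(flat_data, dtype=np.float32)
    expected = int(np.prod(shape))
    if arr.size != expected:
        raise ValueError(
            f"got {arr.size} values, expected {expected}; check the layer "
            "names (e.g. with getAllLayerNames()) and that an inference "
            "result actually arrived for this frame")
    return arr.reshape(shape)
```

With the correct layer name the reshape then succeeds; with an empty buffer you get a descriptive error instead of the bare `cannot reshape array of size 0`.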
Thank you very much for your patience. The problem is that the joint calibration program for my OAK-D Pro currently only runs on Ubuntu, not Windows, and all of my programs are on Windows, so I am unable to get the transformation matrix between the camera and the IMU.
Sorry, I may have made a mistake with the transform relationship. My camera and IMU are not jointly calibrated, so I shouldn't apply the IMU's rotation matrix directly to the camera coordinate system. Maybe that's the key.
Sorry for not describing my problem in detail before. I took the IMU's quaternion at a certain moment in time and converted it into a rotation matrix:
import numpy as np

q = np.array([-0.019775, 0.479309, 0.003967, 0.877441])  # x, y, z, w

def quaternion_to_rotation_matrix(q):  # x, y, z, w
    rot_matrix = np.array(
        [[1.0 - 2 * (q[1] * q[1] + q[2] * q[2]), 2 * (q[0] * q[1] - q[3] * q[2]), 2 * (q[3] * q[1] + q[0] * q[2])],
         [2 * (q[0] * q[1] + q[3] * q[2]), 1.0 - 2 * (q[0] * q[0] + q[2] * q[2]), 2 * (q[1] * q[2] - q[3] * q[0])],
         [2 * (q[0] * q[2] - q[3] * q[1]), 2 * (q[1] * q[2] + q[3] * q[0]), 1.0 - 2 * (q[0] * q[0] + q[1] * q[1])]],
        dtype=q.dtype)
    return rot_matrix

R = np.eye(4)
R[:3, :3] = quaternion_to_rotation_matrix(q)
print(R)

As a result, it was:
[[ 0.54049429 -0.02591829 0.84097384 0. ]
[-0.01199505 0.99918642 0.03850563 0. ]
[-0.84128763 -0.03089995 0.53974366 0. ]
[ 0. 0. 0. 1. ]]
I think the resulting rotation matrix should be an orthogonal identity matrix, but obviously R is not. An orthogonal identity matrix can convert the point cloud to the world coordinate system. That's where I'm confused.
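For what it's worth, the 3×3 block produced by the formula above should come out orthogonal (with determinant +1) whenever the quaternion is normalized, even when it is far from the identity. A quick numerical self-check on the same quaternion (a standalone sketch, not DepthAI-specific):

```python
import numpy as np

# Same quaternion as above, normalized to be safe.
q = np.array([-0.019775, 0.479309, 0.003967, 0.877441])  # x, y, z, w
q = q / np.linalg.norm(q)

x, y, z, w = q
R3 = np.array([
    [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (w * y + x * z)],
    [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
    [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
])

print(np.allclose(R3 @ R3.T, np.eye(3)))   # True: R3 is orthogonal
print(np.isclose(np.linalg.det(R3), 1.0))  # True: a proper rotation
```

So the matrix being different from the identity does not mean it is wrong; it just encodes the IMU's current orientation.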
How to calibrate the IMU with the OAK-D Pro on a Windows system
Hi, recently I wanted to convert the point cloud generated by the OAK-D Pro from the camera coordinate system to the world coordinate system using the rotation vector information output by the IMU, but I am on Windows and have not calibrated the IMU. I don't know how far off the IMU data is now, but the coordinate transformation results are not ideal. Please, what should I do? I'm about to collapse. Please help me.
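To make the intended transform concrete, here is a sketch of what I am attempting (all names are illustrative, not a DepthAI API). The `R_cam_to_imu` extrinsic rotation would normally come from camera-IMU joint calibration; defaulting it to the identity ASSUMES the two frames coincide, which is exactly the missing piece on Windows and a plausible reason the result looks wrong:

```python
import numpy as np

def imu_quaternion_to_matrix(q):
    """q = [x, y, z, w] from the IMU rotation vector; normalized first."""
    x, y, z, w = q / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (w * y + x * z)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

def camera_points_to_world(points_cam, q_imu, R_cam_to_imu=np.eye(3)):
    """Rotate an (N, 3) camera-frame point cloud into the world frame.

    R_cam_to_imu is the fixed camera-to-IMU extrinsic rotation from joint
    calibration; np.eye(3) is a placeholder assumption, not a calibrated
    value. Translation is omitted for brevity.
    """
    R_imu_to_world = imu_quaternion_to_matrix(np.asarray(q_imu, dtype=float))
    return np.asarray(points_cam) @ (R_imu_to_world @ R_cam_to_imu).T
```

With the identity quaternion `[0, 0, 0, 1]` the points come back unchanged, which is a cheap sanity check; with a real quaternion the output is only as good as the extrinsic rotation supplied.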