- depth values go into the diff as 16-bit values (0..65535), depth data in millimeters
- they are calculated from 96 disparity steps
- so you can only have 96 different depth readings
- the readings at long distances "jump" a lot (depth is not linear in disparity)
- so from step 1 to step 2 the depth might change by 10 millimeters, but from step 90 to step 91 it might change by 10,000 millimeters
- so at far away distances (around 7 meters) a pixel may be read at step 90 on one frame, and then, even though nothing has changed, be read at step 91 on the next
- this means even though nothing changed, there might be a diff reading of 10,000 millimeters
- when you wave your hand, the diff between the distance to your hand and the distance to whatever was behind your hand might be small, maybe 1,000 millimeters
- So the depth_diff has a range of values of 0 (because either it didn't have a confident reading or it is the exact same reading), or 1,000 (because of your hand movement), or 10,000 (because a far wall changed a step reading even though nothing changed) - the sketch below shows how large those far steps get
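- here is a quick sketch of that step math (assumed numbers: an OAK-D style baseline of ~75 mm and a focal length of ~440 pixels - check your own calibration for the real values):

focal_px = 440.0      # assumed focal length in pixels
baseline_mm = 75.0    # assumed stereo baseline in millimeters
# depth = focal * baseline / disparity; disparity step 0 (infinity) skipped,
# so the 96 disparity steps give at most 95 finite depth readings
depths = [focal_px * baseline_mm / d for d in range(95, 0, -1)]
for near, far in zip(depths, depths[1:]):
    if far - near > 500:  # only print the big far-distance jumps
        print(f"{near:7.0f} mm -> {far:7.0f} mm (step of {far - near:.0f} mm)")

- running that shows steps of a few millimeters up close and over 10,000 millimeters at the far end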
- So when you run
colorize = cv2.normalize(diff, None, 255, 0, cv2.NORM_INF, cv2.CV_8UC1)
you are asking OpenCV to rescale (normalize) the values that you have in the depth_diff so that the smallest value you have in the depth_diff (0) is 0, and the largest value you have in the depth_diff (10,000) is 255
- From the depth_diff you sent I am guessing that the top right of the image is far away walls, and the bottom left of the image is also far away
- The largest values are "white" (the far away things)
- So your small depth_diff values (like your hand moving) get squashed to be near black (very dark grey), with a value of about 25, only 10% of the largest value (which will be white) - the toy example below shows this
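- you can see the squashing with a toy diff made of just the three values from above:

import numpy as np
import cv2

# toy depth_diff: no change, a moving hand, and a far wall flipping a step
diff = np.array([[0, 1000, 10000]], dtype=np.uint16)
colorize = cv2.normalize(diff, None, 255, 0, cv2.NORM_INF, cv2.CV_8UC1)
print(colorize)  # hand pixel comes out around 25 (nearly black), wall at 255 (white)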
- if you set up your depth camera so that there was a large poster only a few meters away from it, then waved your hand in front of THAT, the depth_diff values from your hand moving would be larger relative to all the readings, so they would show up more clearly
- try changing to
colorize = cv2.compare(diff, 0, cv2.CMP_GT)
and see what happens - cv2.compare outputs a 0/255 mask, so it should make it so pixels are only white or black: all 0 values (no difference or unconfident) will be black, and ANY depth_diff values that are not 0 will be white
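Putting it together, here is a minimal sketch of the subtract-and-display step, assuming you already have two consecutive depth frames as 16-bit numpy arrays (e.g. from getFrame() on a depth output queue):

import numpy as np
import cv2

def depth_diff_view(prev_frame, curr_frame):
    # depth frames are uint16 millimeters; cast before subtracting so the
    # difference cannot wrap around below zero
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    # 0 means "no confident reading" - blank out pixels that were invalid
    # in either frame so they do not show up as fake motion
    diff[(prev_frame == 0) | (curr_frame == 0)] = 0
    diff = diff.astype(np.uint16)
    # binary view: white wherever anything changed at all
    mask = cv2.compare(diff, 0, cv2.CMP_GT)
    # scaled view: brightness proportional to the size of the change
    # (assumes diff has at least one non-zero pixel)
    scaled = cv2.normalize(diff, None, 255, 0, cv2.NORM_INF, cv2.CV_8UC1)
    return mask, scaled

You would then cv2.imshow() either view each frame.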
Now I am confused, how does getting the depth_diff of depth frames get you that?
If you want the speed of an object, you will want to run a SpatialImgDetection to get the center point and track it over time.
What objects are you trying to get speed for?
The depth_diff is in a single dimension, not in 3 dimensions.
There are a lot of good models out there, just choose one from the model zoo. Use SpatialImgDetections. Find the x,y,z per frame, then find the difference in position between frames, divide by the difference in frame timestamps, and you are set.
You will likely want to average this over a sliding window to get rid of noise - something like the sketch below.
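A minimal sketch of that math, assuming the (x, y, z) coordinates come in millimeters from SpatialImgDetections (detection.spatialCoordinates) along with a frame timestamp - the class name and window size here are just illustrative:

import math
from collections import deque

class SpeedEstimator:
    def __init__(self, window=5):
        self.last = None                    # previous (x, y, z, t)
        self.speeds = deque(maxlen=window)  # sliding window of raw speeds

    def update(self, x_mm, y_mm, z_mm, t_seconds):
        if self.last is not None:
            px, py, pz, pt = self.last
            dt = t_seconds - pt
            if dt > 0:
                # 3D distance travelled between frames, in millimeters
                dist = math.dist((x_mm, y_mm, z_mm), (px, py, pz))
                self.speeds.append(dist / dt / 1000.0)  # mm/s -> m/s
        self.last = (x_mm, y_mm, z_mm, t_seconds)
        # average over the window to smooth out the depth step noise
        return sum(self.speeds) / len(self.speeds) if self.speeds else 0.0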