Hi pHubb,
I agree with Brandon's approach: store the first ID that appears in the scene and keep tracking that person for as long as they have that ID. This should help you lock onto one person. There are a couple of variables that may be useful in your case. You can find this piece of code (line 207) in the GitHub repo [https://github.com/luxonis/depthai-experiments/blob/master/gen2-pedestrian-reidentification/main.py]:
for person_id in results:
    dist = cos_dist(reid_result, results[person_id])
    if dist > 0.7:
        result_id = person_id
        results[person_id] = reid_result
        break
else:
    result_id = next_id
    results[result_id] = reid_result
    results_path[result_id] = []
    next_id += 1
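For context, the cos_dist used above is the script's cosine-similarity helper for comparing re-ID embeddings; a minimal sketch of that computation (assuming 1-D NumPy embedding vectors) looks like:

import numpy as np

def cos_dist(a, b):
    # Cosine similarity between two re-identification embedding vectors;
    # values close to 1.0 mean the detections very likely belong to the same person.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))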
The variable result_id is the ID printed on each person in the video. You can link result_id to your code, which only needs to track one person.
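A minimal sketch of that locking idea (the names locked_id and should_track are my own, not from the script) could be:

locked_id = None  # hypothetical state: the ID of the first person seen in the scene

def should_track(result_id):
    # Lock onto the first ID that appears and ignore every other person afterwards.
    global locked_id
    if locked_id is None:
        locked_id = result_id
    return result_id == locked_id

Inside the per-detection loop you would then only run your own logic when should_track(result_id) returns True.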
As for the x-coordinates, you can use the variable raw_bbox (line 202) for tracking. It holds the raw det.xmin, det.ymin, det.xmax, det.ymax values. Alternatively, look at cv2.rectangle(frame, (bbox[0], bbox[1]), (bbox[2], bbox[3]), (10, 245, 10), 2) (line 221): (bbox[0], bbox[1]) and (bbox[2], bbox[3]) are the top-left and bottom-right coordinates of the box. You can also feed that data stream into your code and use it to trigger your logic.
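For example, a minimal sketch of a position-based trigger (assuming det.xmin/det.xmax are normalized to [0, 1] as in DepthAI detections, and on_person_crossed is a hypothetical hook into your own code) could be:

# Scale the normalized detection coordinates to pixel space.
frame_w = frame.shape[1]
x_center = int((det.xmin + det.xmax) / 2 * frame_w)

# Hypothetical trigger: fire once the tracked person passes the middle of the frame.
if x_center > frame_w // 2:
    on_person_crossed(x_center)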
Hope this helps! Please feel free to let us know how it goes, pHubb.
@Luxonis-Lukasz and @Brandon, please feel free to correct me if you find any misleading info above.
Best,
Steven