# Changelog

## 🚀 Added

- Added `OCSORTTracker`, a clean re-implementation of OC-SORT. OC-SORT shifts to an observation-centric paradigm, using real detections to correct Kalman filter errors accumulated during occlusions. It introduces Observation-Centric Re-Update (ORU) for state recovery, Observation-Centric Momentum (OCM) for direction-consistency-weighted association, and Observation-Centric Recovery (OCR) for second-stage heuristic matching. OC-SORT achieves the highest HOTA on MOT17 and DanceTrack with default parameters. (#207)
| Algorithm | Description | MOT17 HOTA | SportsMOT HOTA | SoccerNet HOTA | DanceTrack HOTA |
|---|---|---|---|---|---|
| SORT | Kalman filter + Hungarian matching baseline. | 58.4 | 70.9 | 81.6 | 45.0 |
| ByteTrack | Two-stage association using high and low confidence detections. | 60.1 | 73.0 | 84.0 | 50.2 |
| OC-SORT | Observation-centric recovery for lost tracks. | 61.9 | 71.7 | 78.4 | 51.8 |
```python
import cv2
import supervision as sv
from inference import get_model

from trackers import OCSORTTracker

model = get_model("rfdetr-medium")
tracker = OCSORTTracker()

box_annotator = sv.BoxAnnotator()
label_annotator = sv.LabelAnnotator()

cap = cv2.VideoCapture("<SOURCE_VIDEO_PATH>")
if not cap.isOpened():
    raise RuntimeError("Failed to open video source")

while True:
    ret, frame = cap.read()
    if not ret:
        break

    result = model.infer(frame)[0]
    detections = sv.Detections.from_inference(result)
    detections = tracker.update(detections)

    labels = [str(tracker_id) for tracker_id in detections.tracker_id]
    frame = box_annotator.annotate(frame, detections)
    frame = label_annotator.annotate(frame, detections, labels=labels)

    cv2.imshow("RF-DETR + OC-SORT", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
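The OCM component mentioned above penalizes candidate matches that contradict a track's historical motion direction. A minimal NumPy sketch of such a direction-consistency cost, assuming two past observation centers per track; the function name, default weight, and exact cost shape are illustrative, not the library's API:

```python
import numpy as np

def ocm_cost(track_prev_center, track_last_center, det_centers, weight=0.2):
    # Direction-consistency penalty in the spirit of OCM: compare the
    # track's historical motion direction (prev -> last observation)
    # against the direction from the last observation toward each
    # candidate detection, and penalize the angular disagreement.
    v_track = np.asarray(track_last_center) - np.asarray(track_prev_center)
    v_det = np.asarray(det_centers) - np.asarray(track_last_center)
    theta_track = np.arctan2(v_track[1], v_track[0])
    theta_det = np.arctan2(v_det[:, 1], v_det[:, 0])
    diff = np.abs(theta_track - theta_det)
    diff = np.minimum(diff, 2 * np.pi - diff)  # wrap angles into [0, pi]
    return weight * diff
```

In OC-SORT this term is added to the IoU-based association cost, so detections lying along the track's established direction are preferred when appearance and overlap are ambiguous.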
- Added `trackers download` CLI command and `download_dataset` Python API. Download benchmark datasets directly from the command line or from code. Supports MOT17 and SportsMOT with split and asset filtering. (#262)
```bash
# List available datasets
trackers download --list

# Download full dataset
trackers download mot17

# Download specific split and asset type
trackers download mot17 --split train --annotations-only

# Custom output directory
trackers download sportsmot --split val -o ./datasets
```

```python
from trackers import download_dataset, Dataset, DatasetSplit, DatasetAsset

download_dataset(
    dataset=Dataset.MOT17,
    split=[DatasetSplit.VAL],
    asset=[DatasetAsset.ANNOTATIONS, DatasetAsset.DETECTIONS],
    output_dir="./data",
)
```

| Dataset | Description | Splits | Assets | License |
|---|---|---|---|---|
| `mot17` | Pedestrian tracking with crowded scenes and frequent occlusions. | train, val, test | frames, annotations, detections | CC BY-NC-SA 3.0 |
| `sportsmot` | Sports broadcast tracking with fast motion and similar-looking targets. | train, val, test | frames, annotations | CC BY 4.0 |
- Added `--track-ids` flag to `trackers track` CLI command. Filter displayed tracks by track ID to focus on specific objects in a scene. (#280)
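Conceptually, this filtering amounts to a boolean mask over `tracker_id`. A minimal sketch, assuming the helper name; `sv.Detections` supports boolean-mask indexing, so the mask can be applied directly with `detections[mask]`:

```python
import numpy as np

def track_id_mask(tracker_ids: np.ndarray, keep_ids: list) -> np.ndarray:
    # True where a detection's tracker_id is one of the requested ids;
    # apply to an sv.Detections via detections[mask] to keep only those tracks.
    return np.isin(tracker_ids, keep_ids)
```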
```bash
trackers track --source video.mp4 --output output.mp4 \
    --model rfdetr-medium \
    --tracker bytetrack \
    --track-ids 1,2
```

## 🌱 Changed
- Made `--source` optional in `trackers track` when `--detections` is provided and no visual output is requested, enabling frameless tracking for evaluation workflows. (#322)
- Optimized `xcycsr_to_xyxy` and `xyxy_to_xcycsr` bounding box converters for the single-box hot path, reducing per-call overhead in inner tracking loops. (#296)
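For reference, the SORT-family `xcycsr` state stores box center, scale (area), and aspect ratio. The conversions reduce to a few arithmetic operations; a sketch assuming that convention (the library's actual converters may differ in argument layout and batching):

```python
import numpy as np

def xyxy_to_xcycsr(box):
    # Assumes SORT's convention: s = area (w * h), r = aspect ratio (w / h).
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    return np.array([x1 + w / 2, y1 + h / 2, w * h, w / h])

def xcycsr_to_xyxy(state):
    # Recover width from area and ratio: w = sqrt(s * r), h = s / w.
    xc, yc, s, r = state
    w = np.sqrt(s * r)
    h = s / w
    return np.array([xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2])
```

Because these run once per box per frame inside the Kalman predict/update loop, even small per-call savings compound across long videos.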
## 🛠️ Fixed

- Fixed a bug in MOT evaluation where ground-truth entries with `conf=0` (distractors) were not filtered, causing artificially low scores on MOT17. Tracker entries with `id < 0` are now also excluded. Results now match TrackEval exactly. (#322)
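The fix amounts to two row filters applied to MOT-format tables before matching. A hedged NumPy sketch assuming the MOTChallenge column layout `[frame, id, x, y, w, h, conf, ...]`; the function name and column positions are illustrative, not the library's actual code:

```python
import numpy as np

def filter_mot_entries(gt: np.ndarray, pred: np.ndarray):
    # Drop gt distractors: rows with conf == 0 are annotated but
    # excluded from evaluation under the MOTChallenge protocol.
    gt = gt[gt[:, 6] != 0]
    # Drop tracker rows with negative (placeholder) track ids.
    pred = pred[pred[:, 1] >= 0]
    return gt, pred
```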
## 🏆 Contributors
@JVSCHANDRADITHYA (Chandradithya Janaswami), @salmanmkc (Salman Chishti), @AlexBodner (Alexander Bodner), @Borda (Jirka Borovec), @SkalskiP (Piotr Skalski)