Face tracking in the loop
The project connects directly to the DJI Tello's video stream, runs YOLO-based face detection with OpenCV, and converts those detections into control commands. The drone tries to keep the detected face centered in the frame by adjusting its yaw, altitude, and distance.
System design
- Video frames are pulled from the Tello over UDP and decoded in Python (see the capture sketch below).
- Faces are detected with YOLO via OpenCV; the bounding box position and size serve as the control signal (detection sketch below).
- A control loop sends velocity and rotation commands back to the drone over its SDK (control sketch below).
- Safety and bounds checks clamp commands to avoid overly aggressive movements (also shown in the control sketch).
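The post doesn't include the capture code itself, so here is a minimal sketch of the first bullet, assuming the stock Tello SDK ports (commands on UDP 8889, video on UDP 11111) and an OpenCV build with FFmpeg to decode the H.264 stream:

```python
import socket
import cv2

TELLO_ADDR = ("192.168.10.1", 8889)   # Tello command IP/port (SDK default)
VIDEO_URL = "udp://0.0.0.0:11111"     # the Tello streams H.264 video to this port

# Put the drone into SDK mode and turn the video stream on.
cmd_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cmd_sock.bind(("", 9000))             # local port so the drone's replies have somewhere to land
cmd_sock.sendto(b"command", TELLO_ADDR)
cmd_sock.sendto(b"streamon", TELLO_ADDR)

# OpenCV (via FFmpeg) can decode the raw UDP H.264 stream directly.
cap = cv2.VideoCapture(VIDEO_URL)

while True:
    ok, frame = cap.read()
    if not ok:
        continue                      # dropped or partial frame; skip it
    cv2.imshow("tello", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

A wrapper library such as djitellopy performs the same steps if you would rather not manage sockets directly.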
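For the detection step, one way to run YOLO "via OpenCV" is the `cv2.dnn` module's DetectionModel wrapper. This is a sketch under that assumption; the config and weights file names are placeholders, since the post doesn't say which face model it uses:

```python
import cv2
import numpy as np

# Placeholder file names -- substitute whichever YOLO face config/weights you have.
net = cv2.dnn.readNetFromDarknet("yolo-face.cfg", "yolo-face.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

def detect_largest_face(frame):
    """Run YOLO on a frame and return the largest face box as (x, y, w, h), or None."""
    class_ids, confidences, boxes = model.detect(
        frame, confThreshold=0.5, nmsThreshold=0.4
    )
    if len(boxes) == 0:
        return None
    # Keep the biggest detection -- usually the closest face.
    areas = [w * h for (x, y, w, h) in boxes]
    return boxes[int(np.argmax(areas))]
```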
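The last two bullets, turning the box into commands and bounding them, amount to a proportional controller plus clamping. A rough sketch, assuming the Tello SDK's `rc` command (left/right, forward/back, up/down, yaw, each in -100..100); the gains and target face size here are illustrative, not the project's tuned values:

```python
def clamp(value, lo=-60, hi=60):
    """Bound a command so the drone never receives an overly aggressive setpoint."""
    return max(lo, min(hi, value))

def track_face(frame, box, target_area_frac=0.05,
               k_yaw=0.25, k_ud=0.25, k_fb=400):
    """Turn a face box into an 'rc left/right fwd/back up/down yaw' command string."""
    h, w = frame.shape[:2]
    if box is None:
        return "rc 0 0 0 0"           # no face: hover in place

    x, y, bw, bh = box
    cx, cy = x + bw / 2, y + bh / 2

    # Horizontal offset -> yaw, vertical offset -> up/down,
    # apparent face size -> forward/backward (distance keeping).
    err_x = cx - w / 2
    err_y = (h / 2) - cy              # image y grows downward
    err_area = target_area_frac - (bw * bh) / (w * h)

    yaw = clamp(int(k_yaw * err_x))
    ud = clamp(int(k_ud * err_y))
    fb = clamp(int(k_fb * err_area))
    return f"rc 0 {fb} {ud} {yaw}"

# In the main loop (reusing cmd_sock and TELLO_ADDR from the capture sketch):
#   box = detect_largest_face(frame)
#   cmd_sock.sendto(track_face(frame, box).encode(), TELLO_ADDR)
```

Keeping the clamp tight (±60 here) is the cheapest safety check: even if detection jitters or latency spikes, the drone never sees a full-speed command.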
Lessons & takeaways
This was a hands-on way to explore latency, feedback loops, and the difference between a model that “just detects” something and a full system that has to respond physically and safely.