Autonomous Drone Racing - Vision-Based Gate Detection and MPC Control (ECE484)
Hybrid YOLOv5 perception + MPC control achieving 200%+ improvement over NeurIPS 2019 baseline.
Overview
Developed a vision-based gate detection and control system for autonomous drone racing as the final project for 'Principles of Safe Autonomy' (ECE 484), the course's first autonomous-drone final project. The perception stack is a hybrid pipeline combining traditional computer vision techniques with a YOLOv5 model fine-tuned on custom drone-racing gate datasets, followed by lightweight post-processing for robust real-time gate localization. It handles the varying lighting conditions, motion blur, and perspective distortion common in high-speed flight, achieving an average localization error of 6.2% of the gate's center (around 9 cm on a 1.6 m gate): accurate enough for the controller to fly aggressive trajectories without clipping gates.

The perception stack feeds a model predictive control (MPC) based controller, forming a hybrid perception-and-control architecture that maintains measurable localization error bounds and enforces safety constraints in real time for safe navigation in constrained environments. On the AirSim drone-racing platform, the integrated system achieved over 200% improvement relative to the NeurIPS 2019 'Game of Drones' baseline.

This project was my first encounter with the brittleness of 'state-of-the-art' perception: small changes in lighting, or motion blur induced by high-speed maneuvers, could cause failures the controller had no way to anticipate.
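The post-processing step described above can be sketched roughly as follows. This is a simplified illustration, not the project's actual code: `Detection`, `gate_center`, and `localization_error_pct` are hypothetical names, and the real pipeline combined YOLOv5 outputs with traditional CV cues rather than just picking the highest-confidence box.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    # Axis-aligned bounding box in pixels plus model confidence
    # (the shape YOLOv5-style detectors typically emit).
    x1: float
    y1: float
    x2: float
    y2: float
    conf: float

def gate_center(detections, conf_thresh=0.4):
    """Pick the most confident gate detection above a threshold and
    return its bounding-box center in pixel coordinates (or None)."""
    candidates = [d for d in detections if d.conf >= conf_thresh]
    if not candidates:
        return None
    best = max(candidates, key=lambda d: d.conf)
    return ((best.x1 + best.x2) / 2.0, (best.y1 + best.y2) / 2.0)

def localization_error_pct(est_center, true_center, gate_size_px):
    """Center error as a percentage of the gate's size; the 6.2%
    figure above is an average of this kind of quantity."""
    ex = est_center[0] - true_center[0]
    ey = est_center[1] - true_center[1]
    return 100.0 * (ex * ex + ey * ey) ** 0.5 / gate_size_px
```

For example, a detected center 10 px from ground truth on a gate spanning 160 px is a 6.25% localization error.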
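The receding-horizon idea behind the MPC controller can be illustrated with a minimal sketch. This is an assumption-laden toy, not the project's controller: it uses a 1-D double-integrator model and an unconstrained quadratic cost solved in closed form, whereas the real system used a full quadrotor model with explicit safety constraints.

```python
import numpy as np

# Toy 1-D double integrator: state x = [position, velocity], input u = acceleration.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

def mpc_step(x0, target, N=20, q=1.0, r=0.01):
    """One receding-horizon step: minimize
    sum_k q*(pos_k - target)^2 + r*u_k^2 over horizon N,
    then return only the first input (the MPC pattern)."""
    C = np.array([[1.0, 0.0]])          # we penalize position only
    F = np.zeros((N, 2))                # free response: positions from x0
    G = np.zeros((N, N))                # forced response: effect of each u_j
    Ak = np.eye(2)
    for k in range(N):
        Ak = A @ Ak                     # Ak = A^(k+1)
        F[k] = (C @ Ak).ravel()
        for j in range(k + 1):
            # coefficient of u_j in position at step k+1: C A^(k-j) B
            G[k, j] = (C @ np.linalg.matrix_power(A, k - j) @ B).item()
    # Unconstrained QP: minimize q*||F x0 + G U - target||^2 + r*||U||^2
    H = q * G.T @ G + r * np.eye(N)
    g = q * G.T @ (F @ x0 - target * np.ones(N))
    U = np.linalg.solve(H, -g)
    return U[0]
```

In a closed loop, `mpc_step` is called at every control tick with the latest state estimate (in the project, derived from the vision pipeline), so perception errors enter the controller directly; this is exactly why bounded localization error mattered.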