Quadcopter drone formation control via onboard visual perception

Files
Dunn_bu_0017N_15283.pdf (34.55 MB)
Main thesis
Dunn_bu_0017N_429/finalDetail_XLine.avi (7.1 MB)
Dunn_bu_0017N_429/finalDetail_YCircle.avi (6.38 MB)
Dunn_bu_0017N_429/finalDetail_3dRectangle.avi (8.13 MB)
Dunn_bu_0017N_429/finalDetail_XLine.mov (16.68 MB)
Date
2020
Authors
Dunn, James Kenneth
Abstract
Quadcopter drone formation control is an important capability for fields such as area surveillance, search and rescue, agriculture, and reconnaissance. Of particular interest is formation control in environments where radio communications and/or GPS are denied or not accurate enough for the desired application. To address this, we focus on vision as the sensing modality. We train an Hourglass Convolutional Neural Network (CNN) to discriminate between quadcopter pixels and non-quadcopter pixels in a live video feed and use it to guide a formation of quadcopters. The CNN outputs "heatmaps": pixel-by-pixel likelihood estimates of the presence of a quadcopter. These heatmaps suffer from short-lived false detections. To mitigate these, we apply a version of the Siamese network technique to consecutive frames, which suppresses clutter and promotes temporal smoothness in the heatmaps. The heatmaps provide an estimate of the range and bearing to the other quadcopter(s), which we use to calculate flight control commands and maintain the desired formation. We implement the algorithm on a single-board computer (ODROID XU4) with a standard webcam mounted on a quadcopter drone. Flight tests in a motion capture volume demonstrate successful formation control with two quadcopters in a leader-follower setup.
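
The hourglass CNN described above is an encoder-decoder that takes a camera frame in and produces a same-resolution, per-pixel likelihood map. As a rough illustration only (the thesis network's depth, channel counts, and training details are not given here), a minimal PyTorch sketch of that frame-in, heatmap-out structure might look like this:

```python
# Minimal hourglass-style encoder-decoder sketch, NOT the thesis
# network: layer sizes and channel counts are illustrative.
import torch
import torch.nn as nn

class HourglassHeatmapNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the frame to a compact representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        # Sigmoid squashes logits to per-pixel likelihoods in [0, 1].
        return torch.sigmoid(self.decoder(self.encoder(x)))

net = HourglassHeatmapNet()
frame = torch.rand(1, 3, 240, 320)   # one dummy 320x240 RGB frame
heatmap = net(frame)                 # shape: (1, 1, 240, 320)
```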
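The abstract states that a Siamese-style technique on consecutive frames suppresses short-lived false detections, but it does not spell out the fusion rule. One simple stand-in (an assumption for illustration, not the thesis method) is to run the same weight-shared network on frames t-1 and t and keep only detections that persist across both, for example via an elementwise minimum:

```python
# Illustrative temporal-consistency sketch; the thesis's actual
# Siamese-network formulation may differ substantially.
import numpy as np

def temporally_smoothed(prev_heatmap: np.ndarray,
                        curr_heatmap: np.ndarray) -> np.ndarray:
    # A blob present in only one of two consecutive heatmaps (a
    # transient false detection) is driven toward zero here, while
    # a persistent detection survives largely unchanged.
    return np.minimum(prev_heatmap, curr_heatmap)

prev = np.zeros((240, 320)); prev[100:110, 150:160] = 0.9
curr = np.zeros((240, 320)); curr[100:110, 150:160] = 0.8
curr[30:35, 40:45] = 0.7          # one-frame clutter blip
smooth = temporally_smoothed(prev, curr)
assert smooth[32, 42] == 0.0      # blip suppressed
assert smooth[105, 155] == 0.8    # persistent target kept
```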
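Turning a heatmap detection into formation commands follows standard pinhole-camera geometry: the blob centroid's horizontal offset from the image center gives bearing, and the blob's apparent width against the quadcopter's known physical width gives range. The sketch below assumes illustrative camera intrinsics, gains, and standoff distance (none of these constants come from the thesis) and applies a plain proportional law of the kind a leader-follower setup could use:

```python
# Hedged sketch of range/bearing estimation and proportional
# leader-follower commands; every constant here is an assumption.
import math

FX = 600.0             # assumed focal length, pixels
CX = 320.0             # assumed principal point (image center, x)
DRONE_WIDTH_M = 0.3    # assumed physical width of the leader quad
DESIRED_RANGE_M = 2.0  # assumed formation standoff distance
K_RANGE, K_BEARING = 0.5, 1.0  # assumed proportional gains

def range_and_bearing(blob_center_px: float, blob_width_px: float):
    """Pinhole-camera estimates from the detected heatmap blob."""
    bearing_rad = math.atan2(blob_center_px - CX, FX)
    range_m = FX * DRONE_WIDTH_M / blob_width_px
    return range_m, bearing_rad

def follower_commands(range_m: float, bearing_rad: float):
    """Drive range toward the standoff and bearing toward zero."""
    forward_vel = K_RANGE * (range_m - DESIRED_RANGE_M)
    yaw_rate = K_BEARING * bearing_rad
    return forward_vel, yaw_rate

r, b = range_and_bearing(blob_center_px=400.0, blob_width_px=45.0)
vx, wz = follower_commands(r, b)   # r = 4.0 m, so vx = 1.0 m/s here
```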