Vision-based Landing Guidance through Tracking and Orientation Estimation

Ferreira, João and Pinto, João and Moura, Júlia and Li, Yi and Castro, Cristiano and Angelov, Plamen (2024) Vision-based Landing Guidance through Tracking and Orientation Estimation. In: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). ISBN 9798350318920 (In Press)

WACV_2025_LARD.pdf - Accepted Version (1MB)
Available under License Creative Commons Attribution.

Abstract

Fixed-wing aerial vehicles are equipped with systems such as ILS (instrument landing system), PAR (precision approach radar), and DGPS (differential global positioning system), enabling fully automated landings. However, these systems impose significant costs on airport operations due to high installation and maintenance requirements. Moreover, since their navigation parameters come from ground or satellite signals, they are vulnerable to interference. A more cost-effective and independent alternative for guiding landings is a vision-based system that detects the runway and aligns the aircraft, reducing the pilot's cognitive load. This paper proposes a novel framework that addresses three key challenges in developing autonomous vision-based landing systems. First, to overcome the lack of aerial front-view video data, we created high-quality videos simulating landing approaches using the generator code available in the LARD (landing approach runway detection dataset) repository. Second, in contrast to previous studies that rely on object detection to locate the runway, we use the state-of-the-art model LoRAT to track runways within bounding boxes in each video frame. Third, to align the aircraft with the designated landing runway, we extract runway keypoints from the resulting LoRAT frames and estimate the camera's relative pose via the Perspective-n-Point algorithm. Our experimental results over a dataset of generated videos and original images from the LARD dataset consistently demonstrate the proposed framework's highly accurate tracking and alignment capabilities. The source code of our approach and the LoRAT model pre-trained on LARD videos are available at https://github.com/jpklock2/visionbased-landing-guidance
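As a concrete illustration of the final alignment step, the sketch below recovers the camera pose relative to a runway from four tracked corner keypoints using OpenCV's general-purpose solvePnP routine. All values here (runway dimensions, pixel keypoints, camera intrinsics) are hypothetical placeholders, and the code is a minimal sketch of the kind of Perspective-n-Point computation the abstract describes, not the authors' implementation.

import numpy as np
import cv2

# Hypothetical runway corners in a runway-centered world frame (meters):
# a 45 m wide, 3000 m long runway, ordered to match the image points below.
object_points = np.array([
    [-22.5,    0.0, 0.0],   # near-left threshold corner
    [ 22.5,    0.0, 0.0],   # near-right threshold corner
    [ 22.5, 3000.0, 0.0],   # far-right corner
    [-22.5, 3000.0, 0.0],   # far-left corner
], dtype=np.float64)

# Hypothetical runway keypoints extracted from a tracked frame (pixels),
# e.g., the four corners found inside the tracker's bounding box.
image_points = np.array([
    [512.0, 640.0],
    [768.0, 642.0],
    [661.0, 410.0],
    [619.0, 409.0],
], dtype=np.float64)

# Assumed pinhole intrinsics (focal length and principal point in pixels).
camera_matrix = np.array([
    [1000.0,    0.0, 640.0],
    [   0.0, 1000.0, 360.0],
    [   0.0,    0.0,   1.0],
])
dist_coeffs = np.zeros(5)  # assume an undistorted image

# Perspective-n-Point: solve for the camera pose relative to the runway.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)       # rotation matrix (world -> camera)
    camera_position = -R.T @ tvec    # camera center in runway coordinates
    print("Camera position (m):", camera_position.ravel())

From the recovered rotation and translation, quantities such as lateral offset and heading relative to the runway centerline follow by standard geometry.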

Item Type:
Contribution in Book/Report/Proceedings
ID Code:
227160
Deposited On:
28 Jan 2025 10:35
Refereed?:
Yes
Published?:
In Press
Last Modified:
21 Feb 2025 01:50