Abstract

We present a unique comparative analysis and evaluation of vision-, radio-, and audio-based localization algorithms. We create the first baseline for these sensors using the recently published Lund University Vision, Radio, and Audio (LuViRA) dataset, in which all sensors are synchronized and measured in the same environment. Some of the challenges of using each specific sensor for indoor localization tasks are highlighted. Each sensor is paired with a current state-of-the-art localization algorithm and evaluated along several dimensions: localization accuracy, reliability and sensitivity to environment changes, calibration requirements, and potential system complexity. Specifically, the evaluation covers the ORB-SLAM3 algorithm for vision-based localization with an RGB-D camera, a machine-learning algorithm for radio-based localization with massive MIMO technology, and the SFS2 algorithm for audio-based localization with distributed microphones. The results can serve as a guideline and basis for further development of robust, high-precision multi-sensory localization systems, e.g., through sensor fusion and context- and environment-aware adaptation.
Original language: English
Number of pages: 11
Journal: IEEE Journal of Indoor and Seamless Positioning and Navigation
DOI
Status: E-pub ahead of print - 17 July 2024

Subject classification (UKÄ)

  • Computer vision and robotics (autonomous systems)

Fingerprint

Explore the research topics of "LuViRA Dataset Validation and Discussion: Comparing Vision, Radio, and Audio Sensors for Indoor Localization". Together they form a unique fingerprint.