Abstract

We present a unique comparative analysis and evaluation of vision-, radio-, and audio-based localization algorithms. We create the first baseline for these sensors using the recently published Lund University Vision, Radio, and Audio (LuViRA) dataset, in which all sensors are synchronized and measured in the same environment. We highlight some of the challenges of using each sensor for indoor localization tasks. Each sensor is paired with a current state-of-the-art localization algorithm and evaluated on several aspects: localization accuracy, reliability and sensitivity to environment changes, calibration requirements, and potential system complexity. Specifically, the evaluation covers the ORB-SLAM3 algorithm for vision-based localization with an RGB-D camera, a machine-learning algorithm for radio-based localization with massive MIMO technology, and the SFS2 algorithm for audio-based localization with distributed microphones. The results can serve as a guideline and basis for further development of robust, high-precision multi-sensory localization systems, e.g., through sensor fusion and context- and environment-aware adaptation.
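For context, the "localization accuracy" aspect named above is commonly quantified as the per-sample Euclidean error between an estimated trajectory and a time-synchronized ground-truth trajectory. The following is a minimal sketch of that metric, not code from the paper; the function name and the toy data are illustrative only.

```python
import numpy as np

def localization_error(estimated, ground_truth):
    """Per-sample Euclidean position error between an estimated
    trajectory and a synchronized ground-truth trajectory, both
    given as (N, 3) arrays of x/y/z positions in meters.
    (Hypothetical helper for illustration; not from the paper.)"""
    estimated = np.asarray(estimated, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return np.linalg.norm(estimated - ground_truth, axis=1)

# Toy example: summarize accuracy as mean and root-mean-square error.
est = np.array([[0.0, 0.0, 1.0], [1.1, 0.2, 1.0], [2.0, 0.1, 1.0]])
gt = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [2.0, 0.0, 1.0]])
e = localization_error(est, gt)
print(f"MAE: {e.mean():.3f} m, RMSE: {np.sqrt((e**2).mean()):.3f} m")
```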
Original language: English
Number of pages: 11
Journal: IEEE Journal of Indoor and Seamless Positioning and Navigation
Publication status: E-pub ahead of print, 17 Jul 2024

Subject classification (UKÄ)

  • Computer Vision and Robotics (Autonomous Systems)
