TY - JOUR
T1 - LuViRA Dataset Validation and Discussion: Comparing Vision, Radio, and Audio Sensors for Indoor Localization
AU - Yaman, Ilayda
AU - Tian, Guoda
AU - Tegler, Erik
AU - Gulin, Jens
AU - Challa, Nikhil
AU - Tufvesson, Fredrik
AU - Edfors, Ove
AU - Åström, Kalle
AU - Malkowsky, Steffen
AU - Liu, Liang
PY - 2024/07/17
Y1 - 2024/07/17
N2 - We present a unique comparative analysis and evaluation of vision-, radio-, and audio-based localization algorithms. We create the first baseline for the aforementioned sensors using the recently published Lund University Vision, Radio, and Audio (LuViRA) dataset, in which all sensors are synchronized and measured in the same environment. Some of the challenges of using each specific sensor for indoor localization tasks are highlighted. Each sensor is paired with a current state-of-the-art localization algorithm and evaluated on several aspects: localization accuracy, reliability and sensitivity to environment changes, calibration requirements, and potential system complexity. Specifically, the evaluation covers the ORB-SLAM3 algorithm for vision-based localization with an RGB-D camera, a machine-learning algorithm for radio-based localization with massive MIMO technology, and the SFS2 algorithm for audio-based localization with distributed microphones. The results can serve as a guideline and basis for further development of robust, high-precision multi-sensory localization systems, e.g., through sensor fusion and context- and environment-aware adaptation.
AB - We present a unique comparative analysis and evaluation of vision-, radio-, and audio-based localization algorithms. We create the first baseline for the aforementioned sensors using the recently published Lund University Vision, Radio, and Audio (LuViRA) dataset, in which all sensors are synchronized and measured in the same environment. Some of the challenges of using each specific sensor for indoor localization tasks are highlighted. Each sensor is paired with a current state-of-the-art localization algorithm and evaluated on several aspects: localization accuracy, reliability and sensitivity to environment changes, calibration requirements, and potential system complexity. Specifically, the evaluation covers the ORB-SLAM3 algorithm for vision-based localization with an RGB-D camera, a machine-learning algorithm for radio-based localization with massive MIMO technology, and the SFS2 algorithm for audio-based localization with distributed microphones. The results can serve as a guideline and basis for further development of robust, high-precision multi-sensory localization systems, e.g., through sensor fusion and context- and environment-aware adaptation.
U2 - 10.1109/JISPIN.2024.3429110
DO - 10.1109/JISPIN.2024.3429110
M3 - Article
SN - 2832-7322
JO - IEEE Journal of Indoor and Seamless Positioning and Navigation
JF - IEEE Journal of Indoor and Seamless Positioning and Navigation
ER -