
Abstract

As robots increasingly integrate into our social environments, from factories to social spaces, there is a growing need to find ways to collaborate effectively in these dynamic settings. However, current robotics research is mostly aimed at task- and environment-specific programming. Even state-of-the-art collaborative robotics technologies have, at best, a rudimentary understanding of the multimodal methods human teammates use to communicate in real time. This increases the workload of human operators and critically limits robots' ability to operate in dynamic environments. We focus on one such dynamic setting: search and rescue (SAR) scenarios. To achieve effective collaboration between humans and robots in this setting, robots need to naturally understand human intentions by interpreting multimodal communication cues such as gaze, gestures, and contextual signals in real time. This research aims at achieving mixed-initiative interaction by addressing the gap of robots proactively collaborating with humans through a two-step process. The first part of the thesis, following a Design Science approach, explores the use and integration of non-verbal communication cues to conduct collaborative tasks in a SAR environment. By designing the human-in-the-loop collaboration system CueSense and testing different collaboration strategies, we investigate when and how humans and robots can dynamically share control during missions. This modular system tracks gaze to predict task focus and interprets gesture inputs for nuanced intent recognition. It is validated through user studies in which participants work alongside the system in different collaborative settings in a simulated search-and-rescue scenario. The results show that the system successfully assists users in the task, improves task efficiency and performance, and reduces mental workload.
The second part of the thesis focuses on intention recognition as a foundation for proactive support and mixed-initiative interaction in human-robot collaboration. We present an extensive review of the literature on intention recognition and identify gaps and challenges in implementing robust intention recognition systems in robotics. In summary, we focus our research on communication modalities, interfaces, and intention recognition for mixed-initiative interaction to enable efficient and seamless human-robot collaboration in dynamic, high-stakes scenarios.
Original language: English
Supervisors/Advisors
  • Topp, Elin A., Supervisor
  • Malec, Jacek, Assistant supervisor
  • Olofsson, Björn, Assistant supervisor
Place of Publication: Lund
Publisher
ISBN (Print): 978-91-8104-655-7
ISBN (Electronic): 978-91-8104-656-4
Publication status: Published - 2025 Sept

Subject classification (UKÄ)

  • Robotics and automation
  • Human-Computer Interaction
  • Computer Sciences

Related research output
  • Chaos to control: human assisted scene inspection

    Jena, A. & Topp, E. A., 2023 Mar 13, HRI 2023 - Companion of the ACM/IEEE International Conference on Human-Robot Interaction. IEEE Computer Society, p. 491-494 (ACM/IEEE International Conference on Human-Robot Interaction).

    Research output: Paper in conference proceeding (peer-reviewed)

    Open Access
