Comparing Rule-based and LLM based Methods to enable Active Robot Assistant Conversations

Jan Leusmann, Chao Wang, Sven Mayer, "Comparing Rule-based and LLM based Methods to enable Active Robot Assistant Conversations", Workshop@CHI 2024: Building Trust in CUIs – From Design to Deployment, 2024.

Abstract

Human-robot interaction (HRI) has recently undergone major advancements. Robots are no longer purely autonomous agents; the domain is shifting more and more toward collaborative settings. Advances across several topics have made this shift possible, and robots can now perform a variety of tasks to support humans in their daily lives. However, in collaborative settings, communication is key. In human-human collaboration, we often need to ask questions about how a task should be performed or whether it is currently a good time to help. While this is often intuitive between humans, depending on how close their relationship is, in HRI it is currently unclear how to best enable this kind of communication. Current technical devices often react predictably to human input because we have learned their interaction modalities. Most voice assistants still react to user requests only after a wake word, and the interaction flow is often static and repeatable. On the one hand, this makes it possible to learn how to interact with these systems. On the other hand, it can lead to frustration when the system cannot dynamically react to unexpected requests from potentially novice users. Large language models (LLMs) have recently drastically changed the possibilities of interacting with technical systems. Because they can ``understand'' every human request to a certain extent, they enable more fluid interaction. Previously, most communication models were rule-based: the conversation flow was streamlined, and users had to learn how to communicate correctly with these systems to get their desired results.


