
Theory of mind and information relevance in human centric human robot cooperation

Moritz Bühler, "Theory of mind and information relevance in human centric human robot cooperation", Technical University Darmstadt, 2022.

Abstract

In interaction with others, besides considering the environment and task requirements, it is crucial to account for and develop an understanding of the interaction partner and her state of mind. An understanding of the other's state of knowledge and plans is important to support efficient interaction activities, including information sharing or the distribution of subtasks. A robot cooperating with and supporting a human partner might decide to communicate information that it has collected. However, sharing every piece of information is not feasible: not all information is both currently relevant and new to the human partner, and indiscriminate communication will annoy and distract her from other important activities. An understanding of the human state of mind enables the robot to balance communication against the needs of the human partner and the communication effort for both. An artificial theory of mind is proposed as Bayesian inference of human beliefs during interaction. It relies on a general model of human information perception and decision making. To cope with the complexity of second-order inference, i.e., estimating what the human has inferred about her environment, an efficient linearization-based filtering approach is introduced. The inferred human belief, as an understanding of her mental state, is used to estimate her situation awareness. When awareness is lacking, e.g., the human is unaware of some important piece of information, the robot provides supportive communication. To this end, it evaluates the relevance and novelty of information against the communication effort, following a systematic information sharing concept. The robot decides whether, when, and what type of information to provide in the current situation to support the human efficiently without annoying her. The decision is derived by planning under uncertainty while considering the inferred human belief in relation to the task requirements. Systematic properties and benefits of the derived concepts are discussed in illustrative example situations.

Two human-robot collaborative tasks and corresponding user studies were designed and investigated, applying artificial theory of mind as belief inference and assistive communication in interaction with humans. Equipped with the artificial theory of mind, the robot is able to infer interpretable information about the human's mental state and can detect a lack of human awareness. Supported by adaptive, human-centric information sharing, participants recovered from unawareness much earlier. A comparison to state-of-the-art communication strategies demonstrates its efficiency, as the new concept explicitly balances the benefits and costs of communication, providing support while avoiding unnecessary interruptions. By sharing information according to human needs and environmental urgency, the robot neither takes over nor instructs the human, but enables her to make good decisions herself.
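The following minimal sketch (not the thesis implementation; all function names, probabilities, and cost values are hypothetical) illustrates the two ideas the abstract describes: the robot maintains a Bayesian estimate of the human's belief about a world fact, and it shares that fact only when the expected benefit of closing the awareness gap outweighs the cost of interrupting.

```python
"""Illustrative sketch of belief inference and cost-balanced information sharing.
All names and numbers are hypothetical examples, not the thesis's actual model."""


def update_human_belief(prior: float, fact_holds: bool, visibility: float) -> float:
    """Second-order update: the robot's estimate of P(human believes the fact),
    given whether the fact holds and how likely the human was to perceive it."""
    if fact_holds:
        # With probability `visibility` the human noticed the evidence herself.
        return visibility + (1.0 - visibility) * prior
    return prior


def should_communicate(p_human_aware: float,
                       relevance: float,
                       communication_cost: float) -> bool:
    """Share the information only if its expected benefit (novelty * relevance)
    exceeds the effort/interruption cost of communicating it."""
    novelty = 1.0 - p_human_aware           # how likely the information is new to her
    expected_benefit = novelty * relevance  # expected reduction in task cost
    return expected_benefit > communication_cost


if __name__ == "__main__":
    # Example: the robot detected an obstacle the human had only a 20% chance to see.
    p_aware = update_human_belief(prior=0.1, fact_holds=True, visibility=0.2)
    print(f"estimated P(human aware) = {p_aware:.2f}")
    print("communicate:", should_communicate(p_aware, relevance=0.8, communication_cost=0.3))
```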


