
A Reward-based Visual Attention Framework for Humanoid Robots

Cem Karaoguz, Tobias Rodemann, Britta Wrede, "A Reward-based Visual Attention Framework for Humanoid Robots", European Conference on Eye Movements, 2011.

Abstract

Humans are exposed to a vast amount of visual information throughout their interaction with the environment. Among the many visual stimuli, finding those relevant to the task at hand is a crucial process. The human visual system employs top-down attention mechanisms that generate eye movements towards task-relevant stimuli [Land & Tatler, Looking and Acting, 2009, Oxford University Press]. These attention mechanisms are partly driven by a reward system that forms action-perception associations. A model that derives such relations through reinforcement learning was presented in [Ballard & Hayhoe, 2009, Modelling the role of task in the control of gaze, Visual Cognition, 17(6-7), 1185-1204]. We develop a similar framework using a systems approach in which individual visual processes and cognitive tasks (e.g. visual saliency, motion detection, depth estimation, grasping) are modelled as modules. The distribution of gaze control among the different modules needed to accomplish a given task is learnt through a reward mechanism. The framework was applied in a scenario with the iCub humanoid robot in a simulation environment, where the robot learnt to look at the relevant objects to support a grasping task.
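The core idea of distributing gaze control among modules via reward can be sketched as a simple value-learning loop. The code below is a minimal illustrative sketch, not the authors' implementation: module names, the epsilon-greedy selection rule, the learning rate, and the toy reward values are all assumptions introduced here for illustration.

```python
import random

class GazeArbiter:
    """Reward-based arbitration of gaze control among perceptual modules.

    Hypothetical sketch: each module keeps a running value estimate of how
    useful granting it gaze control is for the current task. Selection is
    epsilon-greedy over these values, and values are updated from a scalar
    task reward after each gaze shift.
    """

    def __init__(self, modules, epsilon=0.1, alpha=0.2):
        self.values = {m: 0.0 for m in modules}  # value estimate per module
        self.epsilon = epsilon                   # exploration rate
        self.alpha = alpha                       # learning rate

    def select(self):
        # Occasionally explore a random module; otherwise pick the best one.
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def update(self, module, reward):
        # Move the module's value estimate towards the observed reward.
        self.values[module] += self.alpha * (reward - self.values[module])

# Toy episode: in a grasping task, gaze shifts driven by the (assumed)
# "depth" module yield the highest reward, so it should come to dominate.
random.seed(0)
arbiter = GazeArbiter(["saliency", "motion", "depth"])
task_reward = {"saliency": 0.2, "motion": 0.1, "depth": 1.0}
for _ in range(200):
    module = arbiter.select()
    arbiter.update(module, task_reward[module])
```

After enough trials the arbiter assigns the highest value to the module whose gaze targets best support the task, which mirrors the learnt task-dependent distribution of gaze control described in the abstract.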


