TY - JOUR
T1 - Attention control learning in the decision space using state estimation
AU - Gharaee, Zahra
AU - Fatehi, Alireza
AU - Mirian, Maryam
AU - Nili Ahmadabadi, Majid
PY - 2014/8/8
Y1 - 2014/8/8
N2 - The main goal of this paper is modelling attention while using it for efficient path planning of mobile robots. The key challenge in pursuing these two goals concurrently is how to make an optimal, or near-optimal, decision despite the time and processing-power limitations inherent in a typical multi-sensor real-world robotic application. To recognise the environment efficiently under these two limitations, the attention of an intelligent agent is controlled within the reinforcement learning framework. We propose an estimation method using estimated mixture-of-experts task and attention learning in perceptual space. An agent learns how to employ its sensory resources, and when to stop observing, by estimating its perceptual space. In this paper, static estimation of the state space in a learning task problem, examined in the Webots™ simulator, is performed. Simulation results show that a robot learns to achieve an optimal policy at a controlled cost by estimating the state space instead of continually updating sensory information.
KW - Attention Control
KW - State Estimation
KW - Bayesian Reinforcement Learning
KW - Decision Making
KW - Mixture of Experts
DO - 10.1080/00207721.2014.945982
M3 - Article
VL - 47
SP - 1659
EP - 1674
JO - International Journal of Systems Science
JF - International Journal of Systems Science
SN - 0020-7721
IS - 7
ER -