Exploring Internal Simulation of Perception in Mobile Robots

Research output: Paper in conference proceeding (peer-reviewed)

Based on a neuroscientific hypothesis, this paper explores the possibility of an ‘inner world’ based on internal simulation of perception. We present three sets of experiments with a possible minimal model, using a simulated Khepera robot controlled by a simple recurrent connectionist network. Using an evolutionary algorithm, the robots are trained on increasingly complex tasks. In the first experiment, serving as a baseline, robots are simply trained to map sensory input to motor output such that they move around in an environment without collisions. In the second experiment, robots are additionally trained to predict the next time step’s sensory input. In the third experiment, finally, the robot’s own prediction replaces the actual sensory input, in order to investigate its capability to act ‘blindly’, i.e. in the temporary absence of external stimuli. Although only the first two experiments give positive results, we conclude that the experimental framework presented here should prove useful in the investigation of more complex artificial neural models.
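The architecture described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration only (class and variable names are not from the paper): an Elman-style recurrent network that maps sensory input plus a hidden context to motor output and to a prediction of the next time step's sensory input, with a flag that feeds the prediction back in place of real input for the 'blind' condition. In the paper's setup the weights would be found by an evolutionary algorithm; here they are simply random.

```python
import numpy as np

class RecurrentController:
    """Minimal Elman-style recurrent network (sketch, names hypothetical):
    maps sensors + hidden context to motor outputs and to a prediction
    of the next time step's sensory input. Random weights stand in for
    the evolved ones used in the paper."""

    def __init__(self, n_sensors=8, n_hidden=6, n_motors=2, seed=0):
        rng = np.random.default_rng(seed)
        # Input to the hidden layer is sensors concatenated with the
        # previous hidden activation (the Elman context units).
        self.W_in = rng.normal(0.0, 0.5, (n_hidden, n_sensors + n_hidden))
        self.W_motor = rng.normal(0.0, 0.5, (n_motors, n_hidden))
        self.W_pred = rng.normal(0.0, 0.5, (n_sensors, n_hidden))
        self.context = np.zeros(n_hidden)

    def step(self, sensors):
        x = np.concatenate([sensors, self.context])
        self.context = np.tanh(self.W_in @ x)
        motors = np.tanh(self.W_motor @ self.context)
        prediction = np.tanh(self.W_pred @ self.context)  # next sensors
        return motors, prediction

def run(controller, first_sensors, steps=5, blind=False):
    """Closed-loop run. In 'blind' mode (third experiment) the network's
    own prediction replaces the actual sensory input on each step."""
    sensors = np.asarray(first_sensors, dtype=float)
    motor_trace = []
    for _ in range(steps):
        motors, prediction = controller.step(sensors)
        motor_trace.append(motors)
        # In a real simulation the non-blind branch would read the
        # Khepera's proximity sensors here; this sketch just reuses
        # the current values.
        sensors = prediction if blind else sensors
    return motor_trace
```

The essential point of the third experiment is visible in the last line of the loop: the perception-prediction pathway closes on itself, so the network acts for a while on internally simulated input rather than external stimuli.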
Original language: English
Title of host publication: Lund University Cognitive Studies
Editors: Kai Oliver Arras, Albert-Jan Baerveldt, Christian Balkenius, Wolfram Burgard, Roland Siegwart
Publisher: Lund University
Number of pages: 7
Publication status: Published - 2001

Subject classification (UKÄ)

  • Neurosciences


Keywords

  • cognitive
  • sensory anticipation
  • inner world
  • simulation of perception
  • representation
  • robotics

