Inferring dependencies in Embodiment-based modular reinforcement learning

Research output: Contribution to journal › Article › peer-review



The state-spaces needed to describe realistic physical embodied agents are extremely large, which presents a serious challenge to classical reinforcement learning schemes. In previous work (Jacob et al., 2005a; Jacob et al., 2005b) we introduced our EMBER (EMbodiment-Based modulaR) reinforcement learning system, which describes a novel method for decomposing agents into modules based on the agent's embodiment. This modular decomposition factorises the state-space and dramatically improves performance in unknown and dynamic environments. However, while there are great advantages to be gained from a factorised state-space, the question of dependencies cannot be ignored. We present a development of the work reported in (Jacob et al., 2004) which shows, in a simple example, how dependencies may be identified using a heuristic approach. Results show that the system is able quickly to discover and act upon dependencies, even where they are neither simple nor deterministic.
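The abstract's two key ideas, factorising the state-space by embodiment module and detecting inter-module dependencies heuristically, can be sketched as follows. This is an illustrative Python sketch, not the paper's actual EMBER implementation; the module names, state sizes, and variance threshold are invented for the example.

```python
from collections import defaultdict
from statistics import mean

# Illustrative sketch only: module names, state sizes, and the threshold
# below are hypothetical, not taken from the EMBER paper.

# 1. Factorising the state-space: instead of one Q-table over the full
#    product state-space, keep one small table per embodiment module.
STATE_SIZES = {"arm": 10, "gripper": 4, "base": 20}  # hypothetical modules
N_ACTIONS = 5

joint_size = N_ACTIONS
for n in STATE_SIZES.values():
    joint_size *= n                       # 10 * 4 * 20 * 5 = 4000 entries

modular_size = sum(n * N_ACTIONS for n in STATE_SIZES.values())  # 170 entries

# 2. A simple heuristic dependency test: module A is flagged as depending
#    on module B if A's mean reward shifts noticeably across B's states.
def depends_on(samples, threshold=0.5):
    """samples: iterable of (b_state, reward) pairs observed by module A."""
    by_b_state = defaultdict(list)
    for b_state, reward in samples:
        by_b_state[b_state].append(reward)
    means = [mean(rewards) for rewards in by_b_state.values()]
    return max(means) - min(means) > threshold

print(joint_size, modular_size)  # -> 4000 170
print(depends_on([(0, 1.0), (0, 0.9), (1, 0.1), (1, 0.0)]))  # -> True
```

The size comparison shows why factorisation matters: the monolithic table grows with the product of the module state sizes, while the modular tables grow only with their sum. The dependency test is one plausible heuristic in the spirit the abstract describes; the actual criterion used in the paper may differ.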
Original language: English
Pages (from-to): 103-110
Publication status: Published - 2005


