Abstract
The state-spaces needed to describe realistic physical embodied agents are extremely large, which presents a serious challenge to classical reinforcement learning schemes. In previous work (Jacob et al., 2005a; Jacob et al., 2005b) we introduced our EMBER (for EMbodiment-Based modulaR) reinforcement learning system, which describes a novel method for decomposing agents into modules based on the agent's embodiment. This modular decomposition factorises the state-space and dramatically improves performance in unknown and dynamic environments. However, while there are great advantages to be gained from a factorised state-space, the question of dependencies cannot be ignored. We present a development of the work reported in (Jacob et al., 2004) which shows, in a simple example, how dependencies may be identified using a heuristic approach. Results show that the system is able quickly to discover and act upon dependencies, even where they are neither simple nor deterministic.
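The abstract does not specify the algorithmic details, but the idea of a factorised, embodiment-based decomposition with a heuristic dependency check can be illustrated with a minimal sketch. The sketch below assumes independent tabular Q-learners per module and uses a running TD-error average as the dependency heuristic; the class names, parameters, and the `flag_dependencies` helper are illustrative assumptions, not the paper's implementation.

```python
import random
from collections import defaultdict

class Module:
    """One embodiment-based module learning over its own local state factor.

    A hypothetical reconstruction: each module runs independent tabular
    Q-learning on its slice of the factorised state-space.
    """
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)        # Q-values keyed by (state, action)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.pred_error = 0.0              # running mean of |TD error|

    def act(self, state):
        """Epsilon-greedy action selection over this module's local state."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, s, a, r, s2):
        """Standard Q-learning update on the local state factor."""
        best_next = max(self.q[(s2, a2)] for a2 in self.actions)
        td = r + self.gamma * best_next - self.q[(s, a)]
        self.q[(s, a)] += self.alpha * td
        # Track how "surprised" this module is; persistently high error
        # suggests its reward depends on state outside its own factor.
        self.pred_error = 0.99 * self.pred_error + 0.01 * abs(td)

def flag_dependencies(modules, threshold=0.5):
    """Illustrative heuristic (assumed, not from the paper): modules whose
    prediction error stays high may depend on another module's state."""
    return [name for name, m in modules.items() if m.pred_error > threshold]
```

Under this reading, a dependency that is "neither simple nor deterministic" would still surface as sustained prediction error in the affected module, which the heuristic can then act on, for example by joining the relevant state factors.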
Original language | English |
---|---|
Pages (from-to) | 103-110 |
Journal | TAROS |
Volume | 2005 |
Publication status | Published - 2005 |