Improving Learning for Embodied Agents in Dynamic Environments by State Factorisation

Research output: Contribution to journal › Article › peer-review

Abstract

A new reinforcement learning algorithm designed specifically for robots and embodied systems is described. Conventional reinforcement learning methods intended for learning general tasks suffer from a number of disadvantages in this domain, including slow learning speed, an inability to generalise between states, reduced performance in dynamic environments, and a lack of scalability. Factor-Q, the new algorithm, uses factorised states and actions, coupled with multiple structured rewards, to address these issues. Initial experimental results demonstrate that Factor-Q learns as efficiently in dynamic environments as in static ones, unlike conventional methods. Further, in the specimen task, obstacle avoidance is improved by over two orders of magnitude compared with standard Q-learning.
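
The abstract does not reproduce the algorithm itself, so the sketch below is only a rough illustration of the general idea of factorised Q-learning with structured rewards, not the published Factor-Q specification: one Q-table is kept per state factor, each factor is updated from its own reward signal, and factor values are summed when selecting an action. The class name, the factor decomposition, and the update rule are all illustrative assumptions.

import numpy as np

class FactoredQLearner:
    """Illustrative factorised Q-learner (not the paper's exact Factor-Q)."""

    def __init__(self, factor_sizes, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        # One Q-table per state factor, e.g. (bearing-to-goal, nearest-obstacle).
        self.q_tables = [np.zeros((size, n_actions)) for size in factor_sizes]
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, factored_state):
        # Epsilon-greedy over the summed per-factor action values.
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.n_actions)
        combined = sum(q[s] for q, s in zip(self.q_tables, factored_state))
        return int(np.argmax(combined))

    def update(self, state, action, factor_rewards, next_state):
        # Each factor receives its own structured reward, so what is learned
        # about, say, obstacles is independent of the current goal location.
        for q, s, r, s2 in zip(self.q_tables, state, factor_rewards, next_state):
            td_target = r + self.gamma * np.max(q[s2])
            q[s, action] += self.alpha * (td_target - q[s, action])

In an obstacle-avoidance task of the kind the abstract mentions, one factor might encode a discretised bearing to the goal (rewarded for progress) and another the position of the nearest obstacle (penalised on collision); because each table is small and reusable, such a decomposition is one plausible route to the learning-speed and generalisation gains the abstract claims.
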
Original language: English
Journal: Lecture Notes in Artificial Intelligence (LNAI)
Volume: 2004
Publication status: Published - 2004
