TY - CPAPER
T1 - SuPLE: Robot Learning with Lyapunov Rewards
T2 - 2025 IEEE International Conference on Robotics and Automation (ICRA)
AU - Nguyen, Phu
AU - Polani, Daniel
AU - Tiomkin, Stas
N1 - Accepted as a contributed paper for presentation at the 2025 IEEE International Conference on Robotics and Automation (ICRA), May 19-23, 2025, Atlanta, USA, and for inclusion in the conference proceedings.
PY - 2025/6/6
Y1 - 2025/6/6
AB - The reward function is an essential component of robot learning. The reward directly affects the sample and computational complexity of learning, as well as the quality of the solution. Designing informative rewards requires domain knowledge, which is not always available. We use properties of the dynamics to produce a system-appropriate reward without adding external assumptions. Specifically, we explore an approach that utilizes the Lyapunov exponents of the system dynamics to generate a system-immanent reward. We demonstrate that the 'Sum of the Positive Lyapunov Exponents' (SuPLE) is a strong candidate for the design of such a reward. We develop a computational framework for deriving this reward and demonstrate its effectiveness on classical benchmarks for sample-based stabilization of various dynamical systems. It eliminates the need to start training trajectories at arbitrary states, a practice known as auxiliary exploration. While auxiliary exploration is common in simulated robot learning, it is impractical for real robotic systems, which typically start from natural rest states (e.g., a pendulum hanging at the bottom, or a robot on the ground) and cannot easily be initialized at arbitrary states. Comparing the performance of SuPLE to commonly used reward functions, we observe that the latter fail to find a solution without auxiliary exploration, even for the task of swinging up a double pendulum and keeping it stable in the upright position, a prototypical scenario for multi-linked robots. SuPLE-induced rewards offer a novel route to effective robot learning in typical, as opposed to highly specialized or fine-tuned, scenarios. Our code is publicly available for reproducibility and further research.
KW - cs.RO
KW - cs.AI
UR - https://ieeexplore.ieee.org/xpl/conhome/1000639/all-proceedings
M3 - Conference contribution
BT - Proceedings of the 2025 IEEE International Conference on Robotics and Automation (ICRA)
PB - Institute of Electrical and Electronics Engineers (IEEE)
Y2 - 19 May 2025 through 23 May 2025
ER -
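
For context on the abstract above: below is a minimal, hypothetical sketch of how a SuPLE-style reward might be computed, using the standard QR (Benettin) method to estimate finite-time Lyapunov exponents of the dynamics and summing the positive ones. This is not the authors' released code or framework; the names (step, jacobian, suple_reward) and the horizon/dt parameters are illustrative assumptions.

import numpy as np

def suple_reward(step, jacobian, x0, horizon=50, dt=0.01):
    # Estimate the sum of positive finite-time Lyapunov exponents
    # along a short rollout from state x0.
    #
    # step(x)     -> next state under the (assumed discrete-time) dynamics
    # jacobian(x) -> Jacobian of step at x, an (n, n) matrix
    n = x0.shape[0]
    Q = np.eye(n)              # orthonormal basis of tangent vectors
    log_growth = np.zeros(n)   # accumulated log stretching per direction
    x = x0.copy()
    for _ in range(horizon):
        A = jacobian(x)                      # local linearization at x
        Q, R = np.linalg.qr(A @ Q)           # propagate and re-orthonormalize
        log_growth += np.log(np.abs(np.diag(R)) + 1e-12)
        x = step(x)
    exponents = log_growth / (horizon * dt)  # finite-time Lyapunov exponents
    return float(np.sum(exponents[exponents > 0.0]))  # SuPLE

On one plausible reading of the abstract, a learning agent would evaluate such a quantity at visited states, so that states near unstable configurations (e.g., the upright position of a pendulum) yield higher reward without any hand-designed, task-specific terms.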