Temporal Logic Motion Control using Actor-Critic Methods

Files
1202.2185v2.pdf (1.31 MB)
First author draft
Date
2012
Authors
Ding, X. C.
Wang, J.
Lahijanian, M.
Paschalidis, Ioannis Ch.
Belta, C.A.
Version
OA Version
Citation
X.-C. Ding, J. Wang, M. Lahijanian, I. Ch. Paschalidis, and C. A. Belta. "Temporal Logic Motion Control using Actor-Critic Methods." Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2012, pp. 4687-4692.
Abstract
In this paper, we consider the problem of deploying a robot from a specification given as a temporal logic statement about properties satisfied by the regions of a large, partitioned environment. We assume that the robot has noisy sensors and actuators and model its motion through the regions of the environment as a Markov Decision Process (MDP). The robot control problem then becomes that of finding the control policy that maximizes the probability of satisfying the temporal logic task on the MDP. For a large environment, obtaining the transition probabilities for each state-action pair and solving the resulting optimization problem for the optimal policy are usually not computationally feasible. To address these issues, we propose an approximate dynamic programming framework based on a least-squares temporal difference (LSTD) learning method of the actor-critic type. The framework operates on sample paths of the robot and optimizes a randomized control policy with respect to a small set of parameters; transition probabilities are obtained only when needed. Hardware-in-the-loop simulations confirm that convergence of the parameters translates to an approximately optimal policy.
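To make the actor-critic idea in the abstract concrete, the Python sketch below shows one minimal, hypothetical instance of it: a randomized softmax policy parameterized by a small vector theta (the actor), a linear least-squares TD critic, and transitions that are sampled from a simulator only when needed rather than enumerated in advance. It is not the authors' algorithm or code; the toy corridor MDP, the goal-reaching reward standing in for task satisfaction, and all names and features are assumptions made purely for illustration.

    # Illustrative actor-critic sketch (assumed, not the authors' implementation).
    # A 1-D corridor with noisy actuation stands in for the partitioned environment;
    # reaching the rightmost region stands in for satisfying the temporal logic task.
    import numpy as np

    rng = np.random.default_rng(0)
    N_STATES, ACTIONS = 10, (-1, +1)          # regions and "move left"/"move right"
    GOAL, SLIP = N_STATES - 1, 0.2            # goal region, actuation noise

    def step(s, a):
        """Sample a transition only when needed, instead of building the full MDP."""
        move = a if rng.random() > SLIP else -a
        s2 = int(np.clip(s + move, 0, N_STATES - 1))
        return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

    def features(s, a):
        """Indicator features over (state, action) pairs."""
        phi = np.zeros(N_STATES * len(ACTIONS))
        phi[s * len(ACTIONS) + ACTIONS.index(a)] = 1.0
        return phi

    def policy_probs(theta, s):
        """Randomized (softmax) policy over actions, parameterized by theta."""
        prefs = np.array([theta @ features(s, a) for a in ACTIONS])
        prefs -= prefs.max()
        p = np.exp(prefs)
        return p / p.sum()

    theta = np.zeros(N_STATES * len(ACTIONS))  # actor parameters
    w = np.zeros_like(theta)                   # critic (linear Q) parameters
    A = np.eye(len(w)) * 1e-3                  # regularized least-squares accumulators
    b = np.zeros_like(w)
    gamma, alpha = 0.95, 0.05

    for episode in range(2000):
        s, done, t = 0, False, 0
        while not done and t < 100:
            p = policy_probs(theta, s)
            a = ACTIONS[rng.choice(len(ACTIONS), p=p)]
            s2, r, done = step(s, a)
            phi = features(s, a)
            a2 = ACTIONS[rng.choice(len(ACTIONS), p=policy_probs(theta, s2))]
            phi2 = np.zeros_like(phi) if done else features(s2, a2)
            # Critic: accumulate least-squares TD statistics, then solve for w.
            A += np.outer(phi, phi - gamma * phi2)
            b += r * phi
            w = np.linalg.solve(A, b)
            # Actor: policy-gradient step using the critic's Q estimate.
            grad_log = phi - sum(pi * features(s, aa) for pi, aa in zip(p, ACTIONS))
            theta += alpha * (w @ phi) * grad_log
            s, t = s2, t + 1

    print("Action probabilities one step from the goal:",
          policy_probs(theta, GOAL - 1))

In this simplified setting, convergence of theta to a policy that strongly prefers moving toward the goal plays the role that convergence of the policy parameters plays in the paper's hardware-in-the-loop experiments.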
Description
Technical report that accompanies an ICRA 2012 paper
License
Attribution 4.0 International