Show simple item record

dc.contributor.author: Li, Xiao (en_US)
dc.contributor.author: Belta, Calin (en_US)
dc.date.accessioned: 2018-06-28T17:24:55Z
dc.date.available: 2018-06-28T17:24:55Z
dc.date.issued: 2016
dc.identifier.citation: Xiao Li, Calin Belta. 2016. "A Hierarchical Reinforcement Learning Method for Persistent Time-Sensitive Tasks." CoRR, abs/1606.06355
dc.identifier.uri: https://hdl.handle.net/2144/29726
dc.description.abstract: Reinforcement learning has been applied to many interesting problems, such as the famous TD-Gammon and inverted helicopter flight. However, little effort has been put into developing methods to learn policies for complex persistent tasks and tasks that are time-sensitive. In this paper, we take a step towards solving this problem by using signal temporal logic (STL) as the task specification and taking advantage of the temporal abstraction that the options framework provides. We show via simulation that a relatively easy-to-implement algorithm combining STL and options can learn a satisfactory policy with a small number of training cases. (en_US)
dc.relation.ispartof: CoRR
dc.subject: Artificial intelligence (en_US)
dc.title: A hierarchical reinforcement learning method for persistent time-sensitive tasks (en_US)
dc.type: Article (en_US)
pubs.elements-source: dblp (en_US)
pubs.notes: Embargo: Not known (en_US)
pubs.organisational-group: Boston University (en_US)
pubs.organisational-group: Boston University, College of Engineering (en_US)
pubs.organisational-group: Boston University, College of Engineering, Department of Mechanical Engineering (en_US)

