Abstract:
Learning skills autonomously is a particularly important ability for an autonomous robot. A promising approach is reinforcement learning (RL), where an agent learns a policy through interaction with its environment. One problem of RL algorithms is how to trade off exploration and exploitation. Moreover, multiple tasks also pose a great challenge to robot learning. In this paper, to enhance the performance of RL, a novel learning framework integrating RL with knowledge transfer is proposed. Three basic components are included: 1) probabilistic policy reuse; 2) dynamic model learning; and 3) model-based Q-learning. In this framework, prelearned skills are leveraged for policy reuse and dynamic model learning. In model-based Q-learning, Gaussian process regression is used to approximate the Q-value function so as to suit robot control. The prior knowledge retrieved through knowledge transfer is integrated into the model-based Q-learning to reduce the required learning time. Finally, a human-robot handover experiment is performed to evaluate the learning performance of this framework. Experimental results show that less exploration is needed to obtain a high expected reward, owing to the prior knowledge obtained from knowledge transfer.
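The abstract states that Gaussian process regression approximates the Q-value function in the model-based Q-learning component, but gives no implementation details. The following is a minimal illustrative sketch, not the paper's method: a plain GP regressor over concatenated state-action features, with a squared-exponential kernel. All names, hyperparameters, and the toy Q-value targets are hypothetical.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel between rows of A (n, d) and B (m, d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

class GPQApproximator:
    """GP regression mapping (state, action) features to Q-value estimates."""
    def __init__(self, noise=1e-2, length_scale=1.0):
        self.noise = noise
        self.length_scale = length_scale

    def fit(self, X, y):
        # X: (n, d) state-action features; y: (n,) observed Q-value targets.
        self.X = np.asarray(X, float)
        y = np.asarray(y, float)
        K = rbf_kernel(self.X, self.X, self.length_scale)
        # Posterior mean weights: (K + sigma^2 I)^-1 y
        self.alpha = np.linalg.solve(K + self.noise * np.eye(len(y)), y)

    def predict(self, Xq):
        # GP posterior mean at query points Xq.
        Kq = rbf_kernel(np.asarray(Xq, float), self.X, self.length_scale)
        return Kq @ self.alpha

# Toy usage: 1-D state, two discrete actions, synthetic Q targets.
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(40, 1))
actions = rng.integers(0, 2, size=(40, 1)).astype(float)
X = np.hstack([states, actions])
y = (1.0 - states[:, 0] ** 2) + 0.5 * actions[:, 0]  # action 1 worth +0.5
gp = GPQApproximator()
gp.fit(X, y)
q0 = gp.predict([[0.0, 0.0]])[0]   # Q(s=0, a=0)
q1 = gp.predict([[0.0, 1.0]])[0]   # Q(s=0, a=1)
greedy_action = int(q1 > q0)       # greedy action selection from the GP
```

In such a scheme, the transferred prior knowledge would typically enter as initial training pairs for `fit`, so the agent starts with a non-trivial Q-estimate instead of a blank slate.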
Source:
IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS
ISSN: 2379-8920
Year: 2019
Issue: 1
Volume: 11
Page: 26-35
Impact Factor: 2.667 (JCR@2019); 5.000 (JCR@2023)
ESI Discipline: COMPUTER SCIENCE;
ESI HC Threshold: 162
JCR Journal Grade: 2
CAS Journal Grade: 3
Cited Count:
WoS CC Cited Count: 27
SCOPUS Cited Count: 30
ESI Highly Cited Papers on the List: 0