Paraschos, Alexandros (2017)
Robot Skill Representation, Learning and Control with Probabilistic Movement Primitives.
Technische Universität Darmstadt
Ph.D. Thesis, Primary publication
Copyright Information: CC BY-NC-ND 4.0 International - Creative Commons Attribution-NonCommercial-NoDerivs.
Item Type: | Ph.D. Thesis
---|---
Type of entry: | Primary publication
Title: | Robot Skill Representation, Learning and Control with Probabilistic Movement Primitives
Language: | English
Referees: | Peters, Prof. Dr. Jan; Neumann, Prof. Dr. Gerhard; Calinon, Dr. Sylvain
Date: | 2017
Place of Publication: | Darmstadt
Date of oral examination: | 26 June 2017
Abstract: | Robotic technology has made significant advances in recent years, yet robots have not been fully incorporated into our everyday lives. Current robots execute a set of pre-programmed skills that cannot adapt to environmental changes, and acquiring new skills is difficult and time-consuming. Additionally, current approaches for robot control focus on accurately reproducing a task, but rarely consider safety aspects that could enable robots to share the same environment with humans. In this thesis, we develop a framework that allows robots to rapidly acquire new skills, to adapt skills to environmental changes, and to be controlled accurately and safely. Our framework is based on movement primitives, a well-established concept for representing modular and reusable robot skills. In this thesis, we introduce a novel movement primitive representation that models not only the shape of the movement but also its uncertainty over time. We rely on probability theory, creating a mathematically sound framework that can adapt skills to environmental changes as well as adapt the execution speed online. Our probabilistic framework allows training the robot with imitation learning, significantly speeding up the acquisition of novel skills. Hence, our approach unifies all the significant properties of existing movement primitive representations and additionally provides new properties, such as conditioning and combination of primitives. By modeling the variance of the trajectories, our framework enables standard probabilistic operations to be applied to movement primitives. First, we present a generalization operator that can modify a given trajectory distribution to new situations and improves on current approaches. Second, we define a novel combination operator for the co-activation of multiple primitives, enabling the resulting primitive to solve multiple tasks concurrently. Finally, we demonstrate that smoothly sequencing primitives is simply a special case of movement combination. All the aforementioned operators for our model are derived analytically. In noisy environments, coordinated movements recover better from perturbations than controlling each degree of freedom independently. While many movement primitive representations use time as a reference signal for synchronization, our approach additionally synchronizes complete movement sequences in the full state of the robot. The skill's correlations are encoded in the covariance matrix of our probabilistic model, which we estimate from demonstrations. Furthermore, by encoding the correlations between the state of the robot and force/torque sensors, we demonstrate that our approach improves performance during physical interaction tasks. A movement generation framework would have limited application without a control approach that can reproduce the learned primitives on a physical system. Therefore, we derive two control approaches that are capable of exactly reproducing the encoded trajectory distribution. When the dynamics of the system are known, we derive a model-based stochastic feedback controller. The controller has time-varying feedback gains that depend on the variance of the trajectory distribution, and we compute the gains in closed form. When the dynamics of the system are unknown or difficult to obtain, e.g., during physical interaction scenarios, we propose a model-free controller. This model-free controller has the same structure as the model-based controller, i.e., a stochastic feedback controller with time-varying gains that can also be computed in closed form. Complex robots with redundant degrees of freedom can in principle perform multiple tasks at the same time, for example, reaching for an object with a robotic arm while avoiding an obstacle. However, simultaneously performing multiple tasks using the same degrees of freedom requires combining the control signals from all tasks. We develop a novel prioritization approach that uses the variance of the movement as a priority measure. We demonstrate how task priorities can be obtained from imitation learning and how different primitives can be combined to solve previously unobserved task combinations. Due to the prioritization, we can efficiently learn a combination of tasks without requiring an individual model per task combination. Additionally, existing primitive libraries can be adapted to environmental changes by means of a single primitive, prioritized to compensate for the change; we therefore avoid retraining the entire primitive library. The prioritization controller can still be computed in closed form. (Illustrative code sketches of the conditioning, combination, and prioritization ideas described here follow below this record.) |
URN: | urn:nbn:de:tuda-tuprints-69474
Classification DDC: | 000 Generalities, computers, information > 004 Computer science
Divisions: | 20 Department of Computer Science > Intelligent Autonomous Systems
Date Deposited: | 05 Dec 2017 14:35
Last Modified: | 05 Dec 2017 14:35
URI: | https://tuprints.ulb.tu-darmstadt.de/id/eprint/6947
PPN: | 423591576
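The abstract mentions conditioning a learned trajectory distribution to new situations, such as a via-point. The sketch below illustrates the underlying Gaussian conditioning step for a single degree of freedom with normalized radial basis functions. It is a minimal sketch only: the basis widths, the toy prior, and all names (`rbf_features`, `condition_on_viapoint`) are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def rbf_features(z, n_basis=10, width=0.02):
    """Normalized radial basis features over phase z in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-0.5 * (z - centers) ** 2 / width)
    return phi / phi.sum()

def condition_on_viapoint(mu_w, Sigma_w, z_t, y_star, sigma_star=1e-4, n_basis=10):
    """Condition the weight distribution N(mu_w, Sigma_w) on observing position
    y_star at phase z_t with observation noise sigma_star (Gaussian conditioning)."""
    psi = rbf_features(z_t, n_basis)              # basis vector at phase z_t
    s = sigma_star + psi @ Sigma_w @ psi          # scalar innovation variance
    k = (Sigma_w @ psi) / s                       # gain vector
    mu_new = mu_w + k * (y_star - psi @ mu_w)
    Sigma_new = Sigma_w - np.outer(k, psi @ Sigma_w)
    return mu_new, Sigma_new

# Toy prior over weights (in practice estimated from demonstrations).
n_basis = 10
mu_w = np.zeros(n_basis)
Sigma_w = 0.1 * np.eye(n_basis)

# Require the trajectory to pass through y = 0.5 at phase z = 0.7.
mu_c, Sigma_c = condition_on_viapoint(mu_w, Sigma_w, z_t=0.7, y_star=0.5, n_basis=n_basis)

# Mean of the conditioned trajectory over the phase.
zs = np.linspace(0.0, 1.0, 100)
mean_traj = np.array([rbf_features(z, n_basis) @ mu_c for z in zs])
```

The conditioned mean passes near the via-point, while the reduced covariance reflects the added constraint; far from the via-point the distribution stays close to the prior.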
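The abstract also describes combining primitives by co-activation, with smooth sequencing as a special case. Below is a minimal sketch of blending two Gaussian distributions with time-varying activations via a precision-weighted product. It assumes independent per-time-step Gaussians for a single degree of freedom, which is a simplification of the full trajectory-distribution treatment; the cross-fading activations and toy primitives are illustrative assumptions.

```python
import numpy as np

def blend_gaussians(means, variances, activations):
    """Precision-weighted product of per-time-step Gaussians.
    means, variances, activations: lists of (T,) arrays, one entry per primitive.
    Returns the mean and variance of the blended distribution at each time step."""
    means = np.asarray(means)          # (K, T)
    variances = np.asarray(variances)  # (K, T)
    acts = np.asarray(activations)     # (K, T), non-negative activation weights
    prec = acts / variances                       # activation-scaled precisions
    var_blend = 1.0 / prec.sum(axis=0)
    mu_blend = var_blend * (prec * means).sum(axis=0)
    return mu_blend, var_blend

# Two toy primitives over T time steps; the activations cross-fade from the
# first primitive to the second, which smoothly sequences them.
T = 100
t = np.linspace(0.0, 1.0, T)
mu1, var1 = np.zeros(T), 0.02 * np.ones(T)   # primitive 1: hold position 0
mu2, var2 = np.ones(T), 0.02 * np.ones(T)    # primitive 2: hold position 1
a1, a2 = 1.0 - t, t                          # cross-fading activations

mu_blend, var_blend = blend_gaussians([mu1, mu2], [var1, var2], [a1, a2])
```

With constant activations the product solves both tasks where their distributions overlap; with cross-fading activations the blend transitions smoothly from one primitive to the next, which is the sequencing-as-combination idea mentioned in the abstract.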
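Finally, the abstract describes prioritizing tasks by using the variance of the movement as a priority measure. The thesis derives the prioritized controller in closed form; the snippet below only illustrates the general intuition of weighting per-task control signals by inverse variance. The mixing rule, the function name `prioritized_control`, and the toy numbers are assumptions for illustration, not the thesis's formulation.

```python
import numpy as np

def prioritized_control(controls, variances):
    """Mix per-task control signals using inverse variance as a soft priority.
    controls: (K, D) desired control signals, one row per task.
    variances: (K, D) predicted variances of each task's primitive; low variance
    (a confidently demonstrated task) yields high priority for that dimension.
    Returns the (D,) blended control signal."""
    w = 1.0 / np.asarray(variances)           # soft priorities
    w = w / w.sum(axis=0, keepdims=True)      # normalize per dimension
    return (w * np.asarray(controls)).sum(axis=0)

# Two tasks sharing the same 3 degrees of freedom: a reaching task that is
# confident (low variance) in the first two joints, and an obstacle-avoidance
# task that is confident in the third joint.
u_reach = np.array([0.8, -0.3, 0.0])
u_avoid = np.array([0.0, 0.0, 0.5])
var_reach = np.array([0.01, 0.01, 1.0])
var_avoid = np.array([1.0, 1.0, 0.01])

u = prioritized_control([u_reach, u_avoid], [var_reach, var_avoid])
```

Each task dominates the degrees of freedom in which its demonstrations showed little variability, so the tasks can share the redundant joints without an explicit hard hierarchy.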