This paper introduces a benchmark framework to evaluate the performance of reaching motion generation approaches that learn from demonstrated examples. The system implements ten different performance measures for typical generalization tasks in robotics using open-source MATLAB software. Systematic comparisons are based on a default training data set of human motions, which specifies the respective ground truth. In technical terms, an evaluated motion generation method must compute velocities, given a state provided by the simulation system; the framework is agnostic to how the method computes these velocities or how it learns from the provided demonstrations. The framework focuses on robustness, which is tested statistically by sampling from a set of perturbation scenarios. These perturbations interfere with motion generation and challenge its generalization ability. The benchmark thus helps to identify the strengths and weaknesses of competing approaches, while allowing the user to configure the weightings of the different measures.
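The evaluation protocol described above can be sketched in a few lines: the simulator supplies a state, the evaluated method returns a velocity, and robustness is scored by sampling perturbed trials. This is an illustrative Python sketch only (the actual framework is MATLAB software); the `policy`, `rollout`, and `benchmark` names, the proportional controller, and the single distance-to-target measure are all hypothetical stand-ins, not the paper's interface or its ten measures.

```python
import numpy as np

def policy(state, target):
    # Hypothetical motion generation method: a simple proportional
    # velocity toward the target, standing in for a learned model.
    # The benchmark only requires the state-to-velocity mapping.
    return 0.5 * (target - state)

def rollout(policy, start, target, dt=0.05, steps=200):
    # The simulation system provides the state; the evaluated
    # method returns velocities, which are integrated forward.
    state = np.asarray(start, dtype=float)
    for _ in range(steps):
        state = state + dt * policy(state, target)
    return state

def benchmark(policy, target, n_trials=100, noise=0.5, seed=0):
    # Robustness is tested statistically: sample perturbed start
    # states and report the mean final distance to the target
    # (one example measure; the real framework combines ten
    # weighted measures).
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_trials):
        start = target + rng.normal(scale=noise, size=target.shape)
        final = rollout(policy, start, target)
        errors.append(np.linalg.norm(final - target))
    return float(np.mean(errors))

target = np.array([1.0, 0.0])
score = benchmark(policy, target)
```

A competing method would be evaluated by swapping in its own `policy` callable while the rollout and perturbation sampling stay fixed, which is what makes the comparison systematic.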
Special Issue on Benchmarking Motion Primitives (March 10, 2015)