García, N.; Rosell, J.; Suarez, R. IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. PP, no. 99, pp. 1-10. DOI: 10.1109/TSMC.2017.2756856. Publication date: 2017-10-30. Journal article.
The paper presents a planning procedure that allows an anthropomorphic dual-arm robotic system to perform a manipulation task in a natural human-like way by using demonstrated human movements. The key idea of the proposal is to convert the demonstrated trajectories into attractive potential fields defined over the configuration space and then use an RRT*-based planning algorithm that minimizes a path-cost function designed to bias the tree growth towards the human-demonstrated configurations. The paper presents a description of the proposed approach as well as results from a conceptual and a real application example, the latter using a real anthropomorphic dual-arm robotic system. A path-quality measure, based on first-order synergies (correlations between joint velocities) obtained from real human movements, is also proposed and used for evaluation and comparison purposes.
The obtained results show that the paths obtained with the proposed procedure are more human-like.
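The core mechanism above is a path-cost function that penalizes configurations far from the human demonstrations, so that RRT* growth is attracted toward them. A minimal sketch of such a cost, assuming Gaussian attractive wells around the demonstrated configurations (the paper only states that demonstrations define attractive potential fields; the well shape, `sigma`, and the weighting scheme here are illustrative):

```python
import numpy as np

def attractive_potential(q, demo_configs, sigma=0.5):
    """Potential at configuration q: near zero close to a demonstrated
    configuration, approaching one far away.

    demo_configs: (N, d) array of human-demonstrated configurations.
    Gaussian wells are a hypothetical modeling choice.
    """
    d2 = np.sum((demo_configs - q) ** 2, axis=1)
    return 1.0 - np.max(np.exp(-d2 / (2 * sigma ** 2)))

def edge_cost(q_a, q_b, demo_configs, w=1.0, n_samples=5):
    """Cost of a tree edge: Euclidean length scaled by the mean potential
    sampled along the edge. Minimizing this in an RRT*-style planner
    biases the tree toward the demonstrated regions."""
    length = np.linalg.norm(q_b - q_a)
    ts = np.linspace(0.0, 1.0, n_samples)
    pot = np.mean([attractive_potential(q_a + t * (q_b - q_a), demo_configs)
                   for t in ts])
    return length * (1.0 + w * pot)
```

With this cost, two edges of equal length can have very different costs: an edge running close to the demonstrations is cheap, one far from them is roughly twice as expensive (for `w=1`), which is the bias the planner exploits.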
Physics-based motion planning is a challenging task, since it requires computing the robot motions while allowing possible interactions with (some of) the obstacles in the environment. Kinodynamic motion planners equipped with a dynamics engine acting as state propagator are usually used for this purpose. Difficulties arise in setting adequate forces for the interactions, and because these interactions may change the pose of the manipulatable obstacles, thus either facilitating or preventing the finding of a solution path. The use of knowledge can alleviate these difficulties. This paper proposes the use of an enhanced state propagator composed of a dynamics engine and a low-level geometric reasoning process that determines how to interact with the objects, i.e., from where and with which forces. The proposal, called κ-PMP, can be used with any kinodynamic planner, thus giving rise to, e.g., κ-RRT. The approach also includes a preprocessing step that infers, from semantic abstract knowledge described in terms of an ontology, the manipulation knowledge required by the reasoning process. The proposed approach has been validated with several examples involving a holonomic mobile robot, a robot with differential constraints and a serial manipulator, and benchmarked using several state-of-the-art kinodynamic planners. The results showed a significant reduction in power consumption with respect to simple physics-based planning, as well as an improvement in the success rate and in the quality of the solution paths.
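The enhanced state propagator described above couples a dynamics engine with a geometric reasoning step that decides how to push manipulatable obstacles. A minimal sketch of that coupling, where all names are illustrative (the actual system uses a real physics engine and ontology-derived manipulation knowledge, and its reasoning rules are richer than the centroid-push rule assumed here):

```python
import numpy as np

class EnhancedPropagator:
    """Sketch of a kappa-PMP-style state propagator: before the dynamics
    engine integrates the motion, a low-level geometric reasoning step
    decides how to interact with an obstacle (from where, with which
    force), capped by force limits taken from manipulation knowledge."""

    def __init__(self, dynamics_step, max_push_force):
        self.dynamics_step = dynamics_step    # wrapper around a physics engine
        self.max_push_force = max_push_force  # from manipulation knowledge

    def reason_interaction(self, state, obstacle):
        """Hypothetical rule: push along the line toward the obstacle's
        centroid with the maximum allowed force."""
        direction = obstacle["centroid"] - state["position"]
        n = np.linalg.norm(direction)
        if n == 0.0:
            return np.zeros_like(direction)
        return self.max_push_force * direction / n

    def propagate(self, state, control, obstacle, dt):
        """Standard propagate() interface of a kinodynamic planner,
        augmented with the reasoned interaction force."""
        force = self.reason_interaction(state, obstacle)
        return self.dynamics_step(state, control, force, dt)
```

Because the propagator keeps the usual `propagate(state, control, dt)` shape, any sampling-based kinodynamic planner can use it unchanged, which is what turns, e.g., RRT into κ-RRT.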
One of the main current foci of robotics is providing robots with a high degree of autonomy. A fundamental step in this direction is giving them the ability to plan in discrete and continuous spaces in order to find the motions required to complete a complex task. Along this line, some recent approaches describe tasks with Linear Temporal Logic (LTL) and reason on discrete actions to guide sampling-based motion planning, with the aim of finding dynamically feasible motions that satisfy the temporal-logic task specifications. The present paper proposes an LTL planning approach that, on the one hand, is enhanced with the use of ontologies to describe and reason about the task and, on the other hand, includes physics-based motion planning to allow the purposeful manipulation of objects. The proposal has been implemented and is illustrated with didactic examples with a mobile robot in simple scenarios where some of the goals are occupied by objects that must be removed in order to fulfill the task.
The paper deals with the problem of motion planning for anthropomorphic dual-arm robots. It introduces a measure of the similarity between the movements needed to solve two given tasks. Using this measure during planning to select proper arm synergies for a given task improves both the planning performance and the quality of the resulting plan.
Current approaches do not allow robots to execute a task and simultaneously convey emotions to users through their body motions. This paper explores the capabilities of the Jacobian null space of a humanoid robot to convey emotions. A task-priority formulation has been implemented on a Pepper robot which allows the specification of a primary task (waving gesture, transportation of an object, etc.) and exploits the kinematic redundancy of the robot to convey emotions to humans as a lower-priority task. The emotions, defined by Mehrabian as points in the pleasure–arousal–dominance space, generate intermediate motion features (jerkiness, activity and gaze) that carry the emotional information. A mapping from these features to the joints of the robot is presented. A user study has been conducted in which emotional motions were shown to 30 participants. The results show that happiness and sadness are very well conveyed to the user, calm is moderately well conveyed, and fear is not well conveyed. An analysis of the dependencies between the motion features and the emotions perceived by the participants shows that activity correlates positively with arousal, jerkiness is not perceived by the user, and gaze conveys dominance when activity is low. The results indicate a strong influence of the most energetic motions of the emotional task and point out new directions for further research. Overall, the results show that the null-space approach can be regarded as a promising means to convey emotions as a lower-priority task.
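The task-priority formulation described above is the standard redundancy-resolution scheme $\dot{q} = J^{+}\dot{x} + (I - J^{+}J)\dot{q}_0$: the primary task is tracked exactly, while the secondary (emotional) joint motion is projected into the null space of the task Jacobian so it cannot perturb the task. A minimal sketch of that projection (the mapping from pleasure–arousal–dominance emotions to the secondary joint velocities is the paper's contribution and is not reproduced here):

```python
import numpy as np

def task_priority_velocities(J, x_dot_task, q_dot_emotion):
    """Joint velocities with strict task priority.

    J             : (m, n) task Jacobian of the primary task (e.g. waving)
    x_dot_task    : (m,)   desired primary task-space velocity
    q_dot_emotion : (n,)   secondary joint velocities carrying the emotion

    The secondary term is filtered through the null-space projector
    I - pinv(J) @ J, so J @ result still equals x_dot_task exactly.
    """
    J_pinv = np.linalg.pinv(J)        # Moore-Penrose pseudoinverse
    n = J.shape[1]
    N = np.eye(n) - J_pinv @ J        # null-space projector of the task
    return J_pinv @ x_dot_task + N @ q_dot_emotion
```

Any component of the emotional motion that would interfere with the primary task is annihilated by the projector; only the redundant degrees of freedom express the emotion, which is exactly the "lower-priority task" behavior the abstract describes.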
This work presents a knowledge-based task and motion planning framework based on a version of the FastForward task planner. A reasoning process on symbolic literals, in terms of knowledge and geometric information about the workspace, together with the use of a physics-based motion planner, is proposed to evaluate the applicability and feasibility of manipulation actions and to compute the heuristic values that guide the search. The proposal results in low-cost, physically-feasible plans.