Rozo Castañeda, Leonel
Total activity: 10
Institute
Institute of Robotics and Industrial Informatics
E-mail
leonel.rozo@estudiant.upc.edu
Contact details
UPC directory

Scientific and technological production

  • A robot learning from demonstration framework to perform force-based manipulation tasks (Open access)

     Rozo Castañeda, Leonel; Jimenez Schlegl, Pablo; Torras, Carme
    Intelligent Service Robotics
    Vol. 6, num. 1, p. 33-51
    DOI: 10.1007/s11370-012-0128-9
    Date of publication: 2013
    Journal article


    This paper proposes an end-to-end learning from demonstration framework for teaching force-based manipulation tasks to robots. The strengths of this work are manifold. First, we deal with the problem of learning through force perceptions exclusively. Second, we propose to exploit haptic feedback both as a means for improving teacher demonstrations and as a human–robot interaction tool, establishing a bidirectional communication channel between the teacher and the robot, in contrast to works using kinesthetic teaching. Third, we address the well-known "what to imitate?" problem from a different point of view, based on the mutual information between perceptions and actions. Lastly, the teacher's demonstrations are encoded using a Hidden Markov Model, and the robot execution phase is developed by implementing a modified version of Gaussian Mixture Regression that uses implicit temporal information from the probabilistic model, needed when tackling tasks with ambiguous perceptions. Experimental results show that the robot is able to learn and reproduce two different manipulation tasks, with a performance comparable to the teacher's.

    Postprint (author’s final draft post-refereeing)
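
    The reproduction stage described in the abstract above relies on Gaussian Mixture Regression: a mixture model is fitted over the joint space of perceptions and actions, and the action is retrieved by conditioning on the current perception. The sketch below shows only the standard conditioning step with illustrative variable names; it is not the paper's modified variant, which additionally exploits implicit temporal information from the HMM.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmr(x, priors, means, covs, in_dim):
    """Expected output E[output | input = x] under a joint GMM.

    priors: (K,) mixture weights
    means:  (K, D) component means over the joint [input, output] vector
    covs:   (K, D, D) component covariances
    in_dim: number of leading dimensions forming the input block
    """
    K, D = means.shape
    i, o = slice(0, in_dim), slice(in_dim, D)

    # Responsibility of each component given the observed input
    h = np.array([priors[k] * multivariate_normal.pdf(x, means[k, i], covs[k][i, i])
                  for k in range(K)])
    h /= h.sum()

    # Blend the component-wise conditional means
    y = np.zeros(D - in_dim)
    for k in range(K):
        Sxx, Sxo = covs[k][i, i], covs[k][i, o]
        y += h[k] * (means[k, o] + Sxo.T @ np.linalg.solve(Sxx, x - means[k, i]))
    return y
```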

  • Force-based robot learning of pouring skills using parametric hidden Markov models (Open access)

     Rozo Castañeda, Leonel; Jimenez Schlegl, Pablo; Torras, Carme
    International Workshop on Robot Motion and Control
    p. 227-232
    DOI: 10.1109/RoMoCo.2013.6614613
    Presentation date: 2013
    Conference presentation


    Robot learning from demonstration faces new challenges when applied to tasks in which forces play a key role. Pouring liquid from a bottle into a glass is one such task, where not only does a motion with a certain force profile need to be learned, but the motion is also subtly conditioned by the amount of liquid in the bottle. In this paper, the pouring skill is taught to a robot as follows. In a training phase, the human teleoperates the robot using a haptic device, and data from the demonstrations are statistically encoded by a parametric hidden Markov model, which compactly encapsulates the relation between the task parameter (dependent on the bottle weight) and the force-torque traces. Gaussian mixture regression is then used at the reproduction stage for retrieving the suitable robot actions based on the force perceptions. Computational and experimental results show that the robot is able to learn to pour drinks using the proposed framework, outperforming other approaches such as classical hidden Markov models in that it requires less training, yields more compact encodings and shows better generalization capabilities.

    Postprint (author’s final draft)
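
    The parametric HMM mentioned in the abstract differs from a classical HMM in that each hidden state's output Gaussian depends on a task parameter (here, related to the bottle weight). A minimal way to picture this dependence is an affine map from the task parameter to the state mean, fitted across demonstrations; the snippet below is only an illustration of that idea with made-up numbers and names, not the paper's training procedure.

```python
import numpy as np

def fit_parametric_means(thetas, state_means):
    """Least-squares fit of mu(theta) = W @ [theta, 1] for one hidden state.

    thetas:      (M,) task parameter observed in each of M demonstrations
    state_means: (M, D) mean observation of that state in each demonstration
    Returns W with shape (D, 2).
    """
    Phi = np.column_stack([thetas, np.ones_like(thetas)])    # (M, 2) regressors
    X, *_ = np.linalg.lstsq(Phi, state_means, rcond=None)    # solves Phi @ X ~ state_means
    return X.T

def predict_mean(W, theta):
    """Gaussian mean of the state for a new value of the task parameter."""
    return W @ np.array([theta, 1.0])

# Example with made-up numbers: the state mean shifts with the bottle weight.
thetas = np.array([0.2, 0.5, 0.8])                     # demonstrated weights (kg)
state_means = np.array([[1.0, 0.1],
                        [1.6, 0.4],
                        [2.2, 0.7]])                   # per-demo state means (D = 2)
W = fit_parametric_means(thetas, state_means)
print(predict_mean(W, 0.65))                           # interpolated mean for 0.65 kg
```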

  • Learning collaborative impedance-based robot behaviors (Open access)

     Rozo Castañeda, Leonel; Calinon, Sylvain; Caldwell, Darwin; Jimenez Schlegl, Pablo; Torras, Carme
    AAAI Conference on Artificial Intelligence
    p. 1422-1428
    Presentation date: 2013
    Conference presentation


    Research in learning from demonstration has focused on transferring movements from humans to robots. However, a need is arising for robots that do not just replicate the task on their own, but that also interact with humans in a safe and natural way to accomplish tasks cooperatively. Robots with variable impedance capabilities open the door to new challenging applications, where the learning algorithms must be extended by encapsulating force and vision information. In this paper we propose a framework to transfer impedance-based behaviors to a torque-controlled robot by kinesthetic teaching. The proposed model encodes the examples as a task-parameterized statistical dynamical system, where the robot impedance is shaped by estimating virtual stiffness matrices from the set of demonstrations. A collaborative assembly task is used as a testbed. The results show that the model can be used to modify the robot impedance during task execution to facilitate the collaboration, by triggering stiff and compliant behaviors in an on-line manner to adapt to the user's actions.

    Postprint (author’s final draft)
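
    The impedance shaping described in the abstract amounts to recovering stiffness-like gains from the demonstrations. As a rough, simplified illustration only (assuming a plain spring model f ≈ -K (x - x_ref), which is much simpler than the task-parameterized model actually used in the paper), a virtual stiffness matrix can be fitted by least squares from recorded positions and sensed forces:

```python
import numpy as np

def estimate_stiffness(positions, forces, x_ref):
    """Least-squares fit of K in f = -K (x - x_ref).

    positions: (N, 3) end-effector positions recorded along the demonstrations
    forces:    (N, 3) interaction forces sensed at those positions
    x_ref:     (3,)   reference (attractor) position
    Returns a symmetrized 3x3 stiffness estimate.
    """
    E = positions - x_ref                                 # displacements from the attractor
    K_T, *_ = np.linalg.lstsq(E, -forces, rcond=None)     # solves E @ K^T ~ -forces
    K = K_T.T
    return 0.5 * (K + K.T)                                # keep the estimate symmetric
```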

  • Dynamically consistent probabilistic model for robot motion learning (Open access)

     Pardo Ayala, Diego Esteban; Rozo Castañeda, Leonel; Alenyà Ribas, Guillem; Torras, Carme
    Workshop on Learning and Interaction in Haptic Robots
    p. 1-2
    Presentation date: 2012
    Conference presentation


    This work presents a probabilistic model for learning robot tasks from human demonstrations using kinesthetic teaching. The difference with respect to previous works is that a complete state of the robot is used to obtain a consistent representation of the dynamics of the task. The learning framework is based on hidden Markov models and Gaussian mixture regression, used for encoding and reproducing the skills. Benefits of the proposed approach are shown in the execution of a simple self-crossing trajectory by a 7-DoF manipulator.
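
    The "complete state" idea in the abstract above can be illustrated with a self-crossing trajectory: position alone is ambiguous at the crossing, while the pair (position, velocity) is not. The snippet below, with an assumed finite-difference velocity estimate and a synthetic figure-eight, shows how such an augmented state could be built before HMM/GMR training; it is a sketch, not the paper's implementation.

```python
import numpy as np

def augment_with_velocity(positions, dt):
    """Turn an (N, D) position trajectory into (N-1, 2D) [position, velocity] states."""
    velocities = np.diff(positions, axis=0) / dt        # forward finite differences
    return np.hstack([positions[:-1], velocities])

# Example: a figure-eight crosses itself at the origin; the positions coincide there,
# but the augmented states differ because the velocities point in different directions.
t = np.linspace(0.0, 2.0 * np.pi, 200)
fig_eight = np.column_stack([np.sin(t), np.sin(t) * np.cos(t)])
states = augment_with_velocity(fig_eight, dt=t[1] - t[0])
print(states.shape)                                     # (199, 4): x, y, vx, vy
```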

  • Robot learning from demonstration in the force domain

     Rozo Castañeda, Leonel; Jimenez Schlegl, Pablo; Torras, Carme
    IJCAI Workshop on Agents Learning Interactively from Human Teachers
    p. 1-6
    Presentation date: 2011
    Conference presentation


    Researchers are becoming aware of the importance of other information sources besides visual data in robot learning by demonstration (LbD). Force-based perceptions are shown to convey very relevant information – missed by visual and position sensors – for learning specific tasks. In this paper, we review some recent works using forces as input data in LbD and human-robot interaction (HRI) scenarios, and propose a complete learning framework for teaching force-based manipulation skills to a robot through a haptic device. We suggest using haptic interfaces not only as a demonstration tool but also as a communication channel between the human and the robot, getting the teacher more involved in the teaching process by experiencing the force signals sensed by the robot. Within the proposed framework, we provide solutions for treating force signals, extracting relevant information about the task, encoding the training data and generalizing to perform successfully under unknown conditions.

    Postprint (author’s final draft)
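
    Among the "solutions for treating force signals" mentioned above, a typical first step is to low-pass filter the raw force/torque stream so that vibrations do not mask the task-relevant profile. The filter below (a zero-phase Butterworth via SciPy, applied to synthetic data) is an assumed, generic choice for illustration; the paper does not prescribe this particular filter.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_ft(ft_signal, fs, cutoff_hz=5.0, order=4):
    """Zero-phase low-pass filtering of a force/torque stream.

    ft_signal: (N, C) raw samples (e.g. C = 6 for Fx, Fy, Fz, Tx, Ty, Tz)
    fs:        sampling rate in Hz
    """
    b, a = butter(order, cutoff_hz / (fs / 2.0))    # normalized cutoff, low-pass by default
    return filtfilt(b, a, ft_signal, axis=0)        # forward-backward pass: no phase lag

# Example on a synthetic 500 Hz stream with vibration-like noise added
fs = 500.0
t = np.arange(0.0, 2.0, 1.0 / fs)
clean = np.outer(np.sin(2 * np.pi * 0.5 * t), np.ones(6))
noisy = clean + 0.2 * np.random.default_rng(0).normal(size=clean.shape)
smoothed = lowpass_ft(noisy, fs)
```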

  • Robot learning from demonstration of force-based tasks with multiple solution trajectories

     Rozo Castañeda, Leonel; Jimenez Schlegl, Pablo; Torras, Carme
    International Conference on Advanced Robotics
    p. 124-129
    DOI: 10.1109/ICAR.2011.6088633
    Presentation date: 2011
    Conference presentation


  • GARNICS: Gardening with a Cognitive System (FP7-ICT-247947)

     Moreno Noguer, Francesc d'Assis; Torras, Carme; Agostini, Alejandro Gabriel; Husain, Syed Farzad; Dellen, Babette Karla Margarete; Alenyà Ribas, Guillem; Jimenez Schlegl, Pablo; Thomas Arroyo, Federico; Rozo Castañeda, Leonel; Foix Salmeron, Sergi
    Competitive project


  • Learning force-based robot skills from haptic demonstration

     Rozo Castañeda, Leonel; Jimenez Schlegl, Pablo; Torras, Carme
    International Conference of the Catalan Association for Artificial Intelligence
    p. 331-341
    DOI: 10.3233/978-1-60750-643-0-331
    Presentation date: 2010
    Conference presentation


  • Sharpening haptic inputs for teaching a manipulation skill to a robot (Open access)

     Rozo Castañeda, Leonel; Jimenez Schlegl, Pablo; Torras, Carme
    IEEE International Conference on Applied Bionics and Biomechanics
    p. 331-340
    Presentation date: 2010
    Conference presentation


    Gaussian mixture-based learning algorithms are suitable strategies for trajectory learning and skill acquisition in the context of programming by demonstration (PbD). Input streams other than visual information, as used in most applications to date, prove quite useful in trajectory learning experiments where visual sources are not available. In this work we have used force/torque feedback through a haptic device for teaching a teleoperated robot to empty a rigid container. Structure vibrations and container inertia appeared to considerably disrupt the sensing process, so a filtering algorithm had to be devised. Moreover, some input variables seemed much more relevant to the particular task to be learned than others, which led us to analyze the training data in order to select the relevant features through principal component analysis and a mutual information criterion. Then, a batch version of GMM/GMR [1], [2] was implemented using different training datasets (original, and data pre-processed through PCA and MI). Tests in which the teacher was instructed to follow a strategy, compared to others in which he was not, led to useful conclusions that help devise the next research stages.

    Postprint (author’s final draft)
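
    The feature-selection step described in the abstract can be sketched with off-the-shelf tools: project the force/torque channels onto principal components, or rank the raw channels by their mutual information with the teacher's action. scikit-learn and the synthetic data below are used purely for illustration and are not implied by the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
ft = rng.normal(size=(500, 6))                          # stand-in force/torque channels
action = 0.8 * ft[:, 2] + 0.1 * rng.normal(size=500)    # synthetic teacher command

# Route 1: keep the principal components that explain 95% of the variance
pca = PCA(n_components=0.95).fit(ft)
ft_pca = pca.transform(ft)

# Route 2: rank the raw channels by mutual information with the teacher's action
mi = mutual_info_regression(ft, action)
ranking = np.argsort(mi)[::-1]                          # most informative channel first
print("channels ranked by MI:", ranking)
```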

  • Robot learning of container-emptying skills through haptic demonstration

     Rozo Castañeda, Leonel; Jimenez Schlegl, Pablo; Torras, Carme
    Date: 2009
    Report
