Alenyà Ribas, Guillem
Total activity: 44
Research group
ROBiri - IRI Robotics Group
Institute
Institute of Robotics and Industrial Informatics
E-mail
guillem.alenya@upc.edu
Contact details
UPC directory


Scientific and technological production

1 to 44 of 44 results
  • The MoveIt motion planning framework configuration for the IRI WAM robot  Open access

     Gonzàlez Esteve, Adrià; Alenyà Ribas, Guillem
    Date: 2013
    Report

    This report describes how to set up and prepare WAM robots to use the motion planning and obstacle avoidance facilities of the MoveIt! framework. In particular, it determines the configurations and API function calls needed to set the target of the robot arm with a joint state and to add collision objects programmatically, without using the graphical interface. Supporting page with videos: http://www.iri.upc.edu/groups/perception/MoveItForIRIWam

  • Robot de telepresencia: imatge i so  Open access

     Serra Ortega, Joan; Alenyà Ribas, Guillem
    Date: 2013
    Report

    The Helena robot is designed to be a telepresence robot, meant to let different people in a building communicate without having to move.

  • Robotized plant probing: leaf segmentation utilizing time-of-flight data  Open access

     Alenyà Ribas, Guillem; Dellen, Babette Karla Margarete; Foix Salmeron, Sergi; Torras, Carme
    IEEE robotics and automation magazine
    Vol. 20, num. 3, p. 50-59
    DOI: 10.1109/MRA.2012.2230118
    Date of publication: 2013
    Journal article

    Supervision of long-lasting extensive botanic experiments is a promising robotic application that some recent technological advances have made feasible. Plant modeling for this application has strong demands, particularly in what concerns three-dimensional (3-D) information gathering and speed.

  • Planning surface cleaning tasks by learning uncertain drag actions outcomes  Open access

     Martínez Martínez, David; Alenyà Ribas, Guillem; Torras, Carme
    ICAPS Workshop on Planning and Robotics
    p. 106-111
    Presentation's date: 2013
    Presentation of work at congresses

    A method to perform cleaning tasks is presented where a robot manipulator autonomously grasps a textile and uses different dragging actions to clean a surface. Actions are imprecise, and probabilistic planning is used to select the best sequence of actions. The characterization of such actions is complex because the initial autonomous grasp of the textile introduces differences in the initial conditions that change the efficacy of the robot cleaning actions. We demonstrate that the action outcome probabilities can be learned very fast while the task is being executed, so as to progressively improve robot performance. The learner adds only a little overhead to the system compared to the improvements obtained. Experiments with a real robot show that the most effective plan varies depending on the initial grasp, and that plans become better after only a few learning iterations.

    Postprint (author’s final draft)
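
The fast on-line learning of action-outcome probabilities described in the abstract can be illustrated with a Laplace-smoothed frequency estimate. This is only a sketch of the general idea, not the authors' implementation; the action and outcome labels are hypothetical:

```python
from collections import defaultdict

class OutcomeLearner:
    """On-line, Laplace-smoothed estimate of P(outcome | action)."""

    def __init__(self, outcomes):
        self.outcomes = list(outcomes)
        self.counts = defaultdict(lambda: {o: 0 for o in self.outcomes})

    def update(self, action, outcome):
        # Record one observed execution of `action` with its outcome.
        self.counts[action][outcome] += 1

    def probability(self, action, outcome):
        c = self.counts[action]
        total = sum(c.values())
        # Add-one smoothing keeps unseen outcomes at non-zero probability.
        return (c[outcome] + 1) / (total + len(self.outcomes))

# Hypothetical drag action observed while the task executes.
learner = OutcomeLearner(["cleaned", "partially_cleaned", "failed"])
for _ in range(8):
    learner.update("drag_left", "cleaned")
learner.update("drag_left", "failed")
```

After only nine executions the estimate P(cleaned | drag_left) = (8 + 1) / (9 + 3) = 0.75 is already informative, which is the property that lets a probabilistic planner improve after a few iterations.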

  • External force estimation during compliant robot manipulation  Open access

     Colomé Figueras, Adrià; Pardo Ayala, Diego Esteban; Alenyà Ribas, Guillem; Torras, Carme
    IEEE International Conference on Robotics and Automation
    p. 3535-3540
    DOI: 10.1109/ICRA.2013.6631072
    Presentation's date: 2013
    Presentation of work at congresses

    This paper presents a method to estimate external forces exerted on a manipulator, avoiding the use of a force sensor. The method is based on task-oriented dynamics model learning and a robust disturbance state observer. The combination of both leads to an efficient torque observer that can be incorporated into any control scheme. The use of a learning-based approach avoids the need for analytical models of friction or Coriolis dynamics effects.

    Postprint (author’s final draft)
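
The core idea — attributing the residual between measured joint torque and a learned dynamics model to external forces — can be sketched for a single joint. The dynamics model below is a toy stand-in for the learned task-oriented model, not the paper's observer:

```python
def external_torque(tau_measured, q, dq, ddq, model):
    """Attribute the residual between measured torque and the
    dynamics model's prediction to external contact forces."""
    return tau_measured - model(q, dq, ddq)

# Toy stand-in for the learned model of a single joint:
# inertia plus viscous friction (no analytical Coriolis term needed).
inertia, friction = 0.8, 0.15
model = lambda q, dq, ddq: inertia * ddq + friction * dq

# Simulated measurement that includes a 0.5 Nm external push.
tau_measured = inertia * 2.0 + friction * 1.0 + 0.5
tau_ext = external_torque(tau_measured, q=0.3, dq=1.0, ddq=2.0, model=model)
```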

  • External force estimation for textile grasp detection

     Colomé Figueras, Adrià; Pardo Ayala, Diego Esteban; Alenyà Ribas, Guillem; Torras, Carme
    IROS Workshop Beyond Robot Grasping: Modern Approaches for Learning Dynamic Manipulation
    p. 1
    Presentation's date: 2012
    Presentation of work at congresses

  • POMDP approach to robotized clothes separation

     Monsó Purtí, Pol; Alenyà Ribas, Guillem; Torras, Carme
    IEEE/RSJ International Conference on Intelligent Robots and Systems
    p. 1324-1329
    DOI: 10.1109/IROS.2012.6386011
    Presentation's date: 2012
    Presentation of work at congresses

  • Object detection methods for robot grasping: Experimental assessment and tuning  Open access

     Rigual Aparici, Ferran; Ramisa Ayats, Arnau; Alenyà Ribas, Guillem; Torras, Carme
    International Conference of the Catalan Association for Artificial Intelligence
    p. 123-132
    DOI: 10.3233/978-1-61499-139-7-123
    Presentation's date: 2012
    Presentation of work at congresses

    In this work we address the problem of object detection for the purpose of object manipulation in a service robotics scenario. Several implementations of state-of-the-art object detection methods were tested, and the one with the best performance was selected. During the evaluation, three main practical limitations of current methods were identified in relation with long-range object detection, grasping point detection and automatic learning of new objects; and practical solutions are proposed for the last two. Finally, the complete pipeline is evaluated in a real grasping experiment.

  • Robotic leaf probing via segmentation of range data into surface patches

     Alenyà Ribas, Guillem; Dellen, Babette Karla Margarete; Foix Salmeron, Sergi; Torras, Carme
    IROS Workshop on Agricultural Robotics: Enabling Safe, Efficient, Affordable Robots for Food Production (IROS AGROBOTICS)
    p. 1-6
    Presentation's date: 2012
    Presentation of work at congresses

  • Information-gain view planning for free-form object reconstruction with a 3D ToF camera  Open access

     Foix Salmeron, Sergi; Kriegel, Simon; Fuchs, Stefan; Alenyà Ribas, Guillem; Torras, Carme
    International Conference on Advanced Concepts for Intelligent Vision Systems
    p. 36-47
    DOI: 10.1007/978-3-642-33140-4_4
    Presentation's date: 2012
    Presentation of work at congresses

    Active view planning for gathering data from an unexplored 3D complex scenario is a hard and still open problem in the computer vision community. In this paper, we present a general task-oriented approach based on an information-gain maximization that easily deals with such a problem. Our approach consists of ranking a given set of possible actions, based on their task-related gains, and then executing the best-ranked action to move the required sensor. An example of how our approach behaves is demonstrated by applying it over 3D raw data for real-time volume modelling of complex-shaped objects. Our setting includes a calibrated 3D time-of-flight (ToF) camera mounted on a 7 degrees of freedom (DoF) robotic arm. Noise in the sensor data acquisition, which is too often ignored, is here explicitly taken into account by computing an uncertainty matrix for each point, and refining this matrix each time the point is seen again. Results show that, by always choosing the most informative view, a complete model of a 3D free-form object is acquired and also that our method achieves a good compromise between speed and precision.
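
The ranking step can be pictured as scoring each candidate view by the uncertainty it would remove from the current model. The entropy-reduction score below is an illustrative stand-in for the task-related gain used by the authors:

```python
import math

def entropy(p):
    """Binary entropy of a voxel's occupancy probability."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def expected_gain(view, belief):
    # Entropy currently held by the voxels this view would observe;
    # observing them is assumed to drive their uncertainty to zero.
    return sum(entropy(belief[v]) for v in view["visible_voxels"])

def best_view(views, belief):
    # Rank all candidate views and execute the best-ranked one.
    return max(views, key=lambda v: expected_gain(v, belief))

belief = {0: 0.5, 1: 0.5, 2: 0.9, 3: 1.0}  # per-voxel occupancy probability
views = [
    {"name": "front", "visible_voxels": [2, 3]},  # mostly settled voxels
    {"name": "side", "visible_voxels": [0, 1]},   # maximally uncertain voxels
]
```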

  • Characterization of textile grasping experiments

     Alenyà Ribas, Guillem; Ramisa Ayats, Arnau; Moreno Noguer, Francesc d'Assis; Torras, Carme
    IEEE International Conference on Robotics and Automation
    p. 1-6
    Presentation's date: 2012
    Presentation of work at congresses

  • Plant leaf imaging using time of flight camera under sunlight, shadow and room conditions  Open access

     Kazmi, Wajahat; Alenyà Ribas, Guillem; Foix Salmeron, Sergi
    IEEE International Symposium on Robotic and Sensors Environments
    p. 192-197
    DOI: 10.1109/ROSE.2012.6402615
    Presentation's date: 2012
    Presentation of work at congresses

    In this article, we analyze the effects of ambient light on Time of Flight (ToF) depth imaging for a plant’s leaf in sunlight, shadow and room conditions. ToF imaging is sensitive to ambient light and we try to find the best possible integration times (IT) for each condition. This is important in order to optimize camera calibration. Our analysis is based on several statistical metrics estimated from the ToF data. We explain the estimation of the metrics and propose a method of predicting the deteriorating behavior of the data in each condition using camera flags. Finally, we also propose a method to improve the quality of a ToF image taken in a mixed condition having different ambient light exposures.

  • Dynamically consistent probabilistic model for robot motion learning  Open access

     Pardo Ayala, Diego Esteban; Rozo Castañeda, Leonel; Alenyà Ribas, Guillem; Torras, Carme
    Workshop on Learning and Interaction in Haptic Robots
    p. 1-2
    Presentation's date: 2012
    Presentation of work at congresses

    This work presents a probabilistic model for learning robot tasks from human demonstrations using kinesthetic teaching. The difference with respect to previous works is that a complete state of the robot is used to obtain a consistent representation of the dynamics of the task. The learning framework is based on hidden Markov models and Gaussian mixture regression, used for coding and reproducing the skills. Benefits of the proposed approach are shown in the execution of a simple self-crossing trajectory by a 7-DoF manipulator.

  • Using depth and appearance features for informed robot grasping of highly wrinkled clothes  Open access

     Ramisa Ayats, Arnau; Alenyà Ribas, Guillem; Moreno Noguer, Francesc d'Assis; Torras, Carme
    IEEE International Conference on Robotics and Automation
    p. 1703-1708
    DOI: 10.1109/ICRA.2012.6225045
    Presentation's date: 2012
    Presentation of work at congresses

    Detecting grasping points is a key problem in cloth manipulation. Most current approaches follow a multiple regrasp strategy for this purpose, in which clothes are sequentially grasped from different points until one of them yields a desired configuration. In this paper, by contrast, we circumvent the need for multiple regrasps by building a robust detector that identifies the grasping points, generally in one single step, even when clothes are highly wrinkled. In order to handle the large variability a deformed cloth may have, we build a Bag of Features based detector that combines appearance and 3D geometry features. An image is scanned using a sliding window with a linear classifier, and the candidate windows are refined using a non-linear SVM and a “grasp goodness” criterion to select the best grasping point. We demonstrate our approach detecting collars in deformed polo shirts, using a Kinect camera. Experimental results show a good performance of the proposed method not only in identifying the same trained textile object part under severe deformations and occlusions, but also in finding the corresponding part in other clothes, exhibiting a degree of generalization.

  • Single image 3D human pose estimation from noisy observations  Open access

     Simo Serra, Edgar; Ramisa Ayats, Arnau; Torras, Carme; Alenyà Ribas, Guillem; Moreno Noguer, Francesc d'Assis
    IEEE Conference on Computer Vision and Pattern Recognition
    p. 2673-2680
    DOI: 10.1109/CVPR.2012.6247988
    Presentation's date: 2012
    Presentation of work at congresses

    Markerless 3D human pose detection from a single image is a severely underconstrained problem because different 3D poses can have similar image projections. In order to handle this ambiguity, current approaches rely on prior shape models that can only be correctly adjusted if 2D image features are accurately detected. Unfortunately, although current 2D part detector algorithms have shown promising results, they are not yet accurate enough to guarantee a complete disambiguation of the 3D inferred shape. In this paper, we introduce a novel approach for estimating 3D human pose even when observations are noisy. We propose a stochastic sampling strategy to propagate the noise from the image plane to the shape space. This provides a set of ambiguous 3D shapes, which are virtually indistinguishable from their image projections. Disambiguation is then achieved by imposing kinematic constraints that guarantee the resulting pose resembles a 3D human shape. We validate the method on a variety of situations in which state-of-the-art 2D detectors yield either inaccurate estimations or partly miss some of the body parts.

  • Taller d'Humanoides

     Hernandez Juan, Sergi; Alenyà Ribas, Guillem; Torras, Carme
    Competitive project

  • Lock-in time-of-flight (ToF) cameras: a survey  Open access

     Foix Salmeron, Sergi; Alenyà Ribas, Guillem; Torras, Carme
    IEEE sensors journal
    Vol. 11, num. 9, p. 1917-1926
    DOI: 10.1109/JSEN.2010.2101060
    Date of publication: 2011
    Journal article

    This paper reviews the state of the art in the field of lock-in ToF cameras, their advantages, their limitations, the existing calibration methods, and the way they are being used, sometimes in combination with other sensors. Even though lock-in ToF cameras provide neither higher resolution nor larger ambiguity-free range compared to other range map estimation systems, advantages such as registered depth and intensity data at a high frame rate, compact design, low weight and reduced power consumption have motivated their increasing usage in several research areas, such as computer graphics, machine vision and robotics.

    Postprint (author’s final draft)
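
As background for the ambiguity-free range mentioned in the abstract: a lock-in ToF camera recovers depth from the phase shift of an amplitude-modulated signal, so measurements wrap around every half modulation wavelength, c / (2 · f_mod). A quick check (the 30 MHz figure is a typical modulation frequency, not a value from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(f_mod_hz):
    """Maximum distance measurable without phase wrapping:
    half the modulation wavelength, c / (2 * f_mod)."""
    return C / (2.0 * f_mod_hz)

r = unambiguous_range(30e6)  # 30 MHz modulation -> about 5 m
```

Halving the modulation frequency doubles the unambiguous range, at the cost of depth resolution.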

  • Towards plant monitoring through next best view  Open access

     Foix Salmeron, Sergi; Alenyà Ribas, Guillem; Torras, Carme
    International Conference of the Catalan Association for Artificial Intelligence
    p. 101-109
    DOI: 10.3233/978-1-60750-842-7-101
    Presentation's date: 2011
    Presentation of work at congresses

    Monitoring plants using leaf feature detection is a challenging perception task because different leaves, even from the same plant, may have very different shapes, sizes and deformations. In addition, leaves may be occluded by other leaves making it hard to determine some of their characteristics. In this paper we use a Time-of-Flight (ToF) camera mounted on a robot arm to acquire the depth information needed for plant leaf detection. Under a Next Best View (NBV) paradigm, we propose a criterion to compute a new camera position that offers a better view of a target leaf. The proposed criterion exploits some typical errors of the ToF camera, which are common to other 3D sensing devices as well. This approach is also useful when more than one leaf is segmented as the same region, since moving the camera following the same NBV criterion helps to disambiguate this situation.

    Postprint (author’s final draft)

  • Segmenting color images into surface patches by exploiting sparse depth data  Open access

     Dellen, Babette Karla Margarete; Alenyà Ribas, Guillem; Foix Salmeron, Sergi; Torras, Carme
    Winter Vision Meeting: Workshop on Applications of Computer Vision
    p. 591-598
    DOI: 10.1109/WACV.2011.5711558
    Presentation's date: 2011
    Presentation of work at congresses

    We present a new method for segmenting color images into their composite surfaces by combining color segmentation with model-based fitting utilizing sparse depth data, acquired using time-of-flight (Swissranger, PMD CamCube) and stereo techniques. The main target of our work is the segmentation of plant structures, i.e., leaves, from color-depth images, and the extraction of color and 3D shape information for automating manipulation tasks. Since segmentation is performed in the dense color space, even sparse, incomplete, or noisy depth information can be used. This kind of data often represents a major challenge for methods operating in the 3D data space directly. To achieve our goal, we construct a three-stage segmentation hierarchy by segmenting the color image with different resolutions, assuming that “true” surface boundaries must appear at some point along the segmentation hierarchy. 3D surfaces are then fitted to the color-segment areas using depth data. Those segments which minimize the fitting error are selected and used to construct a new segmentation. Then, an additional region merging and a growing stage are applied to avoid over-segmentation and label previously unclustered points. Experimental results demonstrate that the method is successful in segmenting a variety of domestic objects and plants into quadratic surfaces. At the end of the procedure, the sparse depth data is completed using the extracted surface models, resulting in dense depth maps. For stereo, the resulting disparity maps are compared with ground truth and the average error is computed.

    Postprint (author’s final draft)

  • 3D modelling of leaves from color and ToF data for robotized plant measuring

     Alenyà Ribas, Guillem; Dellen, Babette Karla Margarete; Torras, Carme
    IEEE International Conference on Robotics and Automation
    p. 3408-3414
    DOI: 10.1109/ICRA.2011.5980092
    Presentation's date: 2011
    Presentation of work at congresses

    Supervision of long-lasting extensive botanic experiments is a promising robotic application that some recent technological advances have made feasible. Plant modelling for this application has strong demands, particularly in what concerns 3D information gathering and speed. This paper shows that Time-of-Flight (ToF) cameras achieve a good compromise between both demands, providing a suitable complement to color vision. A new method is proposed to segment plant images into their composite surface patches by combining hierarchical color segmentation with quadratic surface fitting using ToF depth data. Experimentation shows that the interpolated depth maps derived from the obtained surfaces fit well the original scenes. Moreover, candidate leaves to be approached by a measuring instrument are ranked, and then robot-mounted cameras move closer to them to validate their suitability to being sampled. Some ambiguities arising from leaves overlap or occlusions are cleared up in this way. The work is a proof-of-concept that dense color data combined with sparse depth as provided by a ToF camera yields a good enough 3D approximation for automated plant measuring at the high throughput imposed by the application.

  • Active perception of deformable objects using 3D cameras  Open access

     Alenyà Ribas, Guillem; Moreno Noguer, Francesc d'Assis; Ramisa Ayats, Arnau; Torras, Carme
    Workshop Español de Robótica
    p. 434-440
    Presentation's date: 2011
    Presentation of work at congresses

    Perception and manipulation of rigid objects has received a lot of attention, and several solutions have been proposed. In contrast, dealing with deformable objects is a relatively new and challenging task because they are more complex to model, their state is difficult to determine, and self-occlusions are common and hard to estimate. In this paper we present our progress and results in the perception of deformable objects, both using conventional RGB cameras and active sensing strategies by means of depth cameras. We provide insights in two different areas of application: grasping of textiles and plant leaf modelling.

    Postprint (author’s final draft)

  • Determining where to grasp cloth using depth information  Open access

     Ramisa Ayats, Arnau; Alenyà Ribas, Guillem; Moreno Noguer, Francesc d'Assis; Torras, Carme
    International Conference of the Catalan Association for Artificial Intelligence
    p. 199-207
    DOI: 10.3233/978-1-60750-842-7-199
    Presentation's date: 2011
    Presentation of work at congresses

    In this paper we address the problem of finding an initial good grasping point for the robotic manipulation of textile objects lying on a flat surface. Given as input a point cloud of the cloth acquired with a 3D camera, we propose choosing as grasping points those that maximize a new measure of wrinkledness, computed from the distribution of normal directions over local neighborhoods. Real grasping experiments using a robotic arm are performed, showing that the proposed measure leads to promising results.

    Postprint (author’s final draft)
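
The wrinkledness measure — computed from the distribution of normal directions over local neighborhoods — can be sketched as the dispersion of unit normals. This is an illustrative reconstruction from the abstract, not the authors' exact formula:

```python
import numpy as np

def wrinkledness(normals):
    """Dispersion of unit surface normals over a local neighborhood:
    1 - |mean normal|. A flat patch scores 0; the score grows as the
    normals spread over the sphere."""
    n = np.asarray(normals, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    return 1.0 - np.linalg.norm(n.mean(axis=0))

flat = [[0.0, 0.0, 1.0]] * 8                              # all normals agree
crumpled = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, -1]]  # normals disagree
```

Grasping points would then be chosen where this score is maximal over the point cloud.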

  • GARNICS: Gardening with a Cognitive System (FP7-ICT-247947)

     Moreno Noguer, Francesc d'Assis; Torras, Carme; Agostini, Alejandro Gabriel; Husain, Syed Farzad; Dellen, Babette Karla Margarete; Alenyà Ribas, Guillem; Jimenez Schlegl, Pablo; Thomas Arroyo, Federico; Rozo Castañeda, Leonel; Foix Salmeron, Sergi
    Competitive project

  • Exploitation of time-of-flight (ToF) cameras  Open access

     Foix Salmeron, Sergi; Alenyà Ribas, Guillem; Torras, Carme
    Date: 2010
    Report

    This technical report reviews the state of the art in the field of ToF cameras, their advantages, their limitations, and their present-day applications, sometimes in combination with other sensors. Even though ToF cameras provide neither higher resolution nor larger ambiguity-free range compared to other range map estimation systems, advantages such as registered depth and intensity data at a high frame rate, compact design, low weight and reduced power consumption have motivated their use in numerous areas of research. In robotics, these areas range from mobile robot navigation and map building to vision-based human motion capture and gesture recognition, showing particularly great potential in object modeling and recognition.

  • iCub platform: IIT workshop in Genova  Open access

     Hernandez Juan, Sergi; Alenyà Ribas, Guillem
    Date: 2010
    Report

    This document outlines the most important concepts presented during a workshop about the iCub robot held at the Istituto Italiano di Tecnologia in Genova. Mechanical, electronic, firmware and software issues are presented, and the basic procedures to detect and solve the most common problems are described. The most important goal of this workshop was to acquire the necessary skills to perform the most basic maintenance of the robot without having to depend on support from IIT. A brief introduction to the main issues of the control of the robot was also provided.

  • Diseño de un pie para un robot humanoide  Open access

     Barbadillo Villanueva, Guillermo; Alenyà Ribas, Guillem
    Date: 2010
    Report

    The proposal is framed within the Humanoid Lab project of the Institut de Robòtica i Informàtica Industrial (IRI). The group has several educational humanoid platforms (Robonova and Bioloid). A first version of a force-feedback system for the feet of these robots, developed within the group itself, works correctly but has some deficiencies and limitations that this work aims to remedy. The goal of this work is to design and implement a new foot sensor system.

  • Camera motion estimation by tracking contour deformation: precision analysis  Open access

     Alenyà Ribas, Guillem; Torras, Carme
    Image and vision computing
    Vol. 28, num. 3, p. 474-490
    DOI: 10.1016/j.imavis.2009.07.011
    Date of publication: 2010-03
    Journal article

    An algorithm to estimate camera motion from the progressive deformation of a tracked contour in the acquired video stream has been previously proposed. It relies on the fact that two views of a plane are related by an affinity, whose six parameters can be used to derive the six degrees of freedom of camera motion between the two views. In this paper we evaluate the accuracy of the algorithm. Monte Carlo simulations show that translations parallel to the image plane and rotations about the optical axis are better recovered than translations along this axis, which in turn are more accurate than rotations out of the plane. Concerning covariances, only the three less precise degrees of freedom appear to be correlated. In order to obtain means and covariances of 3D motions quickly on a working robot system, we resort to the Unscented Transformation (UT), requiring only 13 samples per view, after validating its usage through the previous Monte Carlo simulations. Two sets of experiments have been performed: short-range motion recovery has been tested using a Stäubli robot arm in a controlled lab setting, while the precision of the algorithm when facing long translations has been assessed by means of a vehicle-mounted camera on a factory floor. In the latter, more unfavourable case, the obtained errors are around 3%, which seems accurate enough for transferring operations.
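
The 13 samples per view follow from the standard unscented transformation: an n-dimensional state needs 2n + 1 sigma points, and camera motion has n = 6 degrees of freedom. A minimal sigma-point construction, assuming the common scaled formulation (the λ value here is arbitrary):

```python
import numpy as np

def sigma_points(mean, cov, lam=2.0):
    """2n + 1 sigma points of the unscented transformation: the mean
    plus symmetric displacements along the columns of the scaled
    Cholesky factor of the covariance."""
    mean = np.asarray(mean, dtype=float)
    n = mean.size
    L = np.linalg.cholesky((n + lam) * np.asarray(cov, dtype=float))
    pts = [mean]
    for i in range(n):
        pts.append(mean + L[:, i])
        pts.append(mean - L[:, i])
    return np.array(pts)

# 6-DoF camera motion (3 translations + 3 rotations) -> 13 samples.
pts = sigma_points(np.zeros(6), np.eye(6) * 0.01)
```

Propagating these 13 samples through the motion-recovery algorithm yields the mean and covariance far more cheaply than a Monte Carlo run.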

  • Object modeling using a ToF camera under an uncertainty reduction approach  Open access

     Foix Salmeron, Sergi; Alenyà Ribas, Guillem; Andrade Cetto, Juan; Torras, Carme
    IEEE International Conference on Robotics and Automation
    p. 1306-1312
    DOI: 10.1109/ROBOT.2010.5509197
    Presentation's date: 2010
    Presentation of work at congresses

    Time-of-Flight (ToF) cameras deliver 3D images at 25 fps, offering great potential for developing fast object modeling algorithms. Surprisingly, this potential has not been extensively exploited up to now. A reason for this is that, since the acquired depth images are noisy, most of the available registration algorithms are hardly applicable. A further difficulty is that the transformations between views are in general not accurately known, a circumstance that multi-view object modeling algorithms do not handle properly under noisy conditions. In this work, we take into account both uncertainty sources (in images and camera poses) to generate spatially consistent 3D object models fusing multiple views with a probabilistic approach. We propose a method to compute the covariance of the registration process, and apply an iterative state estimation method to build object models under noisy conditions.

  • Planning stacking operations with an unknown number of objects  Open access

     Trilla Romero, Lluís; Alenyà Ribas, Guillem
    International Conference on Informatics in Control, Automation and Robotics
    p. 348-353
    Presentation's date: 2010
    Presentation of work at congresses

    A planning framework is proposed for the task of cleaning a table and stacking an unknown number of objects of different sizes on a tray. We propose to divide this problem in two and combine two different planning algorithms. One plans hand motions in Euclidean space, allowing the hand to move in a noisy scenario using a novel Time-of-Flight (ToF) camera to perceive the environment. The other chooses the strategy to effectively clean the table, considering the symbolic positions of the objects, and also their sizes for stacking considerations. Our formulation does not use information about the number of objects available, and thus is general in this sense. It can also deal with different object sizes, planning adequately to stack them. The special definition of the possible actions allows a simple and elegant way of characterizing the problem, and is one of the key ingredients of the proposed solution. Experiments in simulated and real scenarios validate our approach.

  • 3D object reconstruction from Swissranger sensor data using a spring-mass model

     Dellen, Babette Karla Margarete; Alenyà Ribas, Guillem; Foix Salmeron, Sergi; Torras, Carme
    Date of publication: 2009
    Book chapter


  • A comparison of three methods for measure of time to contact  Open access

     Alenyà Ribas, Guillem; Nègre, Amaury; Crowley, James L
    IEEE/RSJ International Conference on Intelligent Robots and Systems
    p. 4565-4570
    Presentation's date: 2009-10-15
    Presentation of work at congresses


    Time to Contact (TTC) is a biologically inspired method for obstacle detection and reactive control of motion that does not require scene reconstruction or 3D depth estimation. Estimating TTC is difficult because it requires a stable and reliable estimate of the rate of change of distance between image features. In this paper we propose a new method to measure time to contact, Active Contour Affine Scale (ACAS). We experimentally and analytically compare ACAS with two other recently proposed methods: Scale Invariant Ridge Segments (SIRS) and Image Brightness Derivatives (IBD). Our results show that ACAS provides a more accurate estimation of TTC when the image flow may be approximated by an affine transformation, while SIRS provides an estimate that is generally valid but not always as accurate as ACAS, and IBD systematically over-estimates time to contact.
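Whatever estimator is used, TTC itself reduces to apparent scale divided by its rate of change, tau = s / (ds/dt): no depth is needed because the unknown focal length and object size cancel in the ratio. A minimal finite-difference sketch of this idea (illustrative only, not ACAS):

```python
def time_to_contact(scales, dt):
    """Estimate TTC (in seconds) from successive apparent scales of an
    approaching object, using tau = s / (ds/dt) with a backward
    finite difference; the discretization overestimates by about dt."""
    ttc = []
    for s_prev, s_curr in zip(scales, scales[1:]):
        ds = (s_curr - s_prev) / dt          # rate of apparent growth
        ttc.append(s_curr / ds if ds > 0 else float('inf'))
    return ttc

# Constant-velocity approach: depth Z(t) = 10 - t, apparent scale s ~ 1/Z
scales = [1 / (10 - t) for t in range(4)]
estimates = time_to_contact(scales, dt=1.0)
```

The three compared methods differ precisely in how they obtain a stable scale signal `s`: from an affine active contour (ACAS), ridge segments (SIRS), or image brightness derivatives (IBD).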

  • Time to contact for obstacle avoidance

     Alenyà Ribas, Guillem; Nègre, Amaury; Crowley, James L
    European Conference on Mobile Robots
    p. 19-24
    Presentation's date: 2009
    Presentation of work at congresses


  • Humanoid robotics and human-centered initiatives at IRI  Open access

     Alenyà Ribas, Guillem; Hernandez Juan, Sergi; Andrade Cetto, Juan; Sanfeliu Cortes, Alberto; Torras, Carme
    Jornadas de Automática
    p. 1-8
    Presentation's date: 2009
    Presentation of work at congresses


    The interest of the robotics community in humanoid robots is growing, especially in perception, scene understanding and manipulation in human-centered environments, as well as in human-robot interaction. Moreover, humanoid robotics is one of the main research areas promoted by the European research program. Here we present some projects and educational initiatives in this direction carried out at the Institut de Robòtica i Informàtica Industrial, CSIC-UPC.


  • Recovering the epipolar direction from two affine views of a planar object  Open access

     Alberich Carramiñana, Maria; Alenyà Ribas, Guillem; Martinez, Elisa; Andrade Cetto, Juan; Torras, Carme
    Computer vision and image understanding
    Vol. 112, num. 2, p. 195-209
    DOI: 10.1016/j.cviu.2008.02.007
    Date of publication: 2008-11
    Journal article


    The mainstream approach to estimate epipolar geometry from two views requires matching the projections of at least 4 non-coplanar points in the scene, assuming a full projective camera model. Our work deviates from this in three respects: affine camera, planar scene and active contour tracking instead of point matching. Using results in Projective Geometry, we prove that the affine epipolar direction can be recovered provided camera motion is free of cyclorotation. A setup consisting of a Staubli robot holding a planar object in front of a camera is used to obtain calibrated image streams, which are used as ground truth to evaluate the performance of the method, and to test its limiting conditions in practice. The fact that our method (applicable to planar, poorly textured scenes) and the Gold Standard algorithm (applicable to highly textured scenes with significant relief) produce comparable results shows the potential of our proposal.

  • Monocular object pose computation with the foveal-peripheral camera of the humanoid robot Armar-III

     Alenyà Ribas, Guillem; Torras, Carme
    International Conference of the Catalan Association for Artificial Intelligence
    p. 355-362
    Presentation's date: 2008
    Presentation of work at congresses




  • Depth from the visual motion of a planar target induced by zooming

     Alenyà Ribas, Guillem; Alberich Carramiñana, Maria; Torras, Carme
    IEEE International Conference on Robotics and Automation
    p. 4727-4732
    Presentation of work at congresses


  • Affine epipolar direction from two views of a planar contour

     Alberich Carramiñana, Maria; Alenyà Ribas, Guillem; Andrade Cetto, Juan; Martínez, Elisa; Torras, Carme
    Advanced concepts for intelligent vision systems. 8th international conference
    p. 944-955
    Presentation of work at congresses


  • Affine epipolar direction from two views of a planar contour

     Alberich Carramiñana, Maria; Alenyà Ribas, Guillem; Andrade Cetto, Juan; Martínez, Elisa; Torras, Carme
    Lecture notes in computer science
    Vol. 4179, p. 944-955
    Date of publication: 2006
    Journal article


  • Fusing visual and inertial sensing to recover robot egomotion

     Alenyà Ribas, Guillem; Martínez, E; Torras, Carme
    Journal of robotic systems
    Vol. 21, num. 1, p. 23-32
    Date of publication: 2004-01
    Journal article


  • Fusing visual contour tracking with inertial sensing to recover robot egomotion

     Alenyà Ribas, Guillem; Martínez Castro, Elena; Torras, Carme
    International Conference on Advanced Robotics
    p. 1891-1898
    Presentation of work at congresses


  • Programari de base del robot mòbil STAFF

     Alenyà Ribas, Guillem; Gay, Jordi; Ibañez Borau, Jose Miguel; Marzàbal, Albert; Martinez Velasco, Antonio Benito; Torrens Balet, Carles
    Date: 2000-05
    Report
