We present a novel social-aware navigation framework for a robot to walk side-by-side with people in crowded urban areas in a safe and natural way. The new system addresses the following key issues: it proposes a new social-aware navigation model for the robot to accompany a person; it extends the Social Force Model into an "Extended Social-Force Model" that accounts for the interactions between the person and the robot; it uses a human motion predictor to estimate the destination of the person the robot is walking with; and it interactively learns the parameters of the social-aware navigation model using multimodal human feedback. Finally, a quantitative metric based on people's personal spaces and comfort criteria is introduced to evaluate the performance of the robot's task. The model is validated through an extensive set of simulations and real-life experiments. In addition, a volunteer survey is used to measure the acceptability of our robot companion's behavior.
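The Social Force Model underlying this kind of framework treats each agent as driven by an attractive force toward its goal plus repulsive forces from nearby pedestrians. A minimal sketch of that idea follows; all function names, parameter values, and force terms are illustrative textbook defaults, not the paper's learned "Extended Social-Force Model" parameters:

```python
import numpy as np

def goal_force(pos, goal, vel=np.zeros(2), desired_speed=1.0, tau=0.5):
    """Attractive force steering the agent toward its goal at a desired speed."""
    direction = (goal - pos) / np.linalg.norm(goal - pos)
    return (desired_speed * direction - vel) / tau

def interaction_force(pos, other_pos, A=2.0, B=1.0):
    """Repulsive force exerted by another pedestrian, decaying with distance."""
    diff = pos - other_pos
    dist = np.linalg.norm(diff)
    return A * np.exp(-dist / B) * diff / dist

# Resultant force on a robot walking toward a goal among two pedestrians
robot = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
others = [np.array([1.0, 1.0]), np.array([2.0, -0.5])]
force = goal_force(robot, goal) + sum(interaction_force(robot, p) for p in others)
```

Integrating this force over time yields the robot's velocity command; the paper's extension adds a side-by-side attraction term toward the accompanied person, whose parameters are tuned from multimodal human feedback.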
The final publication is available at link.springer.com
Amor, A.; Santamaria, A.; Herrero, F.; Ruiz, A.; Sanfeliu, A. IEEE International Symposium on Safety, Security and Rescue Robotics, p. 15-20. DOI: 10.1109/SSRR.2016.7784271. Presented: 2016. Conference paper.
We present a featureless pose estimation method that, in contrast to current Perspective-n-Point (PnP) approaches, does not require n point correspondences to obtain the camera pose, allowing pose estimation from natural shapes that do not necessarily have distinctive features such as corners or intersecting edges. Instead of using n correspondences (e.g. extracted with a feature detector), we use the raw polygonal representation of the observed shape and directly estimate the pose in the pose space of the camera. Compared with a general PnP method, this method requires neither n point correspondences nor a priori knowledge of the object model (except its scale), which is registered with a picture taken from a known robot pose. Moreover, we achieve higher precision because all the information of the shape contour is used to minimize the area between the projected and the observed shape contours. To emphasize the non-use of n point correspondences between the projected template and the observed contour shape, we call the method Planar PØP. The method is demonstrated both in simulation and in a real application consisting of UAV localization, where comparisons with a precise ground truth are provided.
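The core of such an area-based objective can be sketched as follows. Note this is a simplified illustration: it projects a planar template with a homography and approximates the enclosed area by assuming both contours are sampled with the same number of points, whereas the actual method is correspondence-free; all names and the discretization scheme are our own:

```python
import numpy as np

def shoelace_area(poly):
    """Signed area of a closed polygon given as an (N, 2) array."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

def project(template, H):
    """Project planar template points with a 3x3 homography H."""
    pts = np.hstack([template, np.ones((len(template), 1))]) @ H.T
    return pts[:, :2] / pts[:, 2:3]

def area_cost(template, observed, H):
    """Approximate area enclosed between the projected and observed contours,
    summing the quadrilaterals spanned by corresponding edge pairs."""
    proj = project(template, H)
    total, n = 0.0, len(proj)
    for i in range(n):
        j = (i + 1) % n
        quad = np.array([proj[i], proj[j], observed[j], observed[i]])
        total += abs(shoelace_area(quad))
    return total
```

Minimizing `area_cost` over the pose (here parameterized by H) drives the projected template onto the observed contour; the cost is zero exactly when the two contours coincide.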
This technical report describes the work done to develop a new navigation scheme for an autonomous car-like robot available at the Mobile Robotics Laboratory at IRI. To plan the general path the robot should follow (i.e., the global planner), a search-based planning algorithm with motion primitives that take into account the kinematic constraints of the robot is used. To actually execute the path and avoid dynamic obstacles (i.e., the local planner), a modification of the DWA algorithm is used, which takes into account the kinematic constraints of the Ackermann configuration to generate and evaluate possible trajectories for the robot. The whole navigation scheme has been integrated into the ROS middleware navigation framework and tested both on the real robot and in a simulator.
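The key change an Ackermann-aware DWA makes is in how candidate trajectories are generated: commands are sampled over speed and steering angle (within the steering limit) and rolled out through a bicycle model, since the robot cannot turn in place. A minimal sketch, with illustrative parameters not taken from the report:

```python
import numpy as np

# Illustrative vehicle parameters (assumptions, not the report's values)
WHEELBASE = 1.2              # m
MAX_STEER = np.radians(30)   # steering limit implies a minimum turning radius
MAX_SPEED = 1.5              # m/s

def rollout(x, y, theta, v, steer, dt=0.1, steps=20):
    """Forward-simulate the bicycle (Ackermann) model for one candidate command."""
    traj = []
    for _ in range(steps):
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        theta += v * np.tan(steer) / WHEELBASE * dt
        traj.append((x, y, theta))
    return traj

def candidate_commands(n_v=5, n_s=7):
    """Sample the dynamic window over (speed, steering angle) pairs."""
    for v in np.linspace(0.2, MAX_SPEED, n_v):
        for steer in np.linspace(-MAX_STEER, MAX_STEER, n_s):
            yield v, steer

# Toy scoring: pick the command whose rollout endpoint is closest to the goal
goal = np.array([5.0, 1.0])
best = min(candidate_commands(),
           key=lambda c: np.linalg.norm(goal - np.array(rollout(0.0, 0.0, 0.0, *c)[-1][:2])))
```

A full local planner would also score each rollout for obstacle clearance and path adherence, but the trajectory-generation step above is where the Ackermann constraint enters.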
This technical report introduces the concepts, problems and a possible solution for ROS multi-master systems, that is, systems built from two or more ROS networks, each with its own roscore node. In general, this environment corresponds to multi-robot systems, comprising either mobile platforms or manipulators.
The ROS framework already provides a solution for such systems, multimaster_fkie, which is presented and briefly described in this technical report, together with the network setup necessary to make it work properly.
Two different configurations are discussed in this technical report, simple ROS networks with a single computer each, and more complex ROS networks with two or more computers each. In both cases, real examples are provided using robots available at IRI.
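A typical per-robot setup with multimaster_fkie looks roughly like the following config fragment. The IP address and the exact package names are placeholders and assumptions (package naming varies across ROS distributions, e.g. master_discovery_fkie vs. fkie_master_discovery); consult the report for the networks actually used at IRI:

```shell
# Each ROS network runs its own local master
export ROS_MASTER_URI=http://localhost:11311
export ROS_IP=192.168.1.10   # placeholder: this machine's address on the shared LAN

roscore &

# Discover the other roscores on the LAN (requires multicast to be enabled)
rosrun master_discovery_fkie master_discovery &

# Synchronize topics/services with the discovered remote masters
rosrun master_sync_fkie master_sync
```

Running discovery and synchronization on every network lets nodes on one roscore transparently see selected topics published under another.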
Huerta, I.; Ferrer, G.; Herrero, F.; Prati, A.; Sanfeliu, A. International Conference on Distributed Smart Cameras, p. 25:1-25:6. DOI: 10.1145/2659021.2659054. Presented: 2014. Conference paper.
In this paper, we propose a highly accurate and robust people detector that works well under highly variable and uncertain conditions, such as occlusions, false positives and missed detections. These adverse conditions, which originally motivated this research, occur when a robotic platform navigates in an urban environment; although the scope is originally within the robotics field, we believe our contributions can be extended to other fields. To this end, we propose a multimodal information fusion of laser and monocular camera information. Laser information is modelled using a set of weak classifiers (AdaBoost) to detect people. Camera information is processed using HOG descriptors to classify person/non-person with a linear SVM. A multi-hypothesis tracker tracks the position and velocity of each target, providing temporal information to the fusion and allowing recovery of detections even when the laser segmentation fails. Experimental results show that our feedback-based system outperforms previous state-of-the-art methods in performance and accuracy, and that near-real-time detection can be achieved.
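The fusion idea can be sketched as a per-candidate combination of the laser classifier score, the camera HOG-SVM score, and the tracker's temporal prior. The weights, threshold, and field names below are illustrative assumptions, not the paper's learned values:

```python
def fuse_detections(laser_score, hog_score, track_prior,
                    w_laser=0.4, w_hog=0.4, w_track=0.2):
    """Combine confidences from the laser classifier, the camera HOG-SVM,
    and the tracker's temporal prior into one fused score.
    Weights here are illustrative, not the paper's values."""
    return w_laser * laser_score + w_hog * hog_score + w_track * track_prior

def detect(candidates, threshold=0.5):
    """Accept a candidate as a person when the fused score clears a threshold.
    The tracker prior lets a target survive frames where laser segmentation fails."""
    return [c for c in candidates
            if fuse_detections(c['laser'], c['hog'], c['track']) >= threshold]

people = detect([
    {'laser': 0.9, 'hog': 0.8, 'track': 0.7},   # confident multimodal detection
    {'laser': 0.0, 'hog': 0.9, 'track': 0.9},   # laser failed, recovered by camera + track
    {'laser': 0.1, 'hog': 0.2, 'track': 0.0},   # background clutter, rejected
])
```

The second candidate illustrates the feedback loop described above: even with a zero laser score, the camera evidence and the tracker's temporal prior keep the detection alive.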