Santamaria, A.; Grosch, P.; Lippiello, V.; Solá, J.; Andrade-Cetto, J. IEEE/ASME Transactions on Mechatronics, Vol. 22, no. 4, pp. 1610-1621. DOI: 10.1109/TMECH.2017.2682283. Publication date: 2017-08-01. Journal article.
This paper addresses the problem of autonomous servoing of an unmanned redundant aerial manipulator using computer vision. The overactuation of the system is exploited by means of a hierarchical control law, which allows prioritizing several tasks during flight. We propose a safety-related primary task to avoid possible collisions. As a secondary task, we present an uncalibrated image-based visual servo strategy to drive the arm end-effector to a desired position and orientation using a camera attached to it. In contrast to previous visual servo approaches, a known value of the camera focal length is not strictly required. To further improve flight behavior, we hierarchically add one task that reduces dynamic effects by vertically aligning the arm center of gravity with the multirotor gravitational vector, and another that keeps the arm close to a desired configuration of high manipulability while avoiding arm joint limits. The performance of the hierarchical control law, with and without activation of each of the tasks, is shown in simulations and real experiments, confirming the viability of such a prioritized control scheme for aerial manipulation.
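The hierarchical control law described in the abstract follows the standard task-priority scheme, in which each lower-priority task is resolved in the null space of all higher-priority tasks so it cannot disturb them. A minimal sketch of that general scheme, assuming a velocity-level kinematic formulation (function and variable names are ours, not the paper's):

```python
import numpy as np

def task_priority_velocities(jacobians, task_velocities, n_dof):
    """Hierarchical (null-space) task-priority resolution.

    Tasks are given in decreasing priority. Each task contributes its
    desired velocity through the pseudoinverse of its Jacobian, projected
    into the null space of all higher-priority tasks.
    """
    q_dot = np.zeros(n_dof)
    N = np.eye(n_dof)  # null-space projector of the tasks processed so far
    for J, x_dot in zip(jacobians, task_velocities):
        JN = J @ N
        pinv_JN = np.linalg.pinv(JN)
        # correct the current solution without disturbing higher-priority tasks
        q_dot = q_dot + pinv_JN @ (x_dot - J @ q_dot)
        # shrink the null space by the rows consumed by this task
        N = N @ (np.eye(n_dof) - pinv_JN @ JN)
    return q_dot
```

With a redundant system, a secondary task (e.g. keeping a high-manipulability arm configuration) is satisfied exactly whenever it is compatible with the primary one, and is otherwise approximated in a least-squares sense within the remaining null space.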
We address in this paper the problem of loop closure detection for laser-based simultaneous localization and mapping (SLAM) of very large areas. Consistent with the state of the art, the map is encoded as a graph of poses, and to cope with very large mapping capabilities, loop closures are asserted by comparing the features extracted from a query laser scan against a previously acquired corpus of scan features using a bag-of-words (BoW) scheme. Two contributions are presented here. First, to benefit from the graph topology, feature frequency scores in the BoW are computed not only for each individual scan but also from neighboring scans in the SLAM graph. This has the effect of enforcing neighbor relational information during document matching. Second, a weak geometric check that takes into account feature ordering and occlusions is introduced, which substantially improves loop closure detection performance. The two contributions are evaluated both separately and jointly on four common SLAM datasets, and are shown to improve state-of-the-art performance in terms of both precision and recall in most cases. Moreover, our current implementation is designed to work at nearly frame rate, allowing loop closure query resolution at 22 Hz in the best case and 2 Hz in the worst case.
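The first contribution, computing feature frequency scores from neighboring scans as well as the scan itself, can be illustrated with a plain tf-idf weighting over graph-augmented documents. This is only our sketch of the idea; the names are illustrative and the paper's actual scoring may differ:

```python
import math
from collections import Counter

def bow_scores(scan_words, graph_neighbors):
    """tf-idf bag-of-words vectors where each scan's word counts are
    augmented with the words of its neighbors in the SLAM pose graph.

    scan_words: list of word lists, one per scan.
    graph_neighbors: dict mapping scan index -> list of neighbor indices.
    """
    n = len(scan_words)
    # term frequencies augmented with neighboring scans
    tf = []
    for i in range(n):
        c = Counter(scan_words[i])
        for j in graph_neighbors.get(i, []):
            c.update(scan_words[j])
        tf.append(c)
    # document frequency over the augmented documents
    df = Counter()
    for c in tf:
        df.update(c.keys())
    # tf-idf weight per word and scan
    vecs = []
    for c in tf:
        total = sum(c.values())
        vecs.append({w: (cnt / total) * math.log(n / df[w])
                     for w, cnt in c.items()})
    return vecs
```

Matching a query against such vectors (e.g. by cosine similarity) then implicitly rewards candidates whose graph neighborhood also resembles the query, which is the relational effect the abstract describes.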
Corominas, A.; Vallve, J.; Solá, J.; Flores, I.; Andrade-Cetto, J. IEEE International Conference on Robotics and Automation, pp. 3161-3166. DOI: 10.1109/ICRA.2016.7487484. Presentation date: 2016-06. Conference paper.
Localization is the key perceptual process closing the loop of autonomous navigation, allowing self-driving vehicles to operate in a deliberate way. To ensure robust localization, autonomous vehicles have to implement redundant estimation processes, ideally independent in terms of the underlying physics behind sensing principles. This paper presents a stereo radar odometry system, which can be used as such a redundant system, complementary to other odometry estimation processes, providing robustness for long-term operability. The presented work is novel with respect to previously published methods in that it contains: (i) a detailed formulation of the Doppler error and its associated uncertainty; (ii) an observability analysis that gives the minimal conditions to infer a 2D twist from radar readings; and (iii) a numerical analysis for optimal vehicle sensor placement. Experimental results are also detailed that validate the theoretical insights.
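The 2D twist inference from radar readings mentioned in point (ii) can be illustrated with a simple least-squares sketch: each Doppler return on a static target constrains the projection of the sensor's velocity, v + ω × r, along the line of sight, and returns from two radars at different mounting positions make the rotation rate observable. This is only our illustration of the underlying geometry; the paper's formulation additionally models the Doppler error and its uncertainty:

```python
import numpy as np

def twist_from_doppler(detections):
    """Least-squares 2D twist (vx, vy, w) from Doppler returns on static
    targets. Each detection is (ax, ay, rx, ry, d), where (ax, ay) is a
    unit line-of-sight direction in the vehicle frame, (rx, ry) is the
    mounting position of the radar that produced it, and d is the
    measured Doppler (radial) velocity."""
    A, b = [], []
    for ax, ay, rx, ry, d in detections:
        # velocity of the sensor point: v + w x r = (vx - w*ry, vy + w*rx)
        A.append([ax, ay, -ax * ry + ay * rx])
        # static target: Doppler is minus the projected ego-velocity
        b.append(-d)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # (vx, vy, w)
```

With a single radar at the origin, the third column vanishes for all rows and ω is unobservable; placing the two radars far apart improves the conditioning of that column, which is the intuition behind the sensor-placement analysis in point (iii).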
Event-based cameras have great potential in real-time, real-world robotics. They would enable more efficient algorithms in applications where demanding requirements, such as rapid dynamic motion and high dynamic range, make standard cameras run into problems. While traditional cameras follow the frame-based paradigm (a shutter captures a certain number of pictures per second), the bio-inspired event cameras have pixels that respond independently to changes of log-intensity, generating asynchronous events. A special appeal of this type of camera is its low bandwidth, since the stream of events contains all the information without redundancy. These sensors, which mimic some properties of the human retina, have microsecond latency and a 120 dB dynamic range (in contrast to the 60 dB of standard cameras).
However, the impact of event cameras has so far been limited by the need for completely new algorithms: there is no global measurement of intensity that would allow the use of existing methods. The fact that an event corresponds to an asynchronous local intensity difference makes recovering both the motion and the scene a challenging problem. This article illustrates the several problems that must be faced when dealing with event data, and some of the different approaches taken.
First, we explain the generative model of the event camera and the necessary preliminaries, followed by the different approaches. Finally, we present the conclusions and a glossary of the code.
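As a preview of the generative model discussed in the article: in the standard idealized model, a pixel fires an event whenever its log-intensity changes by a contrast threshold C since its last event, with a polarity indicating the sign of the change. A minimal sketch under that model, comparing two log-intensity frames (function names and the threshold value are illustrative):

```python
import numpy as np

def generate_events(log_I_prev, log_I_curr, C=0.2):
    """Idealized event generation: pixel (x, y) emits one event per
    threshold crossing of its log-intensity change, with polarity +1
    (brighter) or -1 (darker)."""
    events = []
    diff = log_I_curr - log_I_prev
    ys, xs = np.nonzero(np.abs(diff) >= C)
    for y, x in zip(ys, xs):
        pol = 1 if diff[y, x] > 0 else -1
        # a change of k*C produces k events at this pixel
        n = int(np.abs(diff[y, x]) // C)
        events.extend([(int(x), int(y), pol)] * n)
    return events
```

A real sensor emits each event asynchronously with a microsecond timestamp rather than per frame pair; the frame-difference form above is only a convenient way to state the threshold-crossing rule.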
The lack of depth information in camera images has triggered much work on their use for localization and mapping in robotics. In particular, specific landmark parametrizations have been proposed that isolate the unknown depth in one variable and allow handling the associated large uncertainties. Recently, an innovative parametrization (Parallax Angle) has been shown to outperform the others in the context of a Bundle Adjustment approach. This paper investigates how to exploit this parametrization in an incremental graph-based SLAM approach, in a robotics context in which motion measurements can be incorporated into the overall estimation. It presents the factors required to initialize landmarks and manage their observations. Simulation results show that the proposed algorithms are able to incrementally incorporate observations, and a discussion analyzes how the incremental updates of ISAM2 are affected by these new factors.
Santamaria, A.; Solá, J.; Andrade-Cetto, J. IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1864-1871. DOI: 10.1109/IROS.2015.7353621. Presentation date: 2015. Conference paper.
This paper develops a new method for 3D, high-rate vehicle state estimation, especially designed for free-flying Micro Aerial Vehicles (MAVs). We fuse observations from low-cost inertial and optical flow measurement units, and extend the current use of these optical sensors from hovering to odometry estimation. Two Kalman filters, in extended and error-state versions, are developed and benchmarked alongside a large number of algorithm variations, using both simulations and real experiments with precise ground truth. In contrast to state-of-the-art visual-inertial odometry methods, the proposed solution does not require image processing in the main CPU. Instead, the data correction takes advantage of recently introduced optical flow sensors, which directly provide metric information about the MAV motion. We hence reduce the computational load of the main processing unit and obtain an accurate estimation of the vehicle state at a high update rate.
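The fusion scheme can be illustrated with a toy 1D Kalman filter in which the IMU acceleration drives the prediction and the optical flow sensor provides a direct metric velocity measurement for the update. This is only a sketch of the general idea under simplified linear dynamics; the paper develops full 3D extended and error-state filters, and all noise levels below are illustrative:

```python
import numpy as np

def kf_fuse(x, P, a_meas, v_flow, dt, sigma_a=0.5, sigma_v=0.1):
    """One predict/update cycle of a 1D position-velocity Kalman filter.

    x = [position, velocity]; a_meas is the IMU acceleration reading,
    v_flow the metric velocity reported by the optical flow sensor.
    """
    # predict: constant-acceleration kinematics driven by the IMU
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt ** 2, dt])
    Q = sigma_a ** 2 * np.outer(B, B)   # process noise from accel noise
    x = F @ x + B * a_meas
    P = F @ P @ F.T + Q
    # update: optical flow observes the velocity component directly
    H = np.array([[0.0, 1.0]])
    S = H @ P @ H.T + sigma_v ** 2
    K = P @ H.T / S
    x = x + (K * (v_flow - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Because the optical flow correction is metric and cheap, the filter runs at the IMU rate without any image processing on the main CPU, which is the computational advantage the abstract highlights.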
The paper has supplementary downloadable material available at http://ieeexplore.ieee.org, provided by the authors. The material includes a video of the state estimation presented in the paper.