Obtaining realistic 3D models of urban scenes from accurate range data is an important research topic, with applications in fields ranging from Cultural Heritage and digital 3D archiving to the monitoring of public works. Processing massive point clouds acquired with laser scanners involves a number of challenges, from data management to noise removal, model compression, and interactive visualization and inspection. In this paper, we present a new methodology for the reconstruction of 3D scenes from massive point clouds produced by lidar range sensors. Our proposal includes a panorama-based compact reconstruction in which colors and normals are estimated robustly through an error-aware algorithm that takes into account the variance of the expected errors in the depth measurements. Our representation supports efficient, GPU-based visualization with advanced lighting effects. We discuss the proposed algorithms in a practical application to urban and historical preservation, with a scene described by a massive point cloud of 3.5 billion points. We show that we can achieve compression rates higher than 97% while maintaining good visual quality during interactive inspection.
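A minimal sketch of why a panorama-based layout compresses scanner-centered lidar data so aggressively: each point collapses to a single depth sample in an angular grid. The function names and the equirectangular parameterization below are illustrative assumptions, not the paper's exact layout or its error-aware estimation.

```python
import math

def point_to_panorama(x, y, z):
    """Project a 3D point (scanner at the origin) to panorama coordinates.

    Returns (azimuth, elevation, depth), angles in radians. Hypothetical
    helper: the paper's actual panorama layout is not reproduced here.
    """
    depth = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)
    elevation = math.asin(z / depth)
    return azimuth, elevation, depth

def panorama_to_point(azimuth, elevation, depth):
    """Inverse projection back to Cartesian coordinates."""
    x = depth * math.cos(elevation) * math.cos(azimuth)
    y = depth * math.cos(elevation) * math.sin(azimuth)
    z = depth * math.sin(elevation)
    return x, y, z
```

In such a layout the angular coordinates become implicit grid indices, so only one depth (plus color and normal attributes) needs to be stored per cell instead of three coordinates per point.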
Sunet, M.; Comino, M.; Karatzas, D.; Chica, A.; Vazquez, P. IADIS International Journal on Computer Science and Information Systems Vol. 11, num. 2, p. 1-18. Publication date: 2016-12-20. Journal article
Despite the large number of methods and applications of augmented reality, there is little homogenization in the software platforms that support them. An exception may be the low-level control software provided by some high-profile vendors such as Qualcomm and Metaio. However, these provide fine-grained modules for, e.g., element tracking. We are more concerned with the application framework, which includes the control of the devices working together for the development of the AR experience. In this paper we describe the development of a software framework for AR setups. We concentrate on the modular design of the framework, but also on some hard problems such as the calibration stage, which is crucial for projection-based AR. The developed framework is suitable for, and has been tested in, AR applications using camera-projector pairs, for both fixed and nomadic setups.
Sunet, M.; Comino, M.; Karatzas, D.; Chica, A.; Vazquez, P. International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing p. 133-140. Presentation date: 2016-07. Conference paper
Despite the large number of methods and applications of augmented reality, there is little homogenization in the software platforms that support them. An exception may be the low-level control software provided by some high-profile vendors such as Qualcomm and Metaio. However, these provide fine-grained modules for, e.g., element tracking. We are more concerned with the application framework, which includes the control of the devices working together for the development of the AR experience. In this paper we present a software framework that can be used for the development of AR applications based on camera-projector pairs and that is suitable for both fixed and nomadic setups.
Hernando, R.; Chica, A.; Vazquez, P. International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision p. 89-96. Presentation date: 2016-06. Conference paper
Skin is one of the most difficult materials to reproduce in computer graphics, mainly due to two major factors: first, the complexity of the light interactions happening at the subsurface layers of skin, and second, the high sensitivity of our perceptual system to the artificial imperfections commonly appearing in synthetic skin models. Many current approaches mix physically-based algorithms with image-based improvements to achieve realistic skin rendering in real time. Unfortunately, those algorithms still suffer from artifacts such as halos or incorrect diffusion. Some of these artifacts (e.g., incorrect diffusion) are especially noticeable if the models have not been previously segmented. In this paper we present some extensions to the Separable Subsurface Scattering (SSSS) framework that reduce those artifacts while still maintaining a high framerate. The result is an improved algorithm that achieves high-quality rendering for models obtained directly from scanners, requiring no further processing.
State-of-the-art approaches for tree reconstruction either put limiting constraints on the input side (requiring multiple photographs, a scanned point cloud or intensive user input) or provide a representation only suitable for front views of the tree. In this paper we present a complete pipeline for synthesizing and rendering detailed trees from a single photograph with minimal user effort. Since the overall shape and appearance of each tree is recovered from a single photograph of the tree crown, artists can benefit from georeferenced images to populate landscapes with native tree species. A key element of our approach is a compact representation of dense tree crowns through a radial distance map. Our first contribution is an automatic algorithm for generating such representations from a single exemplar image of a tree. We create a rough estimate of the crown shape by solving a thin-plate energy minimization problem, and then add detail through a simplified shape-from-shading approach. The use of seamless texture synthesis results in an image-based representation that can be rendered from arbitrary view directions at different levels of detail. Distant trees benefit from an output-sensitive algorithm inspired by relief mapping. For close-up trees we use a billboard cloud where leaflets are distributed inside the crown shape through a space colonization algorithm. In both cases our representation ensures efficient preservation of the crown shape. Major benefits of our approach include: it recovers the overall shape from a single tree image, involves no tree modeling knowledge and minimal authoring effort, and the associated image-based representation is easy to compress and thus suitable for network streaming.
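As a toy illustration of how compact a radial distance map is, the sketch below decodes a small grid of radii into crown-surface points, assuming a simple latitude/longitude parameterization around the crown centre. The grid layout and function name are hypothetical, not the paper's exact scheme.

```python
import math

def radial_map_to_points(radii, center=(0.0, 0.0, 0.0)):
    """Expand a radial distance map into 3D crown-surface points.

    `radii[i][j]` is the distance from the crown centre to its surface in
    the direction given by elevation row i and azimuth column j. This is
    an illustrative decoding under an assumed equirectangular layout.
    """
    rows, cols = len(radii), len(radii[0])
    pts = []
    for i in range(rows):
        # row centres cover elevations in (-pi/2, pi/2)
        elevation = math.pi * (i + 0.5) / rows - math.pi / 2
        for j in range(cols):
            azimuth = 2.0 * math.pi * j / cols
            r = radii[i][j]
            pts.append((center[0] + r * math.cos(elevation) * math.cos(azimuth),
                        center[1] + r * math.cos(elevation) * math.sin(azimuth),
                        center[2] + r * math.sin(elevation)))
    return pts
```

Because the crown boundary is stored as one scalar per direction, such a map compresses like an ordinary image, which is consistent with the streaming-friendliness claimed above.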
Karatzas, D.; Poulain, V.; Rusiñol, M.; Chica, A.; Vazquez, P. IAPR International Workshop on Document Analysis Systems p. 369-374 DOI: 10.1109/DAS.2016.65. Presentation date: 2016-04. Conference paper
All indications show that paper documents will not cede in favour of their digital counterparts, but will instead be used increasingly in conjunction with digital information. An open challenge is how to seamlessly link the physical with the digital – how to continue taking advantage of the important affordances of paper, without missing out on digital functionality. This paper presents the authors’ experience with developing systems for Human-Document Interaction based on augmented document interfaces and examines new challenges and opportunities arising for the document image analysis field in this area. The system presented combines state-of-the-art camera-based document image analysis techniques with a range of complementary technologies to offer fluid Human-Document Interaction. Both fixed and nomadic setups are discussed that have gone through user testing in real-life environments, and use cases are presented that span the spectrum from business to educational applications.
We discuss biharmonic fields which approximate signed distance fields, and conclude that the biharmonic field approximation can be a powerful tool for mesh completion in general and complex cases. We present an adaptive, multigrid algorithm to extrapolate signed distance fields. By defining a volume mask in a closed region bounding the area that must be repaired, the algorithm computes a signed distance field in well-defined regions and uses it as an over-determined boundary-condition constraint for the biharmonic field computation in the remaining regions. The algorithm operates locally, within an expanded bounding box of each hole, and therefore scales well with the number of holes in a single, complex model. We discuss this approximation in practical examples on triangular meshes resulting from laser-scan acquisitions that require massive hole repair. We conclude that the proposed algorithm is robust and general, and is able to deal with complex topological cases.
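A 1D toy version of the idea can be sketched as follows: known signed-distance samples act as over-determined boundary constraints, and the unknown samples are solved so that the discrete biharmonic stencil vanishes at every interior cell. This is a direct dense solve on a tiny grid, assuming the two outermost samples on each side are known; the paper's adaptive multigrid machinery and 3D volume masks are not reproduced.

```python
def solve_dense(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def biharmonic_extrapolate(samples):
    """Fill the None entries of a 1D signed-distance array so that the
    discrete biharmonic stencil u[i-2]-4u[i-1]+6u[i]-4u[i+1]+u[i+2] = 0
    holds at every unknown interior cell; known samples are constraints.

    Assumes the two outermost samples on each side are known, so the
    system is square and well-posed (a clamped boundary in 1D).
    """
    n = len(samples)
    unknown = [i for i, v in enumerate(samples) if v is None]
    idx = {i: k for k, i in enumerate(unknown)}
    m = len(unknown)
    A = [[0.0] * m for _ in range(m)]
    b = [0.0] * m
    stencil = {-2: 1.0, -1: -4.0, 0: 6.0, 1: -4.0, 2: 1.0}
    row = 0
    for i in range(2, n - 2):
        if samples[i] is not None:
            continue  # enforce the stencil only at unknown cells
        for off, w in stencil.items():
            j = i + off
            if samples[j] is None:
                A[row][idx[j]] += w
            else:
                b[row] -= w * samples[j]  # move known terms to the RHS
        row += 1
    x = solve_dense(A, b)
    out = list(samples)
    for i, k in idx.items():
        out[i] = x[k]
    return out
```

Because a linear signed-distance ramp has vanishing fourth differences, a hole cut into such a ramp is reconstructed exactly, which is the behaviour one wants from a distance-field extrapolant.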
In this paper, we present an inexpensive approach to create highly detailed reconstructions of the landscape surrounding a road. Our method is based on a space-efficient semi-procedural representation of the terrain and vegetation supporting high-quality real-time rendering not only for aerial views but also at road level. We can integrate photographs along selected road stretches. We merge the point clouds extracted from these photographs with a low-resolution digital terrain model through a novel algorithm which is robust against noise and missing data. We pre-compute plausible locations for trees through an algorithm which takes into account perceptual cues. At runtime we render the reconstructed terrain along with plants generated procedurally according to pre-computed parameters. Our rendering algorithm ensures visual consistency with aerial imagery and thus it can be integrated seamlessly with current virtual globes.
Chica, A.; Fairen, M.; Pelechano, N. Annual Conference of the European Association for Computer Graphics p. 65-72 DOI: 10.2312/conf/EG2012/education/065-072. Presentation date: 2012-05-17. Conference paper
Callieri, M.; Chica, A.; Dellepiane, M.; Besora, I.; Corsini, M.; Moyés, J.; Ranzuglia, G.; Scopigno, R.; Brunet, P. ACM Journal on Computing and Cultural Heritage Vol. 3, num. 4, p. 14:1-14:20 DOI: 10.1145/1957825.1957827. Publication date: 2011-04. Journal article
The dichotomy between full detail representation and the efficient management of data digitization is still a big issue in the context of the acquisition and visualization of 3D objects, especially in the field of Cultural Heritage. Modern scanning devices enable very detailed geometry to be acquired, but it is usually quite hard to apply these technologies to large artifacts. In this article we present a project aimed at virtually reconstructing the impressive (7 × 11 m.) portal of the Ripoll Monastery, Spain.
The monument was acquired using triangulation laser scanning technology, producing a dataset of 2212 range maps for a total of more than 1 billion triangles. All the steps of the entire project are described, from the acquisition planning to the final setup for dissemination to the public. We show how time-of-flight laser scanning data can be used to speed up the alignment process. In addition, we show how, after creating a model and repairing imperfections, an interactive and immersive setup enables the public to navigate and display a fully detailed representation of the portal. This article shows that, after careful planning and with the aid of state-of-the-art algorithms, it is now possible to preserve and visualize highly detailed information, even for very large surfaces.
In this paper, we present an efficient approach for the interactive rendering of large-scale urban models, which can be integrated seamlessly with virtual globe applications. Our scheme fills the gap between standard approaches for distant views of digital terrains and the polygonal models required for close-up views. Our work is oriented towards city models with real photographic textures of the building facades. At the heart of our approach is a multi-resolution tree of the scene defining multi-level relief impostors. Key ingredients of our approach include the pre-computation of a small set of zenithal and oblique relief maps that capture the geometry and appearance of the buildings inside each node, a rendering algorithm combining relief mapping with projective texture mapping which uses only a small subset of the pre-computed relief maps, and the use of wavelet compression to simulate two additional levels of the tree. Our scheme runs considerably faster than polygonal-based approaches while producing images with higher quality than competing relief-mapping techniques. We show both analytically and empirically that multi-level relief impostors are suitable for interactive navigation through large urban models.
This paper presents the project of the virtual reconstruction and inspection of the "Portalada", the entrance of the Ripoll Monastery. In a first step, the monument of 7 x 11 meters was acquired using triangulation laser scanning technology, producing a dataset of more than 2000 range maps for a total of more than one billion triangles. After alignment and registration, a nearly complete digital model with 173M triangles and a sampling density of the order of one millimeter was produced and repaired. The paper describes the model acquisition and construction, the use of specific scalable algorithms for model repair and simplification, and then focuses on the design of a hierarchical data structure for data management and view-dependent navigation of this huge dataset on a PC. Finally, the paper describes the setup for a usable, user-friendly and immersive system that induces a perception of presence in the visitors.
Besora, I.; Brunet, P.; Callieri, M.; Chica, A.; Corsini, M.; Dellepiane, M.; Morales, D.; Moyés, J.; Ranzuglia, G.; Scopigno, R. International Symposium on 3D Data Processing Visualization and Transmission p. 89-96. Presentation date: 2008. Conference paper
The dichotomy between detail representation and data management is still a big issue in the context of the acquisition and visualization of 3D objects, especially in the field of Cultural Heritage. New technologies give the possibility to acquire very detailed geometry, but very often it is very hard to process the amount of data produced. In this paper we present a project aimed at virtually reconstructing the impressive (7 x 11 m.) portal of the Ripoll Monastery, Spain. The monument was acquired using triangulation laser scanning technology, producing a dataset of more than 2000 range maps for a total of more than 1 billion triangles. All the steps of the entire project are described, from the acquisition planning to the final setup for dissemination to the public. In particular, we show how time-of-flight laser scanning data can be used to obtain a speed-up in the alignment process, and how, after model creation and imperfection repair, an interactive and immersive setup gives the public the possibility to navigate and visualize the highly detailed representation of the portal. This paper shows that, after careful planning and with the aid of new algorithms, it is now possible to preserve and visualize the highly detailed information provided by triangulation laser scanning, even for very large surfaces.
In this paper, we present a new visibility-based feature extraction algorithm for discrete models such as dense point clouds resulting from laser scans. Based on the observation that one can characterize local properties of the surface by what can be seen by an imaginary creature on the surface, we propose algorithms that extract features using an intermediate representation of the model as a discrete volume, for computational efficiency. We describe an efficient algorithm for computing the visibility map among voxels, based on the properties of a discrete erosion. The visibility information obtained in this first step is then used to extract the model components (faces, edges and vertices) —which may be curved— and to compute the topological connectivity graph in a very efficient and robust way. The results are discussed through several examples.
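The discrete-erosion ingredient can be illustrated on a binary voxel set: one erosion step removes exactly the boundary layer, the voxels on which a surface-based characterization operates. This is a generic morphological sketch, not the paper's visibility-map algorithm itself.

```python
def erode(voxels):
    """One step of 6-connected binary erosion on a set of (x, y, z) voxels.

    A voxel survives only if all six face neighbours are also filled.
    """
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return {v for v in voxels
            if all((v[0] + dx, v[1] + dy, v[2] + dz) in voxels
                   for dx, dy, dz in offsets)}

def boundary(voxels):
    """Boundary voxels: those removed by a single erosion step."""
    return voxels - erode(voxels)
```

For a solid 3x3x3 block, erosion leaves only the centre voxel and the remaining 26 voxels form the boundary shell, which matches the intuitive notion of the block's surface.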
We explore the automatic recovery of solids from their volumetric discretizations. In particular, we propose an approach, called Pressing, for smoothing isosurfaces extracted from binary volumes while recovering their large planar regions (flats). Pressing yields a surface that is guaranteed to contain the samples of the volume classified as interior and exclude those classified as exterior. It uses global optimization to identify flats and constrained bilaplacian smoothing to eliminate sharp features and high frequencies from the rest of the isosurface. It recovers sharp edges between flat regions and between flat and smooth regions. Hence, the resulting isosurface is usually a much more accurate approximation of the original solid than isosurfaces produced by previously proposed approaches. Furthermore, the segmentation of the isosurface into flat and curved faces and the sharp/smooth labelling of their edges may be valuable for shape recognition, simplification, compression, and various reverse engineering and manufacturing applications.
Andujar, C.; Brunet, P.; Chica, A.; Navazo, I.; Rossignac, J.; Vinacua, A. Computer-Aided Design Vol. 37, num. 8, p. 847-857 DOI: 10.1016/j.cad.2004.09.013. Publication date: 2005-07. Journal article
Since the publication of the original Marching Cubes algorithm, numerous variations have been proposed for guaranteeing water-tight constructions of triangulated approximations of isosurfaces. Most approaches divide the 3D space into cubes that each occupy the space between eight neighboring samples of a regular lattice. The portion of the isosurface inside a cube may be computed independently of what happens in the other cubes, provided that the constructions for each pair of neighboring cubes agree along their common face. The portion of the isosurface associated with a cube may consist of one or more connected components, which we call sheets. The topology and combinatorial complexity of the isosurface are influenced by three types of decisions made during its construction: (1) how to connect the four intersection points on each ambiguous face, (2) how to form interpolating sheets for cubes with more than one loop, and (3) how to triangulate each sheet. To determine topological properties, it is only relevant whether the samples are inside or outside the object, not their precise value, if there is one. Previously reported techniques make these decisions based on local —per cube— criteria, often using precomputed look-up tables or simple construction rules. Instead, we propose global strategies for optimizing several topological and combinatorial measures of the isosurfaces: triangle count, genus, and number of shells. We describe efficient implementations of these optimizations and the auxiliary data structures developed to support them.
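Decision (1) above concerns the ambiguous faces, whose four corner samples alternate inside/outside around the face. A small sketch of detecting them from the 8-bit inside/outside configuration of a cube, with corner `i` at `(x, y, z)` and bit index `x + 2y + 4z` (the face table and names are illustrative, not the paper's data structures):

```python
# Six cube faces, each as four corner indices in cyclic order around the face.
FACES = [(0, 1, 3, 2), (4, 5, 7, 6),   # z = 0, z = 1
         (0, 1, 5, 4), (2, 3, 7, 6),   # y = 0, y = 1
         (0, 2, 6, 4), (1, 3, 7, 5)]   # x = 0, x = 1

def ambiguous_faces(config):
    """Return the faces of an 8-bit Marching Cubes configuration whose
    corners alternate inside/outside around the face (the ambiguous faces).
    Bit i of `config` is set when corner i lies inside the object."""
    result = []
    for face in FACES:
        s = [(config >> c) & 1 for c in face]  # corner states in cyclic order
        # alternating pattern: equal diagonals, unequal neighbours
        if s[0] == s[2] and s[1] == s[3] and s[0] != s[1]:
            result.append(face)
    return result
```

For example, a cube whose only interior corners are the diagonal pair 0 and 3 of the bottom face has exactly one ambiguous face; it is on such faces that a local rule must commit to one of two possible contour connections, which is precisely where the global strategies above differ from per-cube look-up tables.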
Andujar, C.; Brunet, P.; Chica, A.; Navazo, I.; Rossignac, J.; Vinacua, A. Computer Graphics Forum Vol. 23, num. 3, p. 401-410 DOI: 10.1111/j.1467-8659.2004.00771.x. Publication date: 2004-08. Journal article
The computation of the largest planar region approximating a 3D object is an important problem with wide applications in modeling and rendering. Given a voxelization of the 3D object, we propose an efficient algorithm to solve a discrete version of this problem. The input of the algorithm is the set of grid edges connecting the interior and the exterior of the object (called sticks). Using a voting-based approach, we compute the plane that slices the largest number of sticks and is orientation-compatible with these sticks. The robustness and efficiency of our approach rest on the use of two different parameterizations of the planes with suitable properties. The first of these is exact and is used to retrieve precomputed local solutions of the problem. The second one is discrete and is used in a hierarchical voting scheme to compute the global maximum. This problem has diverse applications that range from finding object signatures to generating simplified models. Here we demonstrate the merits of the algorithm for efficiently computing an optimized set of textured impostors for a given polygonal model.
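A much-simplified illustration of the voting idea, restricted to axis-aligned candidate planes (the actual algorithm votes over arbitrary plane orientations using the two dedicated parameterizations mentioned above): each stick votes for the grid plane that slices it, and the plane accumulating the most votes wins.

```python
from collections import Counter

def best_axis_plane(sticks):
    """Vote among axis-aligned candidate planes for the one slicing the
    most sticks.

    A stick is (p, axis): the grid edge from integer point p to p plus the
    unit vector along `axis` (0, 1 or 2). The mid-plane at coordinate
    p[axis] + 0.5 along that axis slices it and is orientation-compatible.
    Illustrative toy version, not the paper's hierarchical voting scheme.
    """
    votes = Counter()
    for p, axis in sticks:
        votes[(axis, p[axis] + 0.5)] += 1
    (axis, offset), count = votes.most_common(1)[0]
    return axis, offset, count
```

With a 4x4 patch of z-directed sticks crossing z = 2.5 plus two stray x-directed sticks, the winning plane is the z = 2.5 one with 16 votes.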
Andujar, C.; Brunet, P.; Chica, A.; Navazo, I.; Rossignac, J.; Vinacua, A. Computer-Aided Design and Applications Vol. 1, num. 1-4, p. 503-511 DOI: 10.3722/cadaps.2004.503-511. Publication date: 2004-05. Journal article
Since the publication of the original Marching Cubes algorithm, numerous variations have been proposed for guaranteeing water-tight constructions of triangulated approximations of iso-surfaces. Most approaches divide the 3D space into cubes that each occupy the space between eight neighboring samples of a regular lattice. The portion of the iso-surface inside a cube may be computed independently of what happens in the other cubes, provided that the constructions for each pair of neighboring cubes agree along their common face. The portion of the iso-surface associated with a cube may consist of one or more connected components, which we call sheets. We distinguish three types of decisions in the construction of the iso-surface connectivity: (1) how to split the X-faces, which have alternating in/out samples, (2) how many sheets to use in a cube, and (3) how to triangulate each sheet. Previously reported techniques make these decisions based on local criteria, often using pre-computed look-up tables or simple construction rules. Instead, we propose global strategies for optimizing several topological and combinatorial measures of the iso-surfaces: triangle count, genus, and number of shells. We describe efficient implementations of these optimizations and the auxiliary data structures developed to support them.