Diaz, J.; Brunet, P.; Navazo, I.; Vazquez, P. International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing, p. 12-20. Presentation date: 2017-07. Conference paper presentation.
Volume visualization software usually has to deal with datasets larger than the GPU can hold. This is especially true in one of the most popular application scenarios: medical visualization. Medical datasets are typically available to different personnel, but only radiologists have high-end systems able to cope with large data; the rest of the physicians usually have only low-end systems. As a result, most volume rendering packages downsample the data before uploading it to the GPU. The most common approach consists of iteratively subsampling along the longest axis until the model fits in GPU memory. This causes an important loss of information that affects the final rendering. More careful techniques can be designed to preserve the volumetric information. In this paper we explore the quality of different downsampling methods and present a new approach that produces smooth lower-resolution representations while still preserving small features that are prone to disappear with other approaches.
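The iterative longest-axis subsampling that the abstract describes as the common baseline can be sketched as follows; the function name, the pairwise-averaging filter and the voxel-budget parameter are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

def fit_to_budget(volume, max_voxels):
    """Iteratively halve the volume along its longest axis, averaging
    adjacent slices, until it fits the given voxel budget. This is the
    common baseline the paper compares against, not its new filter."""
    vol = np.asarray(volume, dtype=np.float32)
    while vol.size > max_voxels:
        axis = int(np.argmax(vol.shape))
        n = (vol.shape[axis] // 2) * 2          # drop a trailing odd slice
        trim = [slice(None)] * vol.ndim
        trim[axis] = slice(0, n)
        vol = vol[tuple(trim)]
        even, odd = list(trim), list(trim)
        even[axis] = slice(0, n, 2)             # even-indexed slices
        odd[axis] = slice(1, n, 2)              # odd-indexed slices
        vol = 0.5 * (vol[tuple(even)] + vol[tuple(odd)])
    return vol
```

Because each step merges pairs of slices along a single axis, thin structures that span only one slice can vanish after a few iterations, which is the information loss the paper targets.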
Ceballos, V.; Monclús, E.; Vazquez, P.; Bendezú, Á.; Mego, M.; Merino, X.; Azpiroz, F.; Navazo, I. Annual EG/VGTC Conference on Visualization, p. 121-123. DOI: 10.2312/eurp.20171184. Presentation date: 2017-06-13. Conference paper presentation.
The analysis of the morphology and content of the gut is necessary to understand metabolic and functional gut activity and for diagnostic purposes. Magnetic resonance imaging (MRI) has become an important imaging modality since it is able to visualize soft tissues without ionizing radiation and without the need for contrast agents. In the last few years, MRI of gastrointestinal function has advanced substantially, although scarcely any publication has been devoted to the analysis of the colon content. This paper presents a semi-automatic segmentation tool for the quantitative assessment of the unprepared colon from MRI images. The application has enabled the analysis of colon content in several clinical experiments, and the results have contributed to a better understanding of how the colon functions under different diet conditions. The last experiment carried out by medical doctors showed a marked influence of diet on colonic content, accounting for about 30% of the volume variations.
The way in which gradients are computed in volume datasets influences both the quality of the shading and the performance of rendering algorithms. In particular, the visualization of coarse datasets in multi-resolution representations suffers when gradients are evaluated on the fly in the shader code by accessing neighbouring positions. This is not only a costly computation that compromises the performance of the visualization process, but it also produces low-quality gradients that do not resemble the originals as closely as desired, owing to the changed topology of the downsampled datasets. An obvious solution is to pre-compute the gradients and store them. Unfortunately, this gives rise to two problems. First, the downsampling process itself is prone to generate artifacts. Second, the limited bit depth of the storage causes the gradients to lose precision. To solve these issues, we propose a downsampling filter for pre-computed gradients that yields improved gradients that better match the originals, so that the aforementioned artifacts disappear. To address the storage problem, we present a method for the efficient storage of gradient directions that minimizes the maximum angle between any gradient direction and its closest representable vector within 3 bytes of storage. We also provide several examples that show the advantages of the proposed approaches.
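As a point of reference for the storage problem, a naive 3-byte gradient encoding (one byte per component; the helper names are hypothetical) can be sketched as below. The paper's scheme distributes representable directions more carefully than this per-component quantization, but the sketch shows how angular error arises from limited bit depth:

```python
import numpy as np

def encode_gradient(g):
    """Map each component of a unit gradient from [-1, 1] to one
    unsigned byte, giving a 3-byte representation."""
    g = np.asarray(g, dtype=np.float64)
    g = g / np.linalg.norm(g)
    return np.round((g + 1.0) * 0.5 * 255.0).astype(np.uint8)

def decode_gradient(b):
    """Recover a (re-normalized) unit vector from the 3 bytes."""
    g = np.asarray(b, dtype=np.float64) / 255.0 * 2.0 - 1.0
    return g / np.linalg.norm(g)

def angular_error_deg(g):
    """Angle (degrees) between a direction and its quantized version."""
    g = np.asarray(g, dtype=np.float64)
    g = g / np.linalg.norm(g)
    d = decode_gradient(encode_gradient(g))
    return np.degrees(np.arccos(np.clip(np.dot(g, d), -1.0, 1.0)))
```

Measuring `angular_error_deg` over many random directions gives the kind of worst-case angle that a better 3-byte layout, such as the one the paper proposes, seeks to reduce.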
Obtaining 3D realistic models of urban scenes from accurate range data is nowadays an important research topic, with applications in a variety of fields ranging from Cultural Heritage and digital 3D archiving to monitoring of public works. Processing massive point clouds acquired from laser scanners involves a number of challenges, from data management to noise removal, model compression and interactive visualization and inspection. In this paper, we present a new methodology for the reconstruction of 3D scenes from massive point clouds coming from range lidar sensors. Our proposal includes a panorama-based compact reconstruction where colors and normals are estimated robustly through an error-aware algorithm that takes into account the variance of expected errors in depth measurements. Our representation supports efficient, GPU-based visualization with advanced lighting effects. We discuss the proposed algorithms in a practical application on urban and historical preservation, described by a massive point cloud of 3.5 billion points. We show that we can achieve compression rates higher than 97% with good visual quality during interactive inspections.
Fairen, M.; Farrés, M.; Moyes, J.; Insa, E. Annual Conference of the European Association for Computer Graphics, p. 51-58. DOI: 10.2312/eged.20171026. Presentation date: 2017-04-26. Conference paper presentation.
Virtual Reality (VR) and Augmented Reality (AR) have been gradually introduced into school curricula given the benefits they bring to classical education. We present an experiment designed to expose students to a VR session where they can directly inspect 3D models of several human organs. Our systems allow the students to see the models visualized directly in 3D and to interact with them as if they were real. The experiment involved 254 students of a Nursing Degree, enrolled in the Human Anatomy and Physiology course over two consecutive academic years. It includes ten 3D models representing different anatomical structures, which have been enhanced with metadata to help the students understand each structure. To evaluate the students' satisfaction with this new teaching methodology, they were asked to fill in a questionnaire with two categories: the first measured whether the VR teaching session facilitates the understanding of the structures, and the second measured the students' satisfaction with the session. The results show that the highest-rated items are the usefulness of the activity as a learning tool and the fulfilment of the students' expectations. We can therefore conclude that the VR teaching session is a powerful learning tool that helps students understand the anatomical structures.
Agus, M.; Gobbetti, E.; Marton, F.; Pintore, G.; Vazquez, P. Annual Conference of the European Association for Computer Graphics, p. 1-5. DOI: 10.2312/egt.20171032. Presentation date: 2017-04-24. Conference paper presentation.
The increased availability and performance of mobile graphics terminals, including smartphones and tablets with high-resolution screens and powerful GPUs, combined with the increased availability of high-speed mobile data connections, is opening the door to a variety of networked graphics applications. In this world, native apps and mobile sites coexist to provide access to a wealth of multimedia information while we are on the move. This half-day tutorial provides a technical introduction to the mobile graphics world spanning the hardware-software spectrum, and explores the state of the art and key advances in specific application domains, including capture and acquisition, real-time high-quality 3D rendering, and interactive exploration.
Argudo, O.; Besora, I.; Brunet, P.; Creus, C.; Hermosilla, P.; Navazo, I.; Vinacua, A. Computer-Aided Design, Vol. 79, p. 48-59. DOI: 10.1016/j.cad.2016.06.005. Publication date: 2016-10-01. Journal article.
The use of virtual prototypes and digital models containing thousands of individual objects is commonplace in complex industrial applications like the cooperative design of huge ships. Designers are interested in selecting and editing specific sets of objects during interactive inspection sessions. This is, however, not supported by standard visualization systems for huge models. In this paper we discuss in detail the concept of the rendering front in multiresolution trees, its properties, and the algorithms that construct the hierarchy and efficiently render it, applied to very complex CAD models so that the model structure and the identities of objects are preserved. We also propose an algorithm for the interactive inspection of huge models which uses a rendering budget and supports selection of individual objects and sets of objects, displacement of the selected objects, and real-time collision detection during these displacements. Our solution, based on the analysis of several existing view-dependent visualization schemes, uses a Hybrid Multiresolution Tree that mixes layers of exact geometry, simplified models and impostors, together with a time-critical, view-dependent algorithm and a Constrained Front. The algorithm has been successfully tested in real industrial environments; the models involved are presented and discussed in the paper.
The final publication is available at Elsevier via http://dx.doi.org/10.1016/j.cad.2016.06.005
Medical datasets are continuously increasing in size. Although larger models may be available for certain research purposes, in common clinical practice models are usually up to 512x512x2000 voxels. These resolutions exceed the capabilities of the conventional GPUs usually found in medical doctors' desktop PCs. Commercial solutions typically reduce the data by downsampling the dataset iteratively until it fits the target specifications. This data loss reduces visualization quality, and it is not commonly compensated by other measures that might alleviate its effects. In this paper, we propose adaptive transfer functions, an algorithm that adapts the transfer function to downsampled multiresolution models so that rendering quality is greatly improved. The technique is simple and lightweight, and it is suitable not only for visualizing huge models that would not fit in a GPU, but also for rendering not-so-large models on mobile GPUs, which are less capable than their desktop counterparts. Moreover, it can also be used to accelerate rendering frame rates by using lower levels of the multiresolution hierarchy while still maintaining high-quality results in a focus-and-context approach. We also present an evaluation of these results based on perceptual metrics.
The final publication is available at Springer via http://dx.doi.org/10.1007/s00371-016-1253-9
All-atom simulations are crucial in biotechnology. In pharmacology, for example, molecular knowledge of protein-drug interactions is essential to understand certain pathologies and to develop improved drugs. To achieve this detailed information, fast and enhanced molecular visualization is critical. Moreover, hardware and software developments quickly deliver extensive data, providing intermediate results that scientists can analyze in order to interact with the simulation process and steer it towards a more promising configuration. In this paper we present a GPU-friendly data structure for real-time illustrative visualization of all-atom simulations. Our system generates both ambient occlusion and halos using an occupancy pyramid that needs no precalculation and is updated on the fly during simulation, allowing real-time rendering of simulation results at sustained high frame rates.
Skanberg, R.; Vazquez, P.; Guallar, V.; Ropinski, T. IEEE Transactions on Visualization and Computer Graphics, Vol. 22, num. 1, p. 718-727. DOI: 10.1109/TVCG.2015.2467293. Publication date: 2016-01-01. Journal article.
Today, molecular simulations produce complex data sets capturing the interactions of molecules in detail. Due to the complexity of this time-varying data, advanced visualization techniques are required to support its visual analysis. Current molecular visualization techniques utilize ambient occlusion as a global illumination approximation to improve spatial comprehension. Besides these shadow-like effects, interreflections are also known to improve the spatial comprehension of complex geometric structures. Unfortunately, the inherent computational complexity of interreflections would preclude interactive exploration, which is mandatory in many scenarios dealing with static and time-varying data. In this paper, we introduce a novel analytic approach for capturing interreflections of molecular structures in real time. By exploiting the knowledge of the underlying space-filling representations, we are able to reduce the required parameters and can thus apply symbolic regression to obtain an analytic expression for interreflections. We show how to obtain the data required for the symbolic regression analysis, and how to exploit our analytic solution to enhance interactive molecular visualizations.
Simulation of crowd behavior has been approached through many different methodologies, but the problem of mimicking human decisions and reactions remains a challenge for all of them. We propose an alternative model for the simulation of pedestrian movements using Reinforcement Learning. Following the approach of microscopic models, we train an agent to move towards a goal while avoiding obstacles. Once an agent has learned, its knowledge is transferred to the rest of the members of the group by sharing the resulting Q-table. This individual behavior leads to emergent group behavior. We present a framework with states, actions and reward functions general enough to adapt easily to different environment configurations.
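The train-once, share-the-table scheme described above can be sketched with tabular Q-learning on a toy 1-D corridor; the environment, reward values and function names are illustrative assumptions, far simpler than the paper's pedestrian setting:

```python
import random

def train_q_table(n_cells=10, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a toy 1-D corridor: the agent starts at
    cell 0 and is rewarded for reaching the rightmost cell (the goal).
    Actions: 0 = step left, 1 = step right."""
    q = [[0.0, 0.0] for _ in range(n_cells)]
    goal = n_cells - 1
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = int(q[s][1] >= q[s][0])
            s2 = max(0, min(goal, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == goal else -0.01    # small per-step cost
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

def greedy_path(q):
    """Follow the learned greedy policy from the start cell; the same
    table can be shared by every agent in the group."""
    goal = len(q) - 1
    s, path = 0, [0]
    while s != goal and len(path) <= 2 * len(q):
        s = max(0, min(goal, s + (1 if q[s][1] >= q[s][0] else -1)))
        path.append(s)
    return path
```

After `train_q_table` runs once, the returned table plays the role of the shared Q-table: every other agent can evaluate the same greedy policy on it without retraining, which is the knowledge-transfer step the abstract describes.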