In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting performance and realism of crowds such as lighting, shadowing, clothing and variability. Finally we provide an exhaustive comparison of the most relevant approaches in the field.
Recent advances in 3D scanning technologies have opened new possibilities in a broad range of applications including cultural heritage, medicine, civil engineering, and urban planning. Virtual Reality systems can provide new tools to professionals who want to understand acquired 3D models. In this review paper, we analyze the concept of data comprehension with an emphasis on visualization and inspection tools on immersive setups. We claim that in most application fields, data comprehension requires model measurements, which in turn should be based on the explicit visualization of uncertainty. As 3D digital representations are not faithful, information on their fidelity at the local level should be included in the model itself as uncertainty bounds. We propose the concept of Measurable 3D Models as digital models that explicitly encode such local uncertainty bounds. We claim that professionals and experts can strongly benefit from immersive interaction through new specific, fidelity-aware measurement tools, which can facilitate 3D data comprehension. Since noise and processing errors are ubiquitous in acquired datasets, we discuss the estimation, representation, and visualization of data uncertainty. We show that, based on typical user requirements in Cultural Heritage and other domains, application-oriented measuring tools in 3D models must consider uncertainty and local error bounds. We also discuss the requirements of immersive interaction tools for the comprehension of huge 3D and nD datasets acquired from real objects.
Izquierdo, M.; Beacco, A.; Pelechano, N.; Andujar, C. Annual Conference of the European Association for Computer Graphics, p. 1-2. DOI: 10.2312/egp.20151031. Presentation date: 2015-05-07. Presentation of work at congresses.
Per-joint impostors have been used to achieve high performance when rendering thousands of agents while still allowing animation blending. This provides interactively animated crowds and reduces the memory footprint compared to classic impostors. In this poster we exploit the potential of per-joint impostors to further increase both visual quality and performance. The CAVAST framework for crowd simulation and rendering has been used to quantitatively evaluate our improvements with the profiling tools that it provides. Since different applications will have different requirements in terms of performance vs. visual quality, we have extended CAVAST with a new user interface to ease this process.
This paper is about the major peculiarities and difficulties we encounter when trying to validate research results in fields such as virtual reality (VR) and 3D user interfaces (3DUI). We review the steps in the empirical method and discuss a number of challenges when conducting human-subject experiments. These challenges include the number of independent variables to control to get useful findings, the within-subjects or between-subjects dilemma, hard-to-collect data, experimenter effects, ethical issues, and the lack of background in the community for proper statistical analysis and interpretation of the results. We show that experiments involving human subjects hinder the adoption of traditional experimental principles (comparison, repeatability, reproducibility, justification and explanation) and propose some ideas to improve the reliability of findings in the VR and 3DUI disciplines.
In this paper, we present an inexpensive approach to create highly detailed reconstructions of the landscape surrounding a road. Our method is based on a space-efficient semi-procedural representation of the terrain and vegetation supporting high-quality real-time rendering not only for aerial views but also at road level. We can integrate photographs along selected road stretches. We merge the point clouds extracted from these photographs with a low-resolution digital terrain model through a novel algorithm which is robust against noise and missing data. We pre-compute plausible locations for trees through an algorithm which takes into account perceptual cues. At runtime we render the reconstructed terrain along with plants generated procedurally according to pre-computed parameters. Our rendering algorithm ensures visual consistency with aerial imagery and thus it can be integrated seamlessly with current virtual globes.
In this paper, a novel four-wall, passive-stereo, multi-projector CAVE architecture is presented. It is powered by 40 (possibly different) off-the-shelf DLP projectors controlled by 12 PCs. We have achieved high resolution while significantly reducing the overall cost, resulting in a high-brightness, 2000 x 2000 pixel resolution on each of the 4 walls. The AdaptiveCave VR system has increased versatility both in terms of projectors and screen architecture. First, the system works with any mix of a wide range of projector models, which can be substituted, one by one, at any moment for more modern or cheaper ones. Second, the self-calibration software, which guarantees a uniform final image with concordance and continuity, can be adapted to many other wall and screen configurations. The AdaptiveCave project includes the set-up and all related software components: geometric and chromatic calibration, simultaneous rendering on 40 projected viewports, synchronization and interaction. The interaction is based on a cableless, Kinect-based gesture interface with natural interaction paradigms.
This talk is about the major peculiarities and difficulties we encounter when trying to validate research results in fields such as virtual reality (VR) and 3D user interfaces (3DUI). We review the steps in the empirical method and discuss a number of challenges when conducting human-subject experiments. These challenges include the number of independent variables to control to get useful findings, the within-subjects or between-subjects dilemma, hard-to-collect data, experimenter effects, ethical issues, and the lack of background in the community for proper statistical analysis and interpretation of the results. We show that experiments involving human subjects hinder the adoption of traditional experimental principles (comparison, repeatability, reproducibility, justification and explanation) and propose some ideas to improve the reliability of findings in the VR and 3DUI disciplines.
Real-time rendering of cities with realistic global illumination is still an open problem. In this paper we propose a two-step algorithm to simulate the nocturnal illumination of a city. The first step computes an approximate aerial solution using simple textured quads for each street light. The second step uses photon mapping to locally compute the global illumination coming from light sources close to the viewer. Then, we transfer the local, high-quality solution to the low-resolution buffers used for aerial views, refining it with accurate information from the local simulation. Our approach achieves real-time frame rates on commodity hardware.
High-quality texture minification techniques, including trilinear and anisotropic filtering, require texture data to be arranged into a collection of pre-filtered texture maps called mipmaps. In this paper, we present a compression scheme for mipmapped textures which achieves much higher quality than current native schemes by exploiting image coherence across mipmap levels. The basic idea is to use a high-quality native compressed format for the upper levels of the mipmap pyramid (to retain efficient minification filtering) together with a novel compact representation of the detail provided by the highest-resolution mipmap. Key elements of our approach include delta-encoding of the luminance signal, efficient encoding of coherent regions through texel runs following a Hilbert scan, a scheme for run encoding supporting fast random-access, and a predictive approach for encoding indices of variable-length blocks. We show that our scheme clearly outperforms native 6:1 compressed texture formats in terms of image quality while still providing real-time rendering of trilinearly filtered textures.
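The cross-level delta-encoding idea above can be illustrated with a minimal sketch. The predictor here is a simple nearest-neighbour upsample of the next-coarser mip level; this is an illustrative assumption (the actual scheme keeps the upper pyramid in a native compressed format and uses a more elaborate encoding of the residual), and all function names are hypothetical:

```python
import numpy as np

def downsample(img):
    """Box-filter 2x2 downsample (one mipmap level)."""
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def predict_from_mip(mip):
    """Predict the finer level by nearest-neighbour upsampling the coarse mip."""
    return np.repeat(np.repeat(mip, 2, axis=0), 2, axis=1)

def delta_encode(level0):
    """Keep the coarser mip plus a residual for the finest level only."""
    mip1 = downsample(level0)
    residual = level0 - predict_from_mip(mip1)   # low-entropy detail signal
    return mip1, residual

def delta_decode(mip1, residual):
    """Reconstruct the finest level from the coarse mip and the residual."""
    return predict_from_mip(mip1) + residual

lum = np.random.rand(8, 8)
mip1, res = delta_encode(lum)
assert np.allclose(delta_decode(mip1, res), lum)  # lossless round trip
```

Because the residual of a coherent image is concentrated around zero, it compresses far better than the raw finest level, which is what the run-based encoding described in the abstract exploits.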
In this paper, we present a new impostor-based representation for 3D animated characters supporting real-time rendering of thousands of agents. We maximize rendering performance by using a collection of pre-computed impostors sampled from a discrete set of view directions. Our approach differs from previous work on view-dependent impostors in that we use per-joint rather than per-character impostors. Our characters are animated by applying the joint rotations directly to the impostors, instead of choosing a single impostor for the whole character from a set of pre-defined poses. This offers more flexibility in terms of animation clips, as our representation supports any arbitrary pose, and thus the agent behavior is not constrained to a small collection of pre-defined clips. Because our impostors are intended to be valid for any pose, a key issue is to define a proper boundary for each impostor to minimize image artifacts while animating the agents. We pose this problem as a variational optimization problem and provide an efficient algorithm for computing a discrete solution as a pre-process. To the best of our knowledge, this is the first time a crowd rendering algorithm encompassing image-based performance, small graphics processing unit footprint, and animation independence has been proposed.
In this paper we present the ViRVIG Institute, a recently created institution that joins two well-known research groups: MOVING in Barcelona, and GGG in Girona. Our
main research topics are Virtual Reality devices and interaction techniques, complex data models, realistic materials and lighting, geometry processing, and medical image visualization. We briefly introduce the history of both research groups and present some representative projects. Finally, we sketch our lines of future research.
The contributions of this thesis fall within the area of human-computer interaction in virtual reality environments, specifically the improvement of three-dimensional object selection tasks. Object selection is a fundamental task in virtual environment interaction, as it determines the course of the user's actions on the virtual environment. This thesis analyzes the factors that determine the efficiency of selection tasks and proposes improvements to their usability, increasing both user productivity and comfort. Being a critical task, its improvement is a priority.
Selection tasks in a three-dimensional environment require interaction through gestures, for example grabbing an object with the hands or pointing at it with a finger. The effectiveness of a selection task depends on multiple factors, such as the user's skill or the spatial perception of the environment. The thesis focuses on studying the limitations of selection tasks and on how existing human-computer interaction models allow them to be reformulated to improve their efficiency. Throughout the thesis we propose several selection techniques based on Fitts's law, and we analyze how to avoid the dependence on spatial perception. The theoretical analyses are complemented with usability studies evaluating the effectiveness of both the proposed and the existing solutions.
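Several of the techniques in the thesis build on Fitts's law, which predicts the time needed to acquire a target from its distance and size. A minimal sketch of the Shannon formulation follows; the constants a and b are device- and user-dependent and purely illustrative here:

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Fitts's law (Shannon formulation): MT = a + b * log2(D/W + 1).

    a, b     -- empirically fitted constants (intercept in seconds, slope in s/bit)
    distance -- distance from the cursor to the target centre
    width    -- target size along the axis of approach
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Enlarging the target lowers the index of difficulty and the predicted
# selection time -- the rationale behind target-expansion selection aids.
t_small = fitts_movement_time(0.1, 0.2, distance=32, width=2)
t_large = fitts_movement_time(0.1, 0.2, distance=32, width=4)
assert t_large < t_small
```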
Although most of the contributions focus on the selection of three-dimensional objects, we also study the selection and manipulation of 2D graphical user interfaces in virtual environments, as well as the improvement of information transfer between users in collaborative environments. In this kind of environment, selection is commonly used to point out particular objects or features of the environment to other users. However, not being able to see the objects other users are interacting with hinders the exchange of information between them. We study the tools used for occlusion management and how they improve the exchange of information between users.
Limitations of current 3D acquisition technology often lead to polygonal meshes exhibiting a number of geometrical and topological defects which prevent their widespread use. In this paper we present
a new method for model repair which takes as input an arbitrary polygonal mesh and outputs a valid 2-manifold triangle mesh. Unlike previous work, our method allows users to quickly identify areas with potential topological errors and to choose how to fix them in a user-friendly manner. Key steps of our algorithm include the conversion of the input model into a set of voxels, the use of morphological operators to allow the user to modify the topology of the discrete model, and the conversion of the
corrected voxel set back into a 2-manifold triangle mesh. Our experiments demonstrate that the proposed algorithm is suitable for repairing meshes of a large class of shapes.
Argelaguet, F.; Kulik, A.; Kunert, A.; Andujar, C.; Froehlich, B. International Journal of Human-Computer Studies, Vol. 69, num. 6, p. 387-400. DOI: 10.1016/j.ijhcs.2011.01.003. Date of publication: 2011. Journal article.
Multi-user virtual reality systems enable natural collaboration in shared virtual worlds. Users can talk to each other, gesture and point into the virtual scenery as if it were real. As in reality, referring to objects by pointing often results in situations where objects are occluded from the other users' viewpoints. While in reality this problem can only be solved by adapting the viewing position, specialized individual views of the shared virtual scene enable various other solutions. As one such solution we propose show-through techniques to ensure that the objects one is pointing to can always be seen by others. We first study the impact of such augmented viewing techniques on the spatial understanding of the scene, the rapidity of mutual information exchange, and the proxemic behavior of users. To this end we conducted a user study in a co-located stereoscopic multi-user setup. Our study revealed advantages for show-through techniques in terms of comfort, user acceptance and compliance with social protocols, while spatial understanding and mutual information exchange are retained. Motivated by these results, we further analyzed whether show-through techniques may also be beneficial in distributed virtual environments. We investigated a distributed setup for two users, each participant having their own display screen and a minimalist avatar representation of each participant. In such a configuration there is a lack of mutual awareness, which hinders the understanding of each other's pointing gestures and decreases the relevance of social protocols in terms of proxemic behavior. Nevertheless, we found that show-through techniques can improve collaborative interaction tasks even in such situations.
Trueba, R.; Andujar, C.; Argelaguet, F. International Journal of Creative Interfaces and Computer Graphics, Vol. 1, num. 2, p. 1-14. DOI: 10.4018/jcicg.2010070101. Date of publication: 2010-12. Journal article.
Object occlusion is a major handicap for efficient interaction with 3D virtual environments. The well-known World in Miniature (WIM) metaphor partially solves this problem by providing an additional dynamic viewpoint through a hand-held miniature copy of the scene. However, letting the miniature show a replica of the whole scene makes the WIM metaphor suitable only for relatively simple scenes, due to occlusion and level-of-scale issues. In this paper, the authors propose several algorithms to extend the idea behind the WIM to arbitrarily complex scenes. The main idea is to automatically decompose indoor scenes into a collection of cells that define potential extents of the miniature replica. This cell decomposition works well for general indoor scenes and allows for simple and efficient algorithms for preserving the visibility of potential targets inside the cell. The authors also discuss how to support interaction at multiple levels of scale by allowing the user to select the WIM size according to the accuracy required for accomplishing the task.
In this paper, we present an efficient approach for the interactive rendering of large-scale urban models, which can be integrated seamlessly with virtual globe applications. Our scheme fills the gap between standard approaches for distant views of digital terrains and the polygonal models required for close-up views. Our work is oriented towards city models with real photographic textures of the building facades. At the heart of our approach is a
multi-resolution tree of the scene defining multi-level relief impostors. Key ingredients of our approach include the
pre-computation of a small set of zenithal and oblique relief maps that capture the geometry and appearance of the buildings inside each node, a rendering algorithm combining relief mapping with projective texture mapping which uses only a small subset of the pre-computed relief maps, and the use of wavelet compression to simulate
two additional levels of the tree. Our scheme runs considerably faster than polygonal-based approaches while producing images with higher quality than competing relief-mapping techniques. We show both analytically and empirically that multi-level relief impostors are suitable for interactive navigation through large urban models.
Rendering detailed animated characters is a major limiting factor in crowd simulation. In this paper we present a new representation for 3D animated characters which supports output-sensitive rendering. Each character is encoded through a small collection of textured boxes storing color and depth values. At runtime, each box is animated according to the rigid transformation of its associated bone. A fragment shader is used to recover the original geometry using an adapted version of relief mapping. Unlike competing output-sensitive approaches, our compact representation is able to recover high-frequency surface details and reproduces view-motion parallax. Furthermore, our approach requires us neither to predefine the animation sequences nor to select a subset of discrete views. Our user study demonstrates that our approach allows for many more simulated agents with negligible visual artifacts.
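The geometry-recovery step rests on a ray/height-field search in texture space. The sketch below shows the core linear search on the CPU for clarity; the paper's version runs in a fragment shader, real implementations typically refine the hit with a binary search, and the function names and parameters here are illustrative assumptions:

```python
import numpy as np

def relief_march(height, entry_uv, exit_uv, steps=64):
    """Linear-search ray/height-field intersection, the core of relief mapping.

    height   -- 2D array of depths in [0, 1] (0 = box face, 1 = deepest point)
    entry_uv -- (u, v, depth) where the view ray enters the box
    exit_uv  -- (u, v, depth) where the view ray leaves the box
    Returns the first sample where the ray dips below the stored surface,
    or None if the ray misses the surface entirely.
    """
    entry = np.asarray(entry_uv, dtype=float)
    exit_ = np.asarray(exit_uv, dtype=float)
    h, w = height.shape
    for i in range(steps + 1):
        p = entry + (exit_ - entry) * (i / steps)   # march along the ray
        x = min(int(p[0] * (w - 1)), w - 1)         # nearest-texel lookup
        y = min(int(p[1] * (h - 1)), h - 1)
        if p[2] >= height[y, x]:                    # ray went below the surface
            return p
    return None
```

For a flat height field at depth 0.5, a ray entering at depth 0 and leaving at depth 1 reports a hit at depth 0.5, which is the expected intersection.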
In this paper we present an algorithm for parameterizing arbitrary surfaces onto a quadrilateral domain defined by a collection of cubic cells. The parameterization inside each cell is implicit and thus requires storing no texture coordinates. Based upon this parameterization, we propose a unified representation of the geometric and appearance information of complex models. The representation consists of a set of cubic cells (providing a coarse representation of the object) together with a collection of distance maps (encoding fine geometric detail inside each cell). Our new representation has similar uses to geometry images, but it requires storing a single distance value per texel instead of full vertex coordinates. When combined with color and normal maps, our representation can be used to render an approximation of the model through an output-sensitive relief mapping algorithm, thus being especially amenable to GPU ray tracing.
Raster-based topographic maps are commonly used in geoinformation systems to overlay geographic entities on top of digital terrain models. Using compressed texture formats for encoding topographic maps allows reducing latency times while visualizing large geographic datasets. Topographic maps encompass high-frequency content with large uniform regions, making current compressed texture formats inappropriate for encoding them. In this paper we present a method for locally-adaptive compression of topographic maps. Key elements include a Hilbert scan to maximize spatial coherence, efficient encoding of homogeneous image regions through arbitrarily-sized texel runs, a cumulative run-length encoding supporting fast random-access, and a compression algorithm supporting lossless and lossy compression. Our scheme can be easily implemented on current programmable graphics
hardware allowing real-time GPU decompression and rendering of bilinear-filtered topographic maps.
Predefined camera paths are a valuable tool for the exploration of complex virtual environments. The speed at which the virtual camera travels along different path segments is key for allowing users to perceive and understand the scene while maintaining their attention. Current tools for adjusting the speed of camera motion along predefined paths, such as keyframing, interpolation types and speed curve editors, provide animators with a great deal of flexibility but offer little support for deciding which speed is best for each point along the path. In this paper we address the problem of computing a suitable speed curve for a predefined camera path through an arbitrary scene. We strive to adapt the speed along the path to produce non-fatiguing, informative, interesting and concise animations. Key elements of our approach include a new metric based on optical flow for quantifying the amount of change between two consecutive frames, the use of perceptual metrics to disregard optical flow in areas with low image saliency, and the incorporation of habituation metrics to keep the user's attention. We also present the results of a preliminary user study comparing user response with alternative approaches for computing speed curves.
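The mapping from a per-frame change metric to a speed curve can be sketched as follows. This is a hypothetical simplification in which the speed is chosen to keep the perceived change roughly constant; all constants and the function name are illustrative assumptions, and the actual method additionally weights the flow by saliency and habituation metrics:

```python
def speed_from_flow(flow_magnitudes, v_min=0.5, v_max=4.0, target_flow=10.0):
    """Map a per-sample optical-flow magnitude along a camera path to a speed
    curve: the more the image changes per unit of travelled distance, the
    slower the camera should move (all constants here are illustrative).
    """
    speeds = []
    for flow in flow_magnitudes:
        if flow <= 0.0:
            v = v_max                 # nothing changes on screen: go fast
        else:
            v = target_flow / flow    # keep perceived change roughly constant
        speeds.append(min(v_max, max(v_min, v)))  # clamp to comfortable range
    return speeds

# High-flow samples slow the camera down, static views speed it up.
assert speed_from_flow([0.0, 20.0]) == [4.0, 0.5]
```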
Current schemes for texture compression fail to exploit spatial coherence in an adaptive manner due to the strict efficiency constraints imposed by GPU-based, fragment-level decompression. In this paper we present a texture compression framework for quasi-lossless, locally-adaptive compression of graphics data. Key elements include a Hilbert scan to maximize spatial coherence, efficient encoding of homogeneous image regions through arbitrarily-sized texel runs, a cumulative run-length encoding supporting fast random access, and a compression algorithm suitable for fixed-rate and variable-rate encoding. Our scheme can be easily integrated into the rasterization pipeline of current programmable graphics hardware, allowing real-time GPU decompression. We show that our scheme clearly outperforms competing approaches such as S3TC DXT1 on a large class of images with some degree of spatial coherence. Unlike other proprietary formats, our scheme is suitable for compression of any graphics data including color maps, shadow maps and relief maps. We have observed compression rates of up to 12:1, with minimal or no loss in visual quality and a small impact on rendering time.
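The cumulative run-length encoding admits a compact sketch. For simplicity the input is assumed to be already linearised into a 1D stream (e.g. along a Hilbert scan, as the abstract describes); storing cumulative run ends rather than run lengths is what enables O(log n) random access via binary search:

```python
import bisect

def rle_encode(texels):
    """Run-length encode a linearised texel stream.

    Returns parallel lists: the value of each run and the *cumulative* end
    position of each run, which is what enables fast random access.
    """
    values, ends = [], []
    for t in texels:
        if values and values[-1] == t:
            ends[-1] += 1                         # extend the current run
        else:
            values.append(t)                      # start a new run
            ends.append((ends[-1] if ends else 0) + 1)
    return values, ends

def rle_lookup(values, ends, i):
    """O(log n) random access: binary-search the cumulative run ends."""
    return values[bisect.bisect_right(ends, i)]

stream = [7, 7, 7, 7, 3, 3, 9]
vals, ends = rle_encode(stream)
assert (vals, ends) == ([7, 3, 9], [4, 6, 7])
assert [rle_lookup(vals, ends, i) for i in range(7)] == stream
```

A fragment shader can perform the same binary search over the cumulative-ends table, which is why this layout is compatible with fragment-level decompression.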
The World in Miniature (WIM) metaphor allows users to interact and travel efficiently in virtual environments. In addition to the first-person perspective offered by typical VR applications, the WIM offers a second dynamic viewpoint through a hand-held miniature copy of the environment. In the original WIM paper the miniature was a scaled-down replica of the whole scene, thus limiting its application to simple models being manipulated at a single level of scale. Several WIM extensions have been proposed where the replica shows only a part of the environment. In this paper we present a new approach to handle complexity and occlusion in the WIM. We discuss algorithms for selecting the region of the scene which will be covered by the miniature copy and for handling occlusion from an exocentric viewpoint. We also present the results of a user study showing that our technique can greatly improve user performance on spatial tasks in densely-occluded scenes.
The World in Miniature Metaphor (WIM) allows users to select, manipulate and navigate efficiently in virtual environments. In addition to the first-person perspective offered by typical VR applications, the WIM offers a second dynamic viewpoint through a hand-held miniature copy of the environment. In this paper we explore different strategies to allow the user to interact with the miniature replica at multiple levels of scale. Unlike competing approaches, we support complex indoor environments by explicitly handling occlusion. We discuss algorithms for
selecting the part of the scene to be included in the replica, and for providing a clear view of the region of interest.
Key elements of our approach include an algorithm to recompute the active region from a subdivision of the scene into cells, and a view-dependent algorithm to cull-away occluding geometry through a small set of slicing planes roughly oriented along the main occluding surfaces. We present the results of a user-study showing that our technique clearly outperforms competing approaches on spatial tasks performed in densely-occluded scenes.
The act of pointing to graphical elements is one of the fundamental tasks in Human-Computer Interaction. In this paper we analyze visual feedback techniques for accurate pointing on stereoscopic displays.
Visual feedback techniques must provide precise information about the pointing tool and its spatial relationship with potential
targets. We show both analytically and empirically that current approaches provide poor feedback on stereoscopic displays, resulting in low user performance when accurate pointing is required. We propose a new feedback technique following a camera viewfinder metaphor. The key idea is to locally flatten the scene objects around
Andujar, C.; Diaz, J.; Brunet, P. IEEE Virtual Reality Workshop on Virtual Cityscapes: Key Research Issues in Modeling Large-Scale Immersive Urban Environments. Presentation date: 2008-03-08. Presentation of work at congresses.
Image-based rendering techniques are often the preferred choice to accelerate the exploration of massive outdoor models and complex human-made structures. In the last few years, relief mapping has been shown to be extremely useful as a compact representation of highly-detailed 3D models. In this paper we describe a rendering system for interactive, high-quality visualization of large-scale urban models through a hierarchical collection of properly-oriented relief-mapped polygons. At the heart of our approach is a visibility-aware algorithm for the selection of the set of viewing planes supporting the relief maps. Our selection algorithm optimizes both the sampling density and the coverage of the relief maps, and its running time is mostly independent of the underlying geometry. We show that our approach is suitable for navigating through large-scale urban models at interactive rates while preserving both geometric and
In this paper we explore the extension of 2D pointing facilitation techniques to 3D object selection. We discuss what problems must be faced when adapting such techniques to 3D interaction in VR applications,
and we propose two strategies to adapt the expanding targets approach to the 3D realm, either by dynamically scaling potential targets or by using depth-sorting to guarantee that potential targets appear
completely unoccluded. We also present three experiments to evaluate both strategies in 3D selection tasks with multiple targets at varying densities. Our user studies show promising results of 3D expanding targets
in terms of error rates and, most importantly, user acceptance.
The World in Miniature (WIM) metaphor allows users to interact and travel efficiently in virtual environments. In addition to the first-person perspective offered by typical VR applications, the WIM offers a second dynamic viewpoint through a hand-held miniature copy of the virtual environment. In the original WIM paper the miniature was a scaled down replica of the whole environment, thus limiting the technique to simple models being manipulated at a single level of scale. Several WIM extensions have been proposed where the replica shows only a part of the virtual environment. In this paper we present an improved visualization of WIM that supports arbitrarily-complex, densely-occluded scenes. In particular, we discuss algorithms for selecting the region of the virtual environment which will be covered by the miniature copy and efficient
algorithms for handling 3D occlusion from an exocentric viewpoint.
Relief impostors have been proposed as a compact and high-quality representation for high-frequency detail in 3D models. In this paper we propose an algorithm to represent a complex object through the combination of a
reduced set of relief maps. These relief maps can be rendered with very few artifacts and no apparent deformation from any view direction. We present an efficient algorithm to optimize the set of viewing planes supporting the relief maps, and an image-space metric to select a sufficient subset of relief maps for each view direction. Selected maps (typically three) are rendered based on the well-known ray-height-field intersection algorithm implemented on the GPU. We discuss several strategies to merge overlapping relief maps while minimizing sampling artifacts and to reduce extra texture requirements. We show that our representation can maintain the geometry and the silhouette of a large class of complex shapes with no limit in the viewing direction. Since the rendering cost is
output sensitive, our representation can be used to build a hierarchical model of a 3D scene.