A haptic rendering procedure for the surface relief detail of large models, relying on an image-based Hybrid Rugosity Mesostructure Atlas (HyRMA) shell, is presented, allowing real-time, accurate perception of meshes with millions of triangles. The procedure generalizes a haptic rendering acceleration method based on rugosity mesostructures, extended for synchronized visualization of relief detail.
Fairen, M.; Pelechano, N. Annual Conference of the European Association for Computer Graphics, pp. 9-10. DOI: 10.2312/conf/EG2013/education/009-010. Presented: 2013-05-09. Presentation of work at congresses.
This paper presents our first experience teaching WebGL in a master’s degree for a class of students with very different backgrounds. The main challenge was to prepare a course that would be engaging for students with computer graphics experience, and yet interesting and non-frustrating for those students unfamiliar with OpenGL.
In this paper we explain how we prepared this course, and the project assignment to achieve our goal. The results achieved by the students show that the course succeeded in keeping different kinds of students engaged and excited with the implementation of their final project.
Theoktisto, V.; Fairen, M.; Navazo, I. International Conference on Computer Graphics Theory and Applications and International Conference on Information Visualization Theory and Applications, pp. 191-196. Presented: 2013-02.
A fast global approach that encodes haptic surface relief detail using an image-based Hybrid Rugosity Mesostructure Atlas (HyRMA) shell is presented. It is based on a depth/normal texture computed from surface differences of the same mesh object at different resolutions (a dense one with thousands or millions of triangles, and a highly decimated version). Per-face local depth differences are warped from volume space into tangent space and stored in a sorted relief atlas. Next, the atlas is sampled by a vertex/fragment shader pair and unwarped, displacing the pixels at each face of the decimated mesh to render the original mesh detail with far fewer triangles. We achieve accurate correspondence between the visualization of surface detail and the perception of its fine features without compromising rendering framerates, with some loss of detail at mesostructure "holes".
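The per-face warp into tangent space described above can be sketched as a change of basis into the face's TBN (tangent, bitangent, normal) frame; the frame vectors and displacement below are illustrative values, not data from the paper:

```python
import numpy as np

def to_tangent_space(d, t, b, n):
    """Warp an object-space displacement d into the face's tangent frame
    (t, b, n), as done per face before storing samples in the relief atlas."""
    tbn = np.stack([t, b, n])          # rows: tangent, bitangent, normal
    return tbn @ d

# hypothetical orthonormal face frame and a displacement from the
# decimated surface to the dense surface
t = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
d = np.array([0.02, -0.01, 0.15])      # mostly along the normal
depth = to_tangent_space(d, t, b, n)[2]  # z component = stored depth
```

The inverse warp (unwarping in the shader) is the transpose of the same orthonormal frame.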
Chica, A.; Fairen, M.; Pelechano, N. Annual Conference of the European Association for Computer Graphics, pp. 65-72. DOI: 10.2312/conf/EG2012/education/065-072. Presented: 2012-05-17.
The haptic rendering of surface mesostructure (fine relief features) in dense triangle meshes requires special structures, equipment, and high sampling rates for detailed perception of rugged models. Low-cost approaches render haptic texture at the expense of fidelity of perception. We propose a faster method for surface haptic rendering using image-based Hybrid Rugosity Mesostructures (HRMs), paired maps with per-face heightfield displacements and normal maps, which are layered on top of a much-decimated mesh, effectively adding greater surface detail than is actually present in the geometry. The haptic probe's force-response algorithm is modulated using the blended HRM coat to render dense surface features at much lower cost. The proposed method solves typical problems at edge crossings, concave foldings and texture transitions. To validate the approach, a usability testbed framework was built to measure and compare experimental results of haptic rendering approaches on a common set of specially devised meshes, HRMs, and performance tests. User-testing evaluations show the effectiveness of the proposed HRM technique, which renders accurate 3D surface detail at high sampling rates, and yield useful modeling and perception thresholds for the technique.
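The force-response modulation described in this abstract can be illustrated with a minimal penalty-force sketch; the stiffness constant, function names and the way the heightfield offsets the contact surface are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def hrm_force(probe_pos, surface_pos, face_normal, height, normal_map_n, k=400.0):
    """Penalty force at the haptic probe, modulated by the HRM coat:
    the per-face heightfield raises or lowers the contact surface and
    the normal map perturbs the response direction. Illustrative sketch."""
    # signed penetration against the displaced (relief) surface
    pen = np.dot(surface_pos - probe_pos, face_normal) + height
    if pen <= 0.0:
        return np.zeros(3)             # probe not in contact
    n = normal_map_n / np.linalg.norm(normal_map_n)
    return k * pen * n                 # spring-like response along perturbed normal

# probe hovering just above the base surface, but a relief bump (height
# 0.03) raises the contact surface so a force is felt
f = hrm_force(np.array([0.0, 0.0, 0.01]),
              np.zeros(3),
              np.array([0.0, 0.0, 1.0]),
              height=0.03,
              normal_map_n=np.array([0.1, 0.0, 1.0]))
```

The same probe position against a flat patch (height 0) would produce no force, which is how the heightfield alone makes fine relief perceivable on a decimated mesh.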
We propose a faster method for surface haptic rendering using image-based Hybrid Rugosity Mesostructures (HRMs), paired maps with per-face heightfield displacements and normal maps, which are layered on top of a much-decimated mesh. The haptic probe's force-response algorithm is modulated using the blended HRM coat to render surface features at much lower cost. The proposed method solves typical problems at edge crossings, concave foldings and texture transitions. To validate the approach, a usability testbed framework was built to measure and compare experimental results of haptic rendering approaches. User-testing evaluations show the effectiveness of the proposed HRM technique, which renders accurate 3D surface detail at high sampling rates, and yield useful modeling and perception thresholds for the technique.
Relief impostors have been proposed as a compact and high-quality representation for high-frequency detail in 3D models. In this paper we propose an algorithm to represent a complex object through the combination of a
reduced set of relief maps. These relief maps can be rendered with very few artifacts and no apparent deformation from any view direction. We present an efficient algorithm to optimize the set of viewing planes supporting the relief maps, and an image-space metric to select a sufficient subset of relief maps for each view direction. Selected maps (typically three) are rendered based on the well-known ray-height-field intersection algorithm implemented on the GPU. We discuss several strategies to merge overlapping relief maps while minimizing sampling artifacts and to reduce extra texture requirements. We show that our representation can maintain the geometry and the silhouette of a large class of complex shapes with no restriction on the viewing direction. Since the rendering cost is output sensitive, our representation can be used to build a hierarchical model of a 3D scene.
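The ray-height-field intersection the selected maps rely on is commonly implemented as a linear search followed by binary refinement; a minimal CPU sketch (the GPU version runs per fragment, with the depth slab in texture space) might look like:

```python
import numpy as np

def ray_heightfield_hit(h, enter, exit_, steps=64, refine=16):
    """Linear search along the view ray through the relief slab, then
    binary refinement of the crossing -- the standard relief-mapping
    scheme. h(u, v) returns depth in [0, 1]; enter/exit_ are (u, v,
    depth) points where the ray enters and leaves the slab."""
    p0, p1 = np.asarray(enter, float), np.asarray(exit_, float)
    prev = p0
    for i in range(1, steps + 1):
        p = p0 + (p1 - p0) * (i / steps)
        if p[2] >= h(p[0], p[1]):      # ray went below the heightfield
            lo, hi = prev, p
            for _ in range(refine):    # binary search for the crossing
                mid = 0.5 * (lo + hi)
                if mid[2] >= h(mid[0], mid[1]):
                    hi = mid
                else:
                    lo = mid
            return hi
        prev = p
    return None                        # ray misses the relief

# flat heightfield at depth 0.5; a straight-down ray hits it there
hit = ray_heightfield_hit(lambda u, v: 0.5, (0.5, 0.5, 0.0), (0.5, 0.5, 1.0))
```

The linear pass bounds the first crossing; the binary pass then tightens it, which is what keeps silhouettes and grazing angles artifact-free at modest step counts.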
In this paper we present a new technique for view-dependent LOD rendering, where the scene is represented through an octree model from which we can obtain a triangle mesh corresponding to a view-dependent LOD. We present the construction of this octree model and the visualization algorithm that generates on-the-fly a closed and valid triangle mesh for each frame of the visualization. This visualization algorithm is a depth-first traversal that also allows triangles to be reused from one frame to the next.
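A view-dependent traversal of this kind can be sketched as a depth-first descent that stops at nodes whose view-dependent error is tolerable; the error metric and class layout below are simplified assumptions (a real implementation would also stitch node boundaries to keep the extracted mesh closed, as the paper's algorithm does):

```python
class Node:
    def __init__(self, error, children=None):
        self.error = error             # geometric error of this node's mesh
        self.children = children or []

def select_lod(node, distance, tau=1.0, out=None):
    """Depth-first traversal emitting the coarsest nodes whose
    view-dependent error (error / distance) is below tolerance tau."""
    if out is None:
        out = []
    if node.error / distance <= tau or not node.children:
        out.append(node)               # coarse enough (or a leaf): render it
    else:
        for c in node.children:        # refine: descend into children
            select_lod(c, distance, tau, out)
    return out

root = Node(8.0, [Node(2.0), Node(0.5, [Node(0.1), Node(0.1)])])
near = select_lod(root, distance=1.0)    # close viewer: deep refinement
far = select_lod(root, distance=100.0)   # distant viewer: root alone
```

Because the cut through the octree changes only gradually between frames, the triangles emitted for unchanged nodes can be reused directly, matching the frame-to-frame reuse the abstract mentions.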
In many application areas, it is useful to convert the discrete information stored in the nodes of a regular grid into a continuous boundary model. Isosurface extraction algorithms differ in how the discrete information in the grid is generated, in what information the grid stores, and in the properties of the output surface.
In this paper we present a new approach for fast development of application-control User Interfaces (UIs) for Virtual Environments (VEs). Our approach allows developers to build sophisticated UIs containing both simple widgets (such as windows, buttons, menus and sliders) and advanced widgets (such as hierarchical views and web browsers) with minimum effort. Rather than providing a new API for defining and managing the interface components, we propose
to extend current 2D toolkits such as Qt so that its full range of widgets can be displayed and manipulated either as 2D shapes on the desktop or as textured 3D objects within the virtual world. This
approach allows 3D UI developers to take advantage of the increasing number of components, layout managers and graphical design tools provided by 2D UI toolkits. Resulting programs can run on platforms ranging from fully immersive systems to generic desktop workstations with little or no modification. The design of the system and the key features required on the host UI toolkit are presented and discussed. A prototype system has been implemented
on top of Qt and evaluated on a 4-sided CAVE. The results indicate that this approach provides an efficient and cost-effective way to port and develop application-control GUIs on VEs, and thus can greatly enhance the possibilities of many VE applications.
We derive a complete component framework for transforming standalone virtual reality (VR) applications into full-fledged multithreaded collaborative virtual reality environments (CVREs), after characterizing existing implementations into a feature-rich superset. Our main contribution is a very concise and extensible class framework placed over the existing VR tool as an add-on component that provides the emerging collaboration features. The enhancements include: a scalable arbitrated peer-to-peer topology for scene sharing; multi-threaded components for graphics rendering, user interaction and network communications; a streaming message protocol for client communications; a collaborative user interface model for session handling; and interchangeable user roles with multicamera perspectives, avatar awareness and shared 3D annotations. We validate the framework by converting the existing ALICE VR Navigator into a complete CVRE, with experimental results showing good performance in the collaborative inspection and manipulation of complex models.
Rendering haptic textures seamlessly out of triangle meshes using geometry alone requires heavy work and does not allow high sampling rates for detailed, rugged models. Better approaches simulate surface texture without greatly increasing the complexity or processing cost, at the expense of fidelity of perception. We propose a method for rendering local height-field maps out of an underlying triangle mesh, which relies on a space-subdivision representation based on octrees for collision detection, and renders individual surface detail by modulating the force response using local height fields. We compare our method against a force-mapping implementation for rendering/perceiving the same models with textures using normal maps. The proposed technique allows real-time perception of 3D surface detail, letting the user perceive the best haptic rendering alternative for a given model. Some experimental results are presented to show the effectiveness of the approach.
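The octree-based collision query underlying such a method can be sketched as a point-location descent to the cell containing the haptic probe; the node layout below (a leaf payload, an empty cell, or a list of eight children ordered by x/y/z octant bits) is a simplified assumption, not the paper's exact structure:

```python
def locate(node, p, lo, hi):
    """Descend an octree to the leaf cell containing probe point p;
    the leaf's stored surface data then drives collision response."""
    if node is None or not isinstance(node, list):
        return node, lo, hi            # empty cell or leaf payload
    mid = tuple((l + h) / 2 for l, h in zip(lo, hi))
    idx, nlo, nhi = 0, list(lo), list(hi)
    for axis in range(3):
        if p[axis] >= mid[axis]:       # pick the octant on each axis
            idx |= 1 << axis
            nlo[axis] = mid[axis]
        else:
            nhi[axis] = mid[axis]
    return locate(node[idx], p, tuple(nlo), tuple(nhi))

# one subdivision: only the (+x, +y, +z) octant holds surface geometry
root = [None] * 8
root[7] = "surface-patch"
leaf, lo, hi = locate(root, (0.9, 0.8, 0.7), (0, 0, 0), (1, 1, 1))
```

Each query costs O(depth), which is what makes kilohertz-rate haptic sampling feasible on dense meshes.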
The exploration of complex walkthrough models is often a difficult task due to the presence of densely occluded regions which pose a serious challenge to online navigation. In this paper we address the problem of algorithmic generation of exploration paths for complex walkthrough models. We present a characterization of suitable properties for camera paths and we discuss an efficient algorithm for computing them with little or no user intervention.
Our approach is based on identifying the free-space structure of the scene (represented by a cell and portal graph)
and an entropy-based measure of the relevance of a viewpoint. This metric is key for deciding which cells have to be visited and for computing critical waypoints inside each cell. Several results on different model categories are presented and discussed.
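An entropy-based viewpoint measure of this kind is typically the Shannon entropy of the relative projected areas of the visible faces; a minimal sketch, with illustrative inputs:

```python
import math

def viewpoint_entropy(projected_areas):
    """Shannon entropy of the relative projected areas of the visible
    faces: high when many faces are seen evenly, low when one face
    dominates the view. A sketch of the kind of relevance measure used
    to rank viewpoints."""
    total = sum(projected_areas)
    rel = [a / total for a in projected_areas if a > 0]
    return -sum(p * math.log2(p) for p in rel)

balanced = viewpoint_entropy([1.0, 1.0, 1.0, 1.0])    # even view: max entropy
skewed = viewpoint_entropy([3.97, 0.01, 0.01, 0.01])  # one face dominates
```

Ranking candidate viewpoints by this score is what lets the path planner decide which cells are worth visiting and where the informative waypoints inside each cell lie.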
This paper describes a new virtual reality system designed to be small enough to be totally portable. It is a semi-immersive
interaction system based on a movable stereoscopic projection screen with a tracking system to capture the movement of the screen with respect to the virtual model. The system can be used for cooperative inspection of very complex computer-aided design environments, allowing very simple interaction among users and designers at distant locations.
A portable projection-based virtual reality system.
A small projection system to be used with a computer. It consists of a translucent screen (5) with two handles at its sides (6), and a mirror (4) placed in front of the screen that frames onto it the stereoscopic images produced by the projectors (3), which sit on a base joining screen and mirror into a single movable structure. The user (1) can grab the structure by the screen handles (6) and move it, automatically changing the position and orientation from which the virtual environment projected on the screen is observed. The system is easily dismantled so that it can be stored in a suitcase, making it portable.
Versions with passive stereo and with active stereo have been designed, as well as one with a single projector.
This paper describes an affordable Virtual Reality system designed and developed by a group of researchers at the Polytechnic University of Catalonia (UPC). The system allows direct selection and manipulation of
virtual 3D objects. The interaction is based on stereoscopic images projected over the user’s working space and on devices tracking the user’s natural movements. The system includes a screen being adjustable both in orientation and height, sensors tracking the head and hand movements, and a tactile device for the forefinger providing touch sense. A prototype of the system is currently exhibited at the Virtual Reality
Center of Barcelona and it is being used in different application fields like architecture, medicine and industrial design.
This thesis presents the design and implementation of a software development platform (ATLAS) that offers tools and methods to greatly simplify the construction of fairly sophisticated applications. It thus allows programmers to include advanced features in their applications with little or no extra information and effort. These features include: the splitting of the application into distinct processes that may be distributed over a network; a powerful configuration and scripting language; several tools, including an input system for easily constructing reasonable interfaces; a flexible journaling mechanism offering fault tolerance against crashes of processes or communications; and other features designed for graphics applications, such as global data identification (addressing the problem of volatile references and supporting constraint-solving processes) and a uniform but flexible view of inputs allowing many different dialogue modes. These can be seen as related to, or overlapping with, CORBA or other systems such as Horus or Arjuna, but none of them addresses simultaneously all the aspects included in ATLAS; more specifically, none of them offers a standardized input model, a configuration and macro language, a journaling mechanism, or support for constraint solving and parametric design. The contributions of ATLAS lie in showing how all these requirements can be addressed together, and in showing how this can be attained with little or no performance cost and without requiring developers to master all these techniques. Finally, the design of the ATLAS journaling system is, to our knowledge, original in solving all of its requirements simultaneously.
We discuss the design and implementation of a software development platform that allows unsophisticated programmers to include advanced features in their applications with little or no extra information and effort. These features include the splitting of the application into distinct processes that may be distributed over a network, a powerful configuration and scripting language, and several tools, including an input system for easily constructing reasonable interfaces. We describe both the techniques used to achieve transparency for the programmer and what exactly a user must do to build new ATLAS modules.
ATLAS, a platform for developing distributed applications by splitting them into several collaborating processes scattered across a local area network, is presented. Although general-purpose, it has features especially designed to support graphics applications. We present its architecture and some aspects of its implementation, and discuss design criteria.