Demo sessions

Demo 1 "The SMART-I² : A window open through audio-visual spatialized spaces"

[Figure: The SMART-I² platform]

The SMART-I² platform (Spatial Multi-user Audio-visual Real-Time Interactive Interface) immerses you, during this demo session, in a virtual, spatialized audio-visual world. The 3D visual rendering uses passive tracked stereoscopy; the 3D sound rendering is achieved through Wave Field Synthesis (WFS). These two technologies are combined to create a virtual but tangible extension of the real physical world. This demo illustrates the benefits of advanced spatial sound rendering techniques in conjunction with 3D video. WFS makes it possible to create virtual sound scenes in which the position of sound sources is accurate for a large number of users, even for sources located in front of the screen. Simple virtual scenes let users compare classical stereophonic techniques with WFS, showing improvements in the sense of immersion and in the intelligibility of virtual scenes.
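To give an idea of how WFS places a virtual source, here is a minimal sketch of the classical delay-and-gain driving of a linear loudspeaker array. The array geometry, spacing, and source position are illustrative assumptions, not the SMART-I² implementation:

```python
import numpy as np

C = 343.0  # speed of sound in air (m/s)

def wfs_point_source(speaker_pos, source, amp=1.0):
    """Per-speaker delays (s) and gains for a virtual point source
    behind a linear loudspeaker array (2.5D WFS approximation).
    Focused sources in front of the array use time-reversed delays."""
    # Distance from the virtual source to each loudspeaker
    r = np.linalg.norm(speaker_pos - source, axis=1)
    delays = r / C             # farther speakers fire later
    gains = amp / np.sqrt(r)   # ~1/sqrt(r) amplitude decay (2.5D)
    return delays - delays.min(), gains

# 16 loudspeakers spaced 15 cm apart along the x axis (y = 0)
speakers = np.column_stack([np.arange(16) * 0.15, np.zeros(16)])
# Virtual source 1 m behind the array, roughly centred
delays, gains = wfs_point_source(speakers, np.array([1.2, -1.0]))
```

Feeding each loudspeaker the same signal, delayed and scaled this way, reconstructs the wavefront a real source at that position would produce, which is why the perceived position holds over a wide listening area.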

Demo 2 "Collaborative (co-localised) multimodal immersion for automotive industry"

Collaborative interaction with Virtual Reality technologies may be remote and/or co-localised; both are useful depending on user needs. With the EVE system, one of the VENISE team's research topics is to compare immersive solutions for remote as well as co-localised collaborative interaction.

In this demo we present work in progress on a software platform that manages multimodal, co-localised multi-user interaction for immersive collaboration. Each user has an exact visual depth perception of the scene. This original immersive feature is based on a BARCO hardware solution combining active and passive technologies, and the co-located immersion is managed in this demo by the 3DVIA Virtools platform from Dassault Systèmes. Intuitive interaction is possible thanks to a multimodal system which combines, in real time, tracking information, haptic events, speech, and gesture commands. In addition, co-localisation allows natural dialogue between the users, and the haptic device conveys the physical constraints of the scene as well as a set of virtual guides that the users may define. The scenario is a two-user collaborative task for training or design activities in the automotive industry.
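As an illustration of how such modalities might be merged, here is a minimal sketch of a time-window fusion loop: events from the different device threads are grouped when they arrive close together, so a spoken command can be paired with its accompanying gesture. The event fields, window length, and queueing scheme are illustrative assumptions, not the actual platform:

```python
import queue, time
from dataclasses import dataclass

@dataclass
class Event:
    modality: str   # "tracking", "haptic", "speech", or "gesture"
    payload: object
    stamp: float

events = queue.Queue()  # each device thread pushes Event objects here

def fuse(window=0.5):
    """Yield groups of events that arrive within `window` seconds
    of each other, ready for joint interpretation."""
    group, deadline = [], None
    while True:
        try:
            ev = events.get(timeout=0.05)
        except queue.Empty:
            ev = None
        now = time.monotonic()
        if ev is not None:
            group.append(ev)
            deadline = deadline or now + window
        if deadline and now >= deadline:
            yield group          # hand the fused group to the interpreter
            group, deadline = [], None

# Usage: for group in fuse(): interpret(group)
```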

In the virtual scene of an assembly line, users cooperate to define the trajectory of a seat which must be installed in the cockpit of a car. For instance, an apprentice selects a seat and manipulates it using vocal commands and the haptic device to analyse possible trajectory paths. At the same time, a tutor constrains the task of the first user by defining virtual guides (i.e. axes and/or planes) with vocal and two-handed gesture commands. The software platform may also support fully collaborative interaction for a design task. In that case, the system manages the switching between one user and the other to reverse their roles, by detecting which one is holding the haptic device. In addition, the user who is interacting with this device may constrain the haptic rendering by defining additional virtual guides with vocal commands and gestures of the free hand.
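Geometrically, an axis or plane guide can be enforced by projecting the haptic probe position onto the guide before forces are rendered. A minimal sketch, with illustrative coordinates rather than the platform's actual API:

```python
import numpy as np

def constrain_to_axis(p, origin, direction):
    """Project a haptic probe position onto an axis guide."""
    d = direction / np.linalg.norm(direction)
    return origin + np.dot(p - origin, d) * d

def constrain_to_plane(p, origin, normal):
    """Project a haptic probe position onto a plane guide."""
    n = normal / np.linalg.norm(normal)
    return p - np.dot(p - origin, n) * n

# Example: keep the seat on a vertical sliding axis defined by the tutor
pos = np.array([0.30, 0.10, 0.85])
constrained = constrain_to_axis(pos, origin=np.zeros(3),
                                direction=np.array([0.0, 0.0, 1.0]))
```

The restoring force fed to the haptic device is then proportional to the gap between the free position and its projection, which is what makes the guide feel like a physical rail or surface.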

Demo 3 "Remote driving: a test case for telepresence"

[Figure: Remote vision in SACARI]
[Figure: The stereo camera sensing the remote environment for the SACARI experiment]

Augmented Reality is the process by which virtual content is superimposed on the user's field of perception, so as to provide useful information for the task at hand. Augmented Virtuality is the reverse process, which adds real content, captured remotely, into a Virtual Reality simulation.

We will demonstrate this with the SACARI application. SACARI stands for Supervision of an Autonomous Car in an Augmented viRtuality Interface. In the demo, a user will drive a remote electric wheelchair using real-time video and audio inputs rendered inside the EVE system.
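Teleoperation of this kind typically streams driving commands to the vehicle at a fixed rate while video and audio flow back the other way. The following sketch shows only the command side; the endpoint address, message format, and rate are purely illustrative assumptions, not SACARI's actual protocol:

```python
import json, socket, time

# Hypothetical endpoint for the wheelchair's command receiver
WHEELCHAIR_ADDR = ("192.168.0.42", 9000)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_command(linear, angular):
    """Send one (speed, turn-rate) command; the vehicle is expected
    to stop if commands cease, so they are streamed continuously."""
    msg = json.dumps({"v": linear, "w": angular, "t": time.time()})
    sock.sendto(msg.encode(), WHEELCHAIR_ADDR)

while True:
    # In the demo, linear/angular would come from the tracked
    # user's input inside the EVE system.
    send_command(linear=0.5, angular=0.0)
    time.sleep(0.05)  # 20 Hz command stream
```

Streaming commands rather than sending them on change is a common safety choice in remote driving: a broken link silences the stream, and the vehicle can brake on its own.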