Visual Immersive Mathematics

In the Visual Immersive Mathematics group, the geometry, visual appearance and physical properties of virtual objects are defined mathematically. Visual and haptic rendering are used together to immerse users in images as if they were actual 3D scenes. New ways of user-computer interaction are designed and studied.

Function-based Shape Modelling

We define the geometry, appearance and tangible physical properties of virtual objects using mathematical functions and procedures.

Tangible Images and Haptic Video Interaction

We add a haptic modality to visual rendering and to video interaction across the internet.

New User-Computer Interactions

We propose new ways of user-computer interaction which combine visual, audio, haptic and free-hand modalities.


Cyberworlds

We are working on virtual shared spaces and communities that augment the way we interact, participate in business and receive information throughout the world.

Laboratory of Immersive Mathematics@NTU

Also visit our Immersive Mathematics research lab website at NTU.

Function-based Shape Modelling

We use mathematical definitions and procedural representations to define the geometry, visual appearance and physical properties of virtual objects. We work with FReps, algebraic surfaces, implicit surfaces, CSG solids, volumetric objects, as well as parametric curves, surfaces and solids. We have developed FVRML/FX3D, a function-based extension of the Virtual Reality Modeling Language (VRML) and Extensible 3D (X3D) that allows mathematical formulae and function scripts to be typed directly into the code of these languages, defining practically any type of geometry, appearance and physical properties, which can then be explored with haptic devices. We are also developing various interactive modelling tools for different applications, including medical simulations, free-form shape modelling, and computer science education.
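As a minimal illustration of the function representation (FRep) idea, a shape can be the set of points where a defining function f(x, y, z) ≥ 0, and CSG operations become min/max of defining functions. This Python sketch uses illustrative names and is not the group's actual FVRML/FX3D code:

```python
# A minimal FRep sketch: a shape is the point set where f(x, y, z) >= 0.
# Function and variable names here are illustrative assumptions.

def sphere(cx, cy, cz, r):
    """FRep of a sphere: positive inside, zero on the surface, negative outside."""
    return lambda x, y, z: r**2 - ((x - cx)**2 + (y - cy)**2 + (z - cz)**2)

def union(f, g):
    """Set-theoretic union as the max of the defining functions."""
    return lambda x, y, z: max(f(x, y, z), g(x, y, z))

def intersection(f, g):
    """Set-theoretic intersection as the min of the defining functions."""
    return lambda x, y, z: min(f(x, y, z), g(x, y, z))

# Two overlapping unit spheres combined with CSG operations.
a = sphere(0.0, 0.0, 0.0, 1.0)
b = sphere(1.0, 0.0, 0.0, 1.0)
blob = union(a, b)
lens = intersection(a, b)

print(blob(0.5, 0.0, 0.0) >= 0)   # True: inside the union
print(lens(0.5, 0.0, 0.0) >= 0)   # True: inside the overlap
print(lens(-0.5, 0.0, 0.0) >= 0)  # False: inside a only, not the overlap
```

The same point-membership test drives polygonization for visual rendering and surface-contact queries for haptic rendering.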

Tangible Images and Haptic Video Interaction

We add a haptic interaction modality to visual rendering and to common video communication across the internet. Haptic forces are either retrieved from images or video, or efficiently exchanged as asynchronous data streams across the internet. We research how to do this efficiently and without modifying existing video communication applications. We devise methods supporting haptic interactions performed with common desktop haptic devices and video cameras, as well as with Microsoft Kinect, the Leap Motion controller and wearable devices. We are working on applying the proposed methods to medical simulations and electronic shopping applications.
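One common way to retrieve a haptic force from an image (a sketch under assumption, not necessarily the group's method) is to treat pixel intensity as a height field and push the haptic proxy along the negative intensity gradient, so bright regions feel like bumps:

```python
# Sketch: derive a 2D haptic force from a greyscale image by treating
# intensity as height and using a central-difference gradient.
# The function name and parameters are illustrative assumptions.

def image_force(image, x, y, stiffness=1.0):
    """Return a force (fx, fy) at integer pixel (x, y), opposing the gradient."""
    h, w = len(image), len(image[0])
    # Clamp neighbour indices at the image border.
    gx = (image[y][min(x + 1, w - 1)] - image[y][max(x - 1, 0)]) / 2.0
    gy = (image[min(y + 1, h - 1)][x] - image[max(y - 1, 0)][x]) / 2.0
    return (-stiffness * gx, -stiffness * gy)

# 3x3 image with a single bright centre pixel: the force at a neighbouring
# pixel points away from the bright "bump".
img = [[0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0],
       [0.0, 0.0, 0.0]]

fx, fy = image_force(img, 0, 1)
print(fx, fy)  # fx is negative: the proxy is pushed away from the centre
```

For video, the same computation is re-run per frame; streaming only the force data (rather than modified video) is what lets unmodified video applications be used alongside it.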

New User-Computer Interactions

We add new modalities to common interactive shape modelling and virtual prototyping, allowing visual, audio, haptic and free-hand interactions to be combined in virtual environments and simulations. Hand motion is captured in different ways, ranging from various desktop haptic devices to optical motion-capture cameras, so that it becomes possible to start modelling with one type of device and then continue with another type of hand-tracking device. We design and implement a set of robust and efficient interactive hand gestures suitable for various engineering designs (virtual prototyping) and crafts (freeform shape modelling). We simulate existing hand-made assembling and modelling processes so that the required motor skills can be both trained and applied in the simulations. We also work on new ways of evaluating the quality of the interaction experience.
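Switching between hand-tracking devices mid-session suggests a device-agnostic input layer. The sketch below shows one way this could be structured; the class names and stub coordinates are illustrative assumptions, not the group's actual API:

```python
# Sketch of a device-agnostic hand-tracking layer: the modelling session
# depends only on an abstract interface, so devices can be swapped mid-session.

from abc import ABC, abstractmethod

class HandTracker(ABC):
    """Common interface every input-device adapter implements."""

    @abstractmethod
    def hand_position(self):
        """Return the current hand/tool-tip position as (x, y, z) in metres."""

class HapticStylusTracker(HandTracker):
    """Adapter for a desktop haptic device (stub coordinates for illustration)."""
    def hand_position(self):
        return (0.10, 0.02, 0.05)

class OpticalTracker(HandTracker):
    """Adapter for an optical motion-capture camera (stub coordinates)."""
    def hand_position(self):
        return (0.11, 0.02, 0.05)

class ModellingSession:
    """Modelling code sees only HandTracker, so devices are hot-swappable."""
    def __init__(self, tracker):
        self.tracker = tracker
    def switch_device(self, tracker):
        self.tracker = tracker
    def cursor(self):
        return self.tracker.hand_position()

session = ModellingSession(HapticStylusTracker())
print(session.cursor())            # position from the haptic stylus
session.switch_device(OpticalTracker())
print(session.cursor())            # same session, now driven by the camera
```

Keeping gesture recognition above this layer means the same gesture set works regardless of which device produced the hand positions.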

Cyberworlds

Created intentionally or spontaneously, cyberworlds are information spaces and communities that immensely augment the way we interact, participate in business and receive information throughout the world. Cyberworlds seriously impact our lives and the evolution of the world economy by taking such forms as social networking services, 3D shared virtual communities and massively multiplayer online role-playing games. We work on such virtual shared spaces and also coordinate the annual International Conferences on Cyberworlds.