Abstract
We describe a methodology that lets virtual reality designers capture and resynthesize the variations in the sounds objects make when we interact with them through contact, such as touch. The timbre of contact sounds can vary greatly, depending both on the listener's location relative to the object and on the interaction point on the object itself. We believe that accurately rendering this variation greatly enhances the feeling of immersion in a simulation. To capture it, we model the variation with an efficient algorithm based on modal synthesis. The model contains a vector field defined on the product space of contact locations and listening positions around the object. The modal data are sampled on this high-dimensional space using an automated measuring platform. We present a parameter-fitting algorithm that recovers the modal parameters from a large set of sound recordings made around objects and creates a continuous timbre field by interpolation. The model is then rendered in a real-time simulation with integrated haptic, graphic, and audio display. We describe our experience with an implementation of this system and an informal evaluation of the results.
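To make the approach concrete, here is a minimal sketch in Python of the two ideas the abstract names: modal synthesis (a sum of exponentially damped sinusoids) and a continuous timbre field obtained by interpolating modal gains between measured configurations. All modal parameters and sample points below are made up for illustration, and the contact/listening-position space is collapsed to 2D for brevity; this is not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

SR = 44100  # sample rate in Hz

def render_modes(freqs, dampings, gains, duration=1.0):
    """Core of modal synthesis: a sum of exponentially damped sinusoids."""
    t = np.arange(int(duration * SR)) / SR
    y = np.zeros_like(t)
    for f, d, a in zip(freqs, dampings, gains):
        y += a * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    return y

# Hypothetical measurements: modal gains sampled at a few points of the
# product space of contact and listening positions, flattened here to 2D.
# The paper's timbre field lives on a higher-dimensional space.
sample_points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sample_gains = np.array([[1.0, 0.2], [0.4, 0.9], [0.8, 0.5], [0.3, 1.0]])

# Continuous timbre field: interpolate gains between measured samples.
field = LinearNDInterpolator(sample_points, sample_gains)

# Query the field at an unmeasured configuration and synthesize the sound.
gains = field(np.array([[0.5, 0.25]]))[0]
y = render_modes(freqs=[220.0, 880.0], dampings=[6.0, 20.0], gains=gains)
```

In this sketch only the gains vary with position, which is the simplest reading of a per-mode timbre field; frequencies and dampings are treated as properties of the object alone.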
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 643-654 |
| Number of pages | 12 |
| Journal | Presence: Teleoperators and Virtual Environments |
| Volume | 16 |
| Issue number | 6 |
| DOIs | |
| State | Published - Dec 2007 |
| Externally published | Yes |
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Human-Computer Interaction
- Computer Vision and Pattern Recognition