Dear ECE undergrad and grad students,
The talk today is on 3D audio and should be very interesting.
John G. Harris, Professor and Chair
Department of Electrical and Computer Engineering
216 Larsen Hall, P.O. Box 116200
University of Florida, Gainesville, FL 32611-6200
ECE Seminar
Dr. Kyla McMullen
January 29, 2015 1:00 pm
234 Larsen Hall
Title: Spatial Audio: Rendering, Perception, and Applications
Dr. Kyla McMullen
University of Florida
ABSTRACT:
Audio is an often-neglected portion of virtual environment research. The
addition of audio to virtual systems enhances our overall experience.
Furthermore, increasing the audio quality is computationally less
expensive than upgrading the visual quality of a scene. The use of
spatial (or 3D) audio gives the user the perception of hearing sounds in
a virtual environment in their actual spatial locations. Additionally,
spatial audio can be used in systems with low visual quality as a method
to enhance the feeling of immersion. Currently, the state of the art in
sound rendering lags significantly behind advances in visual rendering.
Today's VR sound systems tend to use physical recordings or render
sounds with artificial reverberation filters or simple ray-tracing
algorithms. The resulting sounds are limited and cannot capture many
natural acoustic effects.
Realistic spatial sound is achieved by using head-related transfer
functions (HRTFs), which model the acoustic transformation that a sound
undergoes as it impinges on the human body and is perceived by the
auditory system.
When used to synthesize 3D auditory sources, HRTFs are typically
realized as the cascade of a minimum-phase finite impulse response (FIR)
filter and an all-pass filter that accounts for the lag in the
wavefront arrival time between the two ears. Due to the variations in
each person's anthropometric characteristics, the HRTF varies for each
individual. Creating individualized HRTFs for each virtual environment
user is a costly procedure that is not scalable for mass production.
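The minimum-phase FIR plus all-pass cascade described above can be sketched in code. The following is an illustrative sketch only, not the speaker's implementation: the HRIR coefficients and the interaural time difference (ITD) are placeholder values, and the all-pass stage is approximated as a simple integer-sample delay of the far ear.

```python
import numpy as np

def render_spatial(mono, hrir_left, hrir_right, itd_samples):
    """Render a mono signal binaurally: convolve with each ear's
    minimum-phase HRIR (the FIR stage), then delay the contralateral
    ear by the ITD (the all-pass stage, here an integer delay)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    # Delay the far-ear wavefront by itd_samples.
    right = np.concatenate([np.zeros(itd_samples), right])
    # Zero-pad both channels to equal length and stack as stereo.
    n = max(len(left), len(right))
    left = np.pad(left, (0, n - len(left)))
    right = np.pad(right, (0, n - len(right)))
    return np.stack([left, right], axis=1)

# Placeholder HRIRs and ITD; real HRTFs are measured (or customized)
# per individual, as discussed in the talk.
mono = np.random.randn(1000)
out = render_spatial(mono,
                     hrir_left=np.array([1.0, 0.5]),
                     hrir_right=np.array([0.8, 0.4]),
                     itd_samples=28)
```

In a real system the HRIRs would be selected for the desired source direction from a measured, individualized HRTF set rather than hard-coded.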
In this talk, I discuss the creation of individualized HRTFs and recent
advances in producing them at low cost. Next, the talk will
cover current challenges and our lab’s progress in the areas of spatial
sound perception, evaluation, and rendering. The talk will conclude with
a discussion of applications and future directions.
BIO:
Dr. Kyla McMullen earned her Bachelor of Science in Computer Science from
the University of Maryland, Baltimore County (UMBC). She earned her
master's and Ph.D. degrees in Computer Science and Engineering from the
University of Michigan (2007-2012). While earning her Ph.D. she was also
a faculty member at Wayne State University in Detroit, Michigan. At
Wayne State University she taught computer literacy courses to over
2,000 students. Professor McMullen is the first minority woman to earn a
Ph.D. in Computer Science and Engineering from the University of
Michigan. She is currently a tenure-track faculty member in the
Department of Computer & Information Science & Engineering at the
University of Florida. Dr. McMullen
has a personal commitment to encouraging women and minorities to pursue
careers in computing and other STEM fields. She currently pursues topics
such as customizing HRTFs for individualized listening experiences,
training listeners to locate sounds in virtual auditory environments,
assessing the memory capacity for localized sounds, and investigating
the real-world system integration of 3D sound.