Technological advances in eXtended Reality (XR) hint at an unprecedented explosion of applications in which immersive content takes centre stage. Such an explosion requires that vision and hearing, the two sensory modalities most relevant to spatial perception, be appropriately catered for. Unlike its predecessor, 3D audio, 6DoF immersive audio enables a truly lifelike auditory experience: listeners not only hear sounds from all directions but also perceive their distance and location in space and, crucially, can navigate the sound scene without spatial constraints.
Building upon the success of its first edition at i3DA 2021, this special session will bring together researchers from academia and industry to present, discuss, and explore the latest developments in 6DoF immersive audio, including advances in the capture, rendering, coding, compression, and perception of 6DoF audio content. Topics of interest for this special session include, but are not limited to:
– 6DoF sound capture and rendering techniques
– Spatial audio processing algorithms
– Coding and compression of 6DoF audio content
– Audio-visual integration in extended reality with specific emphasis on 6DoF applications
– Perception and subjective assessment of 6DoF audio
– Applications of 6DoF immersive audio in gaming, film, and other industries
Huseyin Hacihabiboglu is a Professor of Signal Processing at METU in Ankara, Turkey. He received the B.Sc. degree from METU in 2000 and the M.Sc. degree from the University of Bristol in 2001, both in electrical and electronic engineering, and the Ph.D. degree in computer science from Queen’s University Belfast in 2004. He held research positions at the University of Surrey and King’s College London. His research interests include immersive audio, room acoustics, psychoacoustics of spatial hearing, microphone arrays, and game audio. He holds several patents on spatial audio and microphone arrays and is a co-founder of sonixpace, an audio technology start-up based in Ankara. He is a member of the IEEE SPS, the UKRI Peer Review College, AES, the Turkish Acoustics Society, EAA, and ASA. He is the official representative of METU in Moving Picture Audio and Data Coding by Artificial Intelligence (MPAI), where he contributed to the development of the MPAI-CAE (Context-based Audio Enhancement) standard. From 2017 to 2021, he was an Associate Editor of the IEEE/ACM Transactions on Audio, Speech, and Language Processing. He is currently an Associate Editor of IEEE Signal Processing Letters.
Zoran Cvetkovic received the Dipl. Ing. and Mag. degrees from the University of Belgrade, Yugoslavia, the M.Phil. degree from Columbia University, and the Ph.D. degree in electrical engineering from the University of California, Berkeley. He is currently a Professor of Signal Processing at King’s College London. He held research positions with EPFL, Lausanne, Switzerland, in 1996, and with Harvard University, Cambridge, MA, USA, during 2002–2004. Between 1997 and 2002, he was a Member of Technical Staff at AT&T Shannon Laboratory. His research interests span signal processing, from theoretical aspects of signal analysis to applications in audio and speech technology and neuroscience. He was an Associate Editor of the IEEE Transactions on Signal Processing.