Listening to What Cannot Be Seen: MIT’s Spatial Sound Lab Hosts an Open House


Surround sound audio, now a feature on many at-home devices, has become increasingly popular. Spatial sound gives listeners a "3D" experience akin to a movie theater or concert hall, and it's being created and studied right here in Cambridge, Mass.

On the evening of April 25, community members and MIT students gathered in the MIT.nano Immersion Lab to experience its latest creation: a spatial sound space. Students and professors presented projects ranging from electronic music beats to a sound-based version of Pac-Man.

The space uses external spatialization, sound that the brain localizes to real points in the room, to create an immersive listening experience, tying engineering and technology, cognitive science, and music into a single performance.

Audience members gathered among the many computers, various speakers, and dark walls of the Immersion Lab to both learn about and experience the uses, creation, and implementation of spatial sound.


The open house was led largely by MIT Professor Ian Condry, who founded the Spatial Sound Lab in 2019. Condry describes the lab as a community studio, open to projects and people of all kinds. For Condry, who teaches and studies anthropology, the Spatial Sound Lab is about more than just science.

“I’m very interested in sound and culture,” Condry said in an interview with The Crimson. “I’ve been studying the current edges of electronic music, and spatial sound is a new format, a new way of listening.”

“What’s so exciting for me is listening behind us, listening to things we cannot see. In a world that’s constantly flashing in my face and trying to grab my attention, spatial sound offers the opportunity to slow down and listen to the people we cannot see.”

In addition to explaining the ideas and software behind the spatial sound space, Condry shared some of his own musical creations with the audience. Listeners explored his beats by walking around the room, as sounds and rhythms rose and fell in prominence depending on where they stood.

Justin Looper, another affiliate of the lab, presented a demo of his spatial Pac-Man, in which users play the classic video game by listening rather than looking. Sounds alert players to the positions of their character and the ghosts, and players tilt a controller in response. Much to the audience's interest, Looper explained the mathematical calculations behind spatial Pac-Man and how he was able to attach sound to a virtual object.

For Dr. Kyle Keane, a spatial sound researcher, working in the lab has been a "rediscovering of passions." Dr. Keane teaches assistive technology in electrical engineering and computer science and works on data sonification, aiming to give blind people access to information they would not otherwise have.

“One of my favorite parts about this is that I discovered a long tradition of algorithmic music at MIT,” Keane said. He spoke about his interest in Csound, a multi-channel algorithmic music program developed at MIT that dates back to the ’80s.

“Because music production is not normally accessible to people who are blind — all of the tools that people use to mix music are not made compatible with screen-readers — there’s very few ways that blind electronic musicians can actually make music, especially custom music,” Keane said. “Being able to do things in command line and text files is the ultimate accessible experience.”

As spatial sound innovation continues to permeate the tech sphere and increase accessibility for those with visual impairments, the work of MIT’s Spatial Sound Lab is more important than ever.