Controlling sound has fascinated humanity for centuries, a fascination often explored in science fiction and fantasy. Classic examples include Frank Herbert’s Dune, where a cone of silence lets characters hold private conversations in public spaces, and Blade Runner 2049, with its eerie whispering advertisements. The interplay between sound and design is not merely fictional: architectural features can dramatically shape how sound behaves. In the U.S. Capitol’s National Statuary Hall, for instance, the curved ceiling focuses sound so that a whisper at one spot can be heard clearly far across the room, showing how deliberate design can manipulate auditory experience. Scientists are now pursuing new ways to direct sound, aiming for a future where private audio needs no cumbersome earbuds. Directing sound, however, comes with challenges of its own.
Human hearing spans frequencies from about 20 to 20,000 hertz. Sound waves in this range have relatively long wavelengths, so they spread out far more readily than higher-frequency ultrasound, which is part of why conversations are so easily overheard. In 2019, researchers tried harnessing lasers, exploiting the fact that light absorbed by water vapor in the air is converted into sound. Localizing that sound proved difficult at first, since audible noise leaked along the entire beam, but adjustments such as employing rotating mirrors improved the results somewhat. Transmitting detailed audio, however, remained out of reach.
Other researchers have turned to ultrasonic waves. Although inaudible on their own, these waves can exploit a phenomenon known as nonlinear interaction: when two ultrasonic waves of slightly different frequencies overlap, the air’s nonlinear response generates new tones at the sum and the difference of the original frequencies, and the difference tone can fall within the range of human hearing. A familiar analogy is the sizzle of water hitting hot oil in a frying pan, the result of tiny steam explosions generating ultrasonic waves that intermingle in the air.
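The difference-tone arithmetic is easy to sketch numerically. In the toy Python example below, a simple quadratic nonlinearity stands in for the far more complicated nonlinear acoustics of real air, and the two ultrasonic frequencies are illustrative choices, not values from any actual device:

```python
import numpy as np

# Illustrative parameters: two ultrasonic tones whose difference lies in the audible range.
fs = 400_000                   # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)  # 0.1 s of signal
f1, f2 = 40_000.0, 39_500.0    # both inaudible; difference = 500 Hz, audible

# Linear superposition of the two ultrasonic waves
p = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# A quadratic nonlinearity (a crude stand-in for nonlinear air acoustics)
# creates sum (79.5 kHz) and difference (500 Hz) frequency components.
q = p ** 2

spectrum = np.abs(np.fft.rfft(q))
freqs = np.fft.rfftfreq(len(q), 1 / fs)

# The strongest audible component (skipping the DC bin) is the difference tone.
audible = freqs < 20_000
peak = freqs[audible][np.argmax(spectrum[audible][1:]) + 1]
print(round(peak))  # → 500
```

Both input tones sit far above the audible range, yet the nonlinearity yields a clean 500 Hz component, exactly the difference of the two frequencies.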
Moving into practical applications, the U.S. military used these acoustic effects to build directional speakers that send sound along a narrow path, and companies such as Holosonics later commercialized the technology. As with the laser approach, though, the audio remained partially audible along its trajectory, so it never offered true sound privacy. A recent advance introduces the concept of “audible enclaves,” which, as Penn State acoustics researcher Yun Jing puts it, resemble wearing an invisible headset: someone standing in the right spot can hear music or conversation while people nearby remain unaware of any sound.
The new approach relies on acoustic metasurfaces: engineered materials whose intricate internal structures manipulate sound in ways natural materials cannot. Acting like lenses, these metasurfaces bend and steer sound waves along chosen paths. Using 3D-printed acoustic panels with zigzag air channels, Jing’s team steered ultrasonic waves along controlled trajectories and combined them at specific points, producing audible sound only in those targeted areas. The initial sound quality was poor because of the rudimentary transducers used, but the results showcase the potential of acoustic metasurfaces to create localized listening experiences.
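The geometry of an enclave can be sketched the same way. In the toy 2-D model below, each beam is drawn as a Gaussian-profile line, the audible difference tone is assumed to scale with the product of the two beam amplitudes, and the crossing point and beam widths are arbitrary illustrative choices, not details of Jing’s actual panel design:

```python
import numpy as np

# Toy 2-D model: audible sound appears only where two ultrasonic beams overlap,
# because the nonlinear difference tone scales with the product of the amplitudes.
x = np.linspace(-1, 1, 201)
y = np.linspace(0, 2, 201)
X, Y = np.meshgrid(x, y)

def beam(cx, slope, width=0.05):
    """Gaussian-profile beam launched from (cx, 0), traveling along a slanted line."""
    return np.exp(-((X - (cx + slope * Y)) ** 2) / (2 * width ** 2))

left = beam(-0.5, +0.5)   # beam angled toward the right
right = beam(+0.5, -0.5)  # beam angled toward the left; they cross at (0, 1)

audible = left * right    # nonlinear mixing ~ product of beam amplitudes

iy, ix = np.unravel_index(np.argmax(audible), audible.shape)
print(x[ix], y[iy])  # peak sits at the beams' crossing point, near (0, 1)
```

Because the product is negligible wherever only one beam is present, audible sound shows up only at the intersection, which is the essence of an audible enclave.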
Such technologies cannot yet conjure the cone of silence depicted in Dune, but researchers envision a future in which private conversations flourish in public spaces without earbuds or other devices. The implications for libraries and offices are substantial: multiple audible enclaves could coexist in a single room, giving many users discreet audio experiences simultaneously. As the work progresses, the dream of seamless sound control may one day fulfill the promises of science fiction.