AudibleLight πŸ”ˆπŸ’‘

Spatial soundscape synthesis using ray-tracing

Warning

This project is currently under active development. We have done our best to ensure that it works as expected, but if you encounter any errors, please open an issue and let us know.

What is AudibleLight?

This project provides a platform for generating synthetic soundscapes by simulating arbitrary microphone configurations and dynamic sources in both parameterised and 3D-scanned rooms. Under the hood, AudibleLight uses Meta’s open-source acoustic ray-tracing engine to simulate spatial room impulse responses and convolve them with recorded events, emulating array recordings of moving sources. The resulting soundscapes are useful for training models on a variety of downstream tasks, including acoustic imaging, sound event localisation and detection, and direction-of-arrival estimation.
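To make the synthesis step concrete, the sketch below illustrates the underlying idea in plain NumPy/SciPy rather than through AudibleLight’s own API: a dry source recording is convolved with one impulse response per microphone capsule to produce a simulated array recording. The impulse responses here are random placeholders; in AudibleLight they would come from the ray-tracing simulation for a given source and microphone placement.

```python
import numpy as np
from scipy.signal import fftconvolve

sr = 24_000                       # sample rate in Hz (placeholder value)
n_capsules = 4                    # e.g. a tetrahedral microphone array
rir_len = sr // 2                 # 0.5 s impulse responses

# Placeholder multichannel RIR with an exponential decay envelope; in
# AudibleLight this would be produced by the ray-tracing engine for a given
# source/microphone placement inside the room mesh.
rng = np.random.default_rng(0)
decay = np.exp(-np.linspace(0.0, 8.0, rir_len))
rirs = rng.standard_normal((n_capsules, rir_len)) * decay

# Dry (anechoic) source event, e.g. a one-second recorded sound.
event = rng.standard_normal(sr)

# Convolve the event with each capsule's IR to emulate the array recording.
array_recording = np.stack([fftconvolve(event, ir) for ir in rirs])
print(array_recording.shape)      # (4, sr + rir_len - 1)
```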

In contrast to other projects (e.g., sonicsim, spatialscaper), AudibleLight provides a straightforward API without restricting the user to any specific dataset. You can bring your own mesh and your own audio files, and AudibleLight will handle the spatial logic, validation, and synthesis needed to ensure that the resulting soundscapes are suitable for training machine learning models.
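As one example of the spatial validation involved, microphone and source positions only make sense if they actually fall inside the room geometry. A minimal sketch of that check using the trimesh library (not AudibleLight’s own code, with a placeholder mesh path and illustrative coordinates) might look like this:

```python
import numpy as np
import trimesh

# Load a 3D-scanned or parameterised room mesh (the path is a placeholder;
# point-containment checks require the mesh to be watertight).
mesh = trimesh.load("room.glb", force="mesh")

# Candidate microphone and source positions in metres (illustrative values).
candidates = np.array([
    [1.0, 1.5, 1.2],   # a microphone capsule
    [3.0, 0.5, 1.0],   # the start of a moving source's trajectory
    [9.0, 9.0, 9.0],   # a point that may fall outside the room
])

# Keep only positions that lie inside the room geometry.
inside = mesh.contains(candidates)
for point, ok in zip(candidates, inside):
    print(point, "inside" if ok else "outside - rejected")
```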

AudibleLight is developed by researchers at the Centre for Digital Music, Queen Mary University of London in collaboration with Meta Reality Labs.
