Audio Scene Generation
Note: Audio scene generation is currently in beta.
An audio scene is a formal representation of a listening scenario in which one or more source recordings are positioned within a modeled acoustic environment and rendered to a listener using a specified device configuration. This capability enables the efficient generation of realistic, controllable, and reproducible datasets without manually constructing each mixture. Common applications include training data generation for speech enhancement and source separation, audio-AI benchmarking, and evaluation of hearing-device and headset algorithms, among others.
Audio scene generation in the Treble SDK lets you define realistic audio mixtures from:
- room acoustics (impulse responses),
- recording material (speech, background noise, natural sounds, music, and other source recordings),
- and a listener configuration (device, orientation, device-noise specifications, and filters).
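The three ingredients above can be sketched as a plain-Python data model. This is an illustrative sketch only; the class and field names below are hypothetical and do not reflect the Treble SDK's actual API.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Hypothetical data model for illustration -- not the Treble SDK's classes.

@dataclass
class Track:
    source_recording: str   # speech, noise, music, or other source material
    impulse_response: str   # IR capturing the source-to-listener room acoustics

@dataclass
class ListenerConfig:
    device: str             # e.g. a hearing device or headset model (assumed name)
    orientation_deg: float  # listener head orientation
    device_noise_db: float  # device self-noise specification

@dataclass
class AudioSceneSketch:
    tracks: list[Track] = field(default_factory=list)
    listener: ListenerConfig | None = None

# One source recording paired with one IR, rendered to one listener setup.
scene = AudioSceneSketch(
    tracks=[Track("speech_01.wav", "room_a_pos_3.ir")],
    listener=ListenerConfig(device="headset_v2",
                            orientation_deg=90.0,
                            device_noise_db=-60.0),
)
print(len(scene.tracks))  # prints 1
```

A real scene would typically carry several tracks (a target talker plus background noise), each with its own IR from the same simulated room.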
You can create audio scenes through two complementary workflows:
- **Manual scene generation:** Explicitly define each `AudioScene` by assigning IRs, track content, and listener configurations.
- **Automated bulk generation:** Define reusable `SceneRules` and use `SceneGenerator` to produce a `SceneCollection` of randomized `AudioScene` instances for scalable dataset generation and benchmarking.
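The automated workflow can be approximated in plain Python: define sampling pools once, then draw randomized scenes from them with a fixed seed for reproducibility. The function and dictionary names here are stand-ins for `SceneRules`, `SceneGenerator`, and `SceneCollection`; the SDK's actual interfaces may differ.

```python
import random

# Hypothetical sketch of rule-driven bulk generation, standing in for
# the SDK's SceneRules / SceneGenerator / SceneCollection objects.

def make_rules(irs, recordings, devices):
    """Sampling pools a generator draws from (stand-in for SceneRules)."""
    return {"irs": irs, "recordings": recordings, "devices": devices}

def generate_scenes(rules, n, seed=0):
    """Produce n randomized scene descriptions (stand-in for a SceneCollection)."""
    rng = random.Random(seed)  # fixed seed keeps the generated dataset reproducible
    return [
        {
            "id": i,
            "ir": rng.choice(rules["irs"]),
            "recording": rng.choice(rules["recordings"]),
            "device": rng.choice(rules["devices"]),
        }
        for i in range(n)
    ]

rules = make_rules(
    irs=["room_a.ir", "room_b.ir"],
    recordings=["speech_01.wav", "noise_03.wav"],
    devices=["headset_v2"],
)
collection = generate_scenes(rules, n=4, seed=42)
print(len(collection))  # prints 4
```

Seeding the generator is the key design choice: the same rules and seed always yield the same collection, so a benchmark dataset can be regenerated exactly rather than stored.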