# Moving Sources & Receivers (Beta)
Regardless of whether the simulation involves a moving source or a moving receiver, the workflow for handling results and performing postprocessing remains identical.
## Impulse responses
The resulting impulse responses along the trajectory are provided through the `MovingIR` class, which can be obtained from the `Results` class.

The `MovingIR` object allows the results to be visualized in multiple ways, including record-section plots, individual impulse responses, and frequency responses. These visualizations can be displayed together with the corresponding spatial position along the trajectory in the model, enabling a clear interpretation of how the acoustic response evolves over time and space.
```python
sim = project.get_simulation_by_name('my_simulation')
res = sim.get_results_object()
moving_irs = res.get_moving_ir(source=sim.sources[0], receiver=sim.receivers[0])
```
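Conceptually, a record section is just the impulse responses along the trajectory stacked by position into a matrix (rows indexed by trajectory position, columns by time sample). The sketch below is library-independent and uses synthetic data; `build_record_section` is a hypothetical helper for illustration, not part of the Treble SDK:

```python
# Conceptual sketch (not the Treble API): stack impulse responses
# sampled at successive trajectory positions into a 2D record-section
# matrix. All responses here are synthetic placeholders.

def build_record_section(irs):
    """Pad every impulse response to a common length and stack them."""
    n = max(len(ir) for ir in irs)
    return [list(ir) + [0.0] * (n - len(ir)) for ir in irs]

# Three toy impulse responses of different lengths along a trajectory.
irs = [[1.0, 0.5], [0.8, 0.4, 0.2], [0.6]]
section = build_record_section(irs)
# Each row now has equal length, ready to render as an image or waterfall plot.
```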
## Post-processing: Creating a Moving Auralization
From the `MovingIR` object, the `convolve_with_audio_signal()` function can be used to generate a multi-channel rendering of the moving source or receiver convolved with a dry input signal. This function allows the user to specify parameters such as the rendering device, receiver orientation, and movement speed along the trajectory.

For moving sources, the orientation of the static receiver must be specified during rendering.

For moving receivers, the orientation along the trajectory must be defined instead. The orientation can be obtained from the associated `Trajectory`, if it was defined prior to simulation. Additionally, new orientation configurations can be defined at the rendering stage if needed.
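For intuition, a "look-ahead" orientation simply points the receiver toward the next point on its trajectory. The following is a minimal, library-independent sketch of that idea using synthetic 2D waypoints; `look_ahead_azimuths` is a hypothetical helper, not part of the Treble SDK:

```python
# Conceptual sketch (not the Treble API): derive a forward-facing
# azimuth at each waypoint by looking toward the next waypoint.
# Waypoints are synthetic (x, y) positions; azimuths are in degrees.
import math

def look_ahead_azimuths(points):
    """Azimuth toward the next waypoint, repeating the last heading."""
    az = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        az.append(math.degrees(math.atan2(y1 - y0, x1 - x0)))
    if az:
        az.append(az[-1])  # keep the final heading at the last point
    return az

waypoints = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
print(look_ahead_azimuths(waypoints))  # [0.0, 90.0, 90.0]
```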
An example for a moving source is given below.
```python
import soundfile as sf

# Load the rendering device
hrtf_device = tsdk.device_library.get_device_by_name('my_hrtf')

# Load a mono dry signal
mono_wav, fs = sf.read('./conversation.wav')
dry_signal = treble.AudioSignal(mono_wav, fs)

# Convolve, defining the orientation of the static receiver
audio_convolved = moving_irs.convolve_with_audio_signal(audio=dry_signal,
                                                        device=hrtf_device,
                                                        speed=0.5,
                                                        receiver_orientation=treble.Rotation(-90, 0, 0))
```
An example for a moving receiver is given below.
```python
# Define the receiver orientation from the simulation trajectory
rec_traj = moving_irs.get_trajectory()         # Get trajectory from simulation
orientation_lookahead = rec_traj.look_ahead()  # Creates a new orientation looking forward
audio_convolved_lookahead = moving_irs.convolve_with_audio_signal(audio=dry_signal,
                                                                  device=hrtf_device,
                                                                  speed=0.5,
                                                                  receiver_orientation=orientation_lookahead)
```
Note that the orientation can also be defined independently of the `Trajectory` created during the simulation definition:
```python
orientation_custom = treble.TrajectoryOrientation({0.0: treble.Rotation(azimuth=225, elevation=0, roll=0),
                                                   0.5: treble.Rotation(azimuth=225 - 90, elevation=0, roll=0)})
audio_convolved_custom = moving_irs.convolve_with_audio_signal(audio=dry_signal,
                                                               device=hrtf_device,
                                                               speed=0.5,
                                                               receiver_orientation=orientation_custom)
```
The resulting output is an instance of the `ConvolvedAudioSignal` class: a time-domain audio signal that reflects the dynamic acoustic behavior along the motion path. Playback and plots of the waveform, frequency content, and spectrogram are available for the convolved result.
```python
# Result visualization and playback
audio_convolved.playback()
audio_convolved.plot()
```
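For intuition, the core operation behind `convolve_with_audio_signal()` is discrete convolution of the dry signal with an impulse response, applied per output channel. A minimal pure-Python sketch (toy data, not the Treble implementation):

```python
# Conceptual sketch (not the Treble API): direct-form discrete
# convolution of a dry signal with a single impulse response.

def convolve(signal, ir):
    """Discrete convolution; len(out) == len(signal) + len(ir) - 1."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

dry = [1.0, 0.0, 0.5]     # toy dry signal
ir = [1.0, 0.25]          # toy impulse response
print(convolve(dry, ir))  # [1.0, 0.25, 0.5, 0.125]
```

In practice this is done with FFT-based convolution for every impulse response along the trajectory, with crossfading between positions, which is what the SDK handles internally.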