Conditioned Wave-U-Net for Acoustic Matching of Speech in Shared XR Environments
Authors/Creators
Description
Mismatch in acoustics between users is an important challenge for interaction in shared XR environments. It can be mitigated through acoustic matching, which traditionally involves dereverberation followed by convolution with a room impulse response (RIR) of the target space.
However, the target RIR is usually unavailable in such settings. We propose to tackle this problem in an end-to-end manner using a Wave-U-Net encoder-decoder network with the potential for real-time operation. We use FiLM layers to condition this network on embeddings extracted by a separate reverb encoder, so that the acoustic properties of two arbitrarily chosen signals can be matched. We demonstrate that this approach outperforms two baseline methods and provides the flexibility to both dereverberate and re-reverberate audio signals.
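For readers unfamiliar with FiLM conditioning, the sketch below illustrates the general mechanism described in the abstract: per-channel scale and shift parameters are predicted from a conditioning embedding (here, a reverb embedding) and applied to a network's feature maps. This is a minimal PyTorch illustration; the layer sizes, embedding dimension, and module names are assumptions for demonstration and do not reproduce the paper's implementation.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise Linear Modulation: scales and shifts feature maps
    using parameters predicted from a conditioning embedding."""
    def __init__(self, embed_dim: int, num_channels: int):
        super().__init__()
        # Predict per-channel scale (gamma) and shift (beta) from the embedding.
        self.to_gamma_beta = nn.Linear(embed_dim, 2 * num_channels)

    def forward(self, features: torch.Tensor, embedding: torch.Tensor) -> torch.Tensor:
        # features: (batch, channels, time), embedding: (batch, embed_dim)
        gamma, beta = self.to_gamma_beta(embedding).chunk(2, dim=-1)
        return gamma.unsqueeze(-1) * features + beta.unsqueeze(-1)

# Illustrative usage: condition one encoder block's output on an embedding
# from a separate reverb encoder (both tensors are placeholders here).
reverb_embedding = torch.randn(1, 128)        # hypothetical reverb-encoder output
encoder_features = torch.randn(1, 64, 16000)  # hypothetical Wave-U-Net feature maps
film = FiLM(embed_dim=128, num_channels=64)
conditioned = film(encoder_features, reverb_embedding)
print(conditioned.shape)  # torch.Size([1, 64, 16000])
```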
Files
| Name | Size | Checksum |
|---|---|---|
| WASPAA2025_Conditioned_Wave_U_Net_for_Acoustic_Matching_in_Shared_XR_Environments.pdf | 2.2 MB | md5:123e55e21efbefc9847551f2f1a67264 |
Additional details
Funding
Dates
- Accepted: 2025-07-01