Published June 9, 2023 | Version 1
Resource type: Other | Access: Open

Adapting a ConvNeXt model to audio classification on AudioSet (pretrained models)

Description

This deposit contains the model checkpoints of our paper:

Pellegrini, T., Khalfaoui-Hassani, I., Labbé, E., & Masquelier, T. (2023). Adapting a ConvNeXt model to audio classification on AudioSet. arXiv preprint arXiv:2306.00830

Please check our code: https://github.com/topel/audioset-convnext-inf

Two checkpoints are provided, both ConvNeXt-Tiny architectures adapted to AudioSet tagging:

  1. convnext_tiny_471mAP.pth
    --> trained on the AudioSet unbalanced and balanced subsets (training set size: 1,921,982 files)
    --> mAP = 0.471 on the test subset
  2. convnext_tiny_465mAP_BL_AC_70kit.pth
    --> the same model, but with the files belonging to the AudioCaps dataset removed from the AudioSet training set. AudioCaps is an audio captioning dataset comprising 57,188 files drawn from AudioSet. Because it avoids an audio encoder biased toward AudioCaps, this checkpoint may be useful for audio-text retrieval and audio captioning experiments on AudioCaps. BL_AC stands for "blacklist of AudioCaps files".
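The checkpoints are ordinary PyTorch files. A minimal loading sketch follows; the `load_state_dict` helper and the assumption that the file is either a plain state dict or a dict wrapping one under a "model" key are ours, not the paper's. The repository linked above provides the actual model class and the authoritative loading code.

```python
import torch

# Hypothetical local path: download the file from this deposit first.
CKPT_PATH = "convnext_tiny_471mAP.pth"

def load_state_dict(path):
    """Load a checkpoint on CPU and return its parameter state dict.

    Assumes the file is either a plain state dict or a dict that wraps
    it under a "model" key; the repository's own code is authoritative.
    """
    ckpt = torch.load(path, map_location="cpu")
    if isinstance(ckpt, dict) and "model" in ckpt:
        return ckpt["model"]
    return ckpt
```

Loading on CPU via `map_location="cpu"` keeps the sketch usable on machines without a GPU; move the weights to a device afterwards as needed.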


Files (755.5 MB)

  • md5:0688ae503f5893be0b6b71cb92f8b428 (377.8 MB)
  • md5:e069ecd1c7b880268331119521c549f2 (377.8 MB)
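After downloading, the files can be checked against the MD5 digests listed above. A small self-contained helper using Python's standard hashlib (the function name and chunk size are our choices):

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading in 1 MB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the md5 listed for that file above, e.g.:
# md5_of_file("convnext_tiny_471mAP.pth")
```

Chunked reading keeps memory use constant even for these ~378 MB files.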

Additional details

References

  • Kim, C. D., Kim, B., Lee, H., & Kim, G. (2019). AudioCaps: Generating captions for audios in the wild. In Proceedings of NAACL-HLT 2019.