Published November 9, 2021
Version v1
ESPnet2 pretrained model, Siddhant/fsc_challenge_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best, fs=16k, lang=en
Authors/Creators: Siddhant
Description
This model was trained by Siddhant using the fsc_challenge recipe in ESPnet.
### Python API

See https://github.com/espnet/espnet_model_zoo

### Evaluate in the recipe

```bash
git clone https://github.com/espnet/espnet
cd espnet
git checkout 97b9dad4dbca71702cb7928a126ec45d96414a3f
pip install -e .
cd egs2/fsc_challenge/asr1
./run.sh --skip_data_prep false --skip_train true --download_model Siddhant/fsc_challenge_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best
```
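The Python API linked above can be used roughly as follows. This is a sketch assuming `espnet_model_zoo`, `espnet`, and `soundfile` are installed; the helper names `load_speech2text` and `transcribe` are illustrative, not part of the package:

```python
# Sketch of inference with espnet_model_zoo (helper names are illustrative).
MODEL_TAG = (
    "Siddhant/fsc_challenge_asr_train_asr_hubert_transformer_adam_specaug"
    "_raw_en_word_valid.acc.ave_5best"
)


def load_speech2text(model_tag=MODEL_TAG):
    """Download the pretrained model (~1.4 GB) and build an inference wrapper."""
    from espnet_model_zoo.downloader import ModelDownloader
    from espnet2.bin.asr_inference import Speech2Text

    d = ModelDownloader()
    # download_and_unpack returns the config/model-path kwargs Speech2Text expects
    return Speech2Text(**d.download_and_unpack(model_tag))


def transcribe(speech2text, wav_path):
    """Recognize a 16 kHz mono wav. For this model the decoded word sequence
    contains the intent label (e.g. increase_heat_washroom) alongside the
    transcript, since the recipe predicts both jointly."""
    import soundfile

    speech, rate = soundfile.read(wav_path)
    assert rate == 16000, "the model expects fs=16k"
    text, tokens, token_ids, hyp = speech2text(speech)[0]  # best hypothesis
    return text


# Usage (triggers the model download):
#   s2t = load_speech2text()
#   print(transcribe(s2t, "sample.wav"))
```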
# RESULTS

## Environments

- date: `Sun Oct 3 22:25:25 EDT 2021`
- python version: `3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0]`
- espnet version: `espnet 0.10.3a3`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `a1a55e1eef2a74d2b8580d8071ce5229e7fa654c`
- Commit date: `Mon Nov 8 23:56:06 2021 -0500`

## Transformer encoder-decoder with a fine-tuned HuBERT pre-encoder, SpecAugment, and joint prediction of transcript and intent

- ASR config: [conf/tuning/train_asr.yaml](conf/tuning/train_asr_hubert_transformer_adam_specaug_finetune.yaml)
- token_type: word
- keep_nbest_models: 5

|dataset|Snt|Intent Classification (%)|
|---|---|---|
|inference_asr_model_valid.acc.ave_5best/spk_test|3366|97.5|
|inference_asr_model_valid.acc.ave_5best/utt_test|3970|78.5|
|inference_asr_model_valid.acc.ave_5best/valid|2624|98.4|

### ASR results

#### WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_5best/spk_test|3349|14588|98.7|0.9|0.4|0.6|1.9|4.7|
|inference_asr_model_valid.acc.ave_5best/utt_test|4201|18330|87.1|10.6|2.3|3.8|16.7|44.6|
|inference_asr_model_valid.acc.ave_5best/valid|2597|1185|98.9|0.6|0.5|0.3|1.3|2.9|

## ASR config
```yaml
config: conf/tuning/train_asr_hubert_transformer_adam_specaug.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_hubert_transformer_adam_specaug_raw_en_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 80
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
  - loss
  - min
- - valid
  - loss
  - min
- - train
  - acc
  - max
- - valid
  - acc
  - max
keep_nbest_models: 5
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
  - speech
  - sound
- - dump/raw/train/text
  - text
  - text
valid_data_path_and_name_and_type:
- - dump/raw/valid/wav.scp
  - speech
  - sound
- - dump/raw/valid/text
  - text
  - text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
  lr: 0.0002
scheduler: warmuplr
scheduler_conf:
  warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ▁the
- ▁
- e
- ▁turn
- ▁in
- s
- ▁lights
- o
- ▁m
- c
- i
- ▁heat
- a
- t
- hroom
- ▁up
- ▁s
- ▁on
- ▁down
- n
- ▁temperature
- crease
- p
- ▁t
- u
- ▁b
- ▁switch
- w
- h
- d
- ou
- ▁kitchen
- ▁volume
- ▁off
- ing
- y
- increase_volume_none
- ▁bedroom
- ▁langu
- age
- as
- decrease_volume_none
- ▁l
- r
- er
- at
- ▁d
- l
- decrease_heat_washroom
- increase_heat_washroom
- k
- an
- g
- increase_heat_none
- oo
- decrease_heat_none
- ge
- change_language_none_none
- activate_lights_washroom
- activate_lights_kitchen
- ow
- in
- activate_music_none
- mp
- deactivate_music_none
- increase_heat_bedroom
- increase_heat_kitchen
- decrease_heat_kitchen
- it
- activate_lights_bedroom
- deactivate_lights_bedroom
- f
- re
- decrease_heat_bedroom
- ed
- deactivate_lights_kitchen
- bring_newspaper_none
- bring_shoes_none
- bring_socks_none
- activate_lights_none
- deactivate_lights_none
- q
- deactivate_lights_washroom
- change_language_Chinese_none
- bring_juice_none
- j
- m
- deactivate_lamp_none
- activate_lamp_none
- change_language_Korean_none
- ▁k
- me
- change_language_German_none
- ▁o
- change_language_English_none
- ▁he
- ase
- ff
- ume
- ▁v
- x
- ▁u
- v
- <sos/eos>
init: null
input_size: null
ctc_conf:
  dropout_rate: 0.0
  ctc_type: builtin
  reduce: true
  ignore_nan_grad: true
model_conf:
  ctc_weight: 0.5
  ignore_id: -1
  lsm_weight: 0.0
  length_normalized_loss: false
  report_cer: true
  report_wer: true
  sym_space: <space>
  sym_blank: <blank>
  extract_feats_in_collect_stats: true
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
  frontend_conf:
    upstream: hubert_large_ll60k
  download_dir: ./hub
  multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
  apply_time_warp: true
  time_warp_window: 5
  time_warp_mode: bicubic
  apply_freq_mask: true
  freq_mask_width_range:
  - 0
  - 30
  num_freq_mask: 2
  apply_time_mask: true
  time_mask_width_range:
  - 0
  - 40
  num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
  input_size: 1024
  output_size: 80
encoder: transformer
encoder_conf:
  output_size: 256
  attention_heads: 4
  linear_units: 2048
  num_blocks: 12
  dropout_rate: 0.1
  positional_dropout_rate: 0.1
  attention_dropout_rate: 0.0
  input_layer: conv2d
  normalize_before: true
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
  attention_heads: 4
  linear_units: 2048
  num_blocks: 6
  dropout_rate: 0.1
  positional_dropout_rate: 0.1
  self_attention_dropout_rate: 0.0
  src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: 0.10.3a3
distributed: false
```

## LM config
NONE
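The Corr/Sub/Del/Ins columns in the WER table above come from a word-level Levenshtein alignment between reference and hypothesis. A minimal pure-Python sketch of that computation (not ESPnet's actual sclite-style scoring code):

```python
def wer_counts(ref: str, hyp: str) -> dict:
    """Align two word sequences by edit distance and count
    substitutions, deletions, and insertions."""
    r, h = ref.split(), hyp.split()
    # d[i][j] = (total_cost, subs, dels, ins) for r[:i] vs h[:j]
    d = [[None] * (len(h) + 1) for _ in range(len(r) + 1)]
    d[0][0] = (0, 0, 0, 0)
    for i in range(1, len(r) + 1):
        d[i][0] = (i, 0, i, 0)  # all reference words deleted
    for j in range(1, len(h) + 1):
        d[0][j] = (j, 0, 0, j)  # all hypothesis words inserted
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            c, s, dl, ins = d[i - 1][j - 1]
            if r[i - 1] == h[j - 1]:
                best = (c, s, dl, ins)              # match
            else:
                best = (c + 1, s + 1, dl, ins)      # substitution
            c, s, dl, ins = d[i - 1][j]
            if c + 1 < best[0]:
                best = (c + 1, s, dl + 1, ins)      # deletion
            c, s, dl, ins = d[i][j - 1]
            if c + 1 < best[0]:
                best = (c + 1, s, dl, ins + 1)      # insertion
            d[i][j] = best
    cost, subs, dels, ins = d[len(r)][len(h)]
    return {"sub": subs, "del": dels, "ins": ins,
            "err": (subs + dels + ins) / max(len(r), 1)}
```

For example, `wer_counts("turn on the lights", "turn on lights")` yields one deletion and a 25% error rate over the four reference words.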
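The optimizer section of the config pairs Adam (`lr: 0.0002`) with the `warmuplr` scheduler (`warmup_steps: 25000`). A sketch of the resulting schedule, assuming the Noam-style rule used by ESPnet2's `WarmupLR` (linear warmup to the base rate, then inverse-square-root decay):

```python
def warmup_lr(step: int, base_lr: float = 2.0e-4, warmup_steps: int = 25000) -> float:
    """Learning rate at a given optimizer step under a WarmupLR-style rule:
    rises linearly to base_lr at warmup_steps, then decays as 1/sqrt(step)."""
    step = max(step, 1)
    return base_lr * warmup_steps**0.5 * min(step**-0.5, step * warmup_steps**-1.5)
```

The peak rate `base_lr` is reached exactly at `warmup_steps`; halfway through warmup the rate is half of `base_lr`.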
Files (1.4 GB)

| Name | Size |
|---|---|
| asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best.zip (md5:ca3a21873ef56a5471ea61bf03384093) | 1.4 GB |
Additional details
Related works
- Is supplement to: https://github.com/espnet/espnet