Published November 4, 2020 | Version v1 | Other | Open
ESPnet2 pretrained model, kamo-naoyuki/wsj_asr_train_asr_transformer_raw_char_valid.acc.ave, fs=16k, lang=en
Authors/Creators: kamo-naoyuki
Description
This model was trained by kamo-naoyuki using the wsj recipe in espnet.
- Python API

  See https://github.com/espnet/espnet_model_zoo

- Evaluate in the recipe

  ```shell
  git clone https://github.com/espnet/espnet
  cd espnet
  git checkout 5053bf7fdf193456f884fe23b96f641ebbb23dc8
  pip install -e .
  cd egs2/wsj/asr1
  ./run.sh --skip_data_prep false --skip_train true --download_model kamo-naoyuki/wsj_asr_train_asr_transformer_raw_char_valid.acc.ave
  ```
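As a minimal sketch of the Python API, the model can be downloaded and used for decoding roughly as follows (the helper name `transcribe` is ours; `espnet`, `espnet_model_zoo`, and `soundfile` must be installed — see the espnet_model_zoo README for the authoritative usage):

```python
def transcribe(wav_path,
               model_name="kamo-naoyuki/wsj_asr_train_asr_transformer_raw_char_valid.acc.ave"):
    """Download the pretrained model (cached after the first call) and decode one WAV file."""
    # Imports are deferred so this sketch stays importable without espnet installed.
    import soundfile
    from espnet_model_zoo.downloader import ModelDownloader
    from espnet2.bin.asr_inference import Speech2Text

    d = ModelDownloader()
    # download_and_unpack returns the config/model path kwargs expected by Speech2Text.
    speech2text = Speech2Text(**d.download_and_unpack(model_name))

    speech, rate = soundfile.read(wav_path)  # this model expects fs=16k, mono
    nbests = speech2text(speech)
    text, tokens, token_ids, hyp = nbests[0]  # best hypothesis first
    return text
```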
# RESULTS

## Environments
- date: `Mon Nov 2 11:11:02 JST 2020`
- python version: `3.7.3 (default, Mar 27 2019, 22:11:17) [GCC 7.3.0]`
- espnet version: `espnet 0.9.0`
- pytorch version: `pytorch 1.5.1`
- Git hash: `d34c89ed9505fa7559e4caa1c518fc35b199a4d3`
- Commit date: `Tue Oct 20 09:23:48 2020 +0900`

## asr_train_asr_transformer_raw_char

### WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_lm_lm_train_lm_transformer_char_batch_bins350000_accum_grad2_valid.loss.ave_asr_model_valid.acc.ave/test_dev93|503|8234|94.2|5.1|0.7|0.8|6.6|53.3|
|inference_lm_lm_train_lm_transformer_char_batch_bins350000_accum_grad2_valid.loss.ave_asr_model_valid.acc.ave/test_eval92|333|5643|96.2|3.6|0.2|0.7|4.6|41.1|

### CER

|dataset|Snt|Chr|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_lm_lm_train_lm_transformer_char_batch_bins350000_accum_grad2_valid.loss.ave_asr_model_valid.acc.ave/test_dev93|503|48634|97.8|1.0|1.2|0.5|2.7|59.2|
|inference_lm_lm_train_lm_transformer_char_batch_bins350000_accum_grad2_valid.loss.ave_asr_model_valid.acc.ave/test_eval92|333|33341|98.6|0.6|0.7|0.4|1.8|49.8|

- ASR config
```yaml
config: conf/train_asr_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_raw_char
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
  - acc
  - max
keep_nbest_models: 10
grad_clip: 5.0
grad_noise: false
accum_grad: 8
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
pretrain_path: []
pretrain_key: []
num_iters_per_epoch: null
batch_size: 32
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw/train/speech_shape
- exp/asr_stats_raw/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw/valid/speech_shape
- exp/asr_stats_raw/valid/text_shape.char
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_si284/wav.scp
  - speech
  - sound
- - dump/raw/train_si284/text
  - text
  - text
valid_data_path_and_name_and_type:
- - dump/raw/test_dev93/wav.scp
  - speech
  - sound
- - dump/raw/test_dev93/text
  - text
  - text
allow_variable_data_keys: false
max_cache_size: 0.0
valid_max_cache_size: null
optim: adam
optim_conf:
  lr: 0.005
scheduler: warmuplr
scheduler_conf:
  warmup_steps: 30000
token_list:
- <blank>
- <unk>
- <space>
- E
- T
- A
- N
- I
- O
- S
- R
- H
- L
- D
- C
- U
- M
- P
- F
- G
- Y
- W
- B
- V
- K
- .
- X
- ''''
- J
- Q
- Z
- <NOISE>
- ','
- '-'
- '"'
- '*'
- ':'
- (
- )
- '?'
- '!'
- '&'
- ;
- '1'
- '2'
- '0'
- /
- $
- '{'
- '}'
- '8'
- '9'
- '6'
- '3'
- '5'
- '7'
- '4'
- '~'
- '`'
- _
- <*IN*>
- <*MR.*>
- \
- ^
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
  dropout_rate: 0.0
  ctc_type: builtin
  reduce: true
model_conf:
  ctc_weight: 0.3
  lsm_weight: 0.1
  length_normalized_loss: false
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: data/nlsyms.txt
cleaner: null
g2p: null
frontend: default
frontend_conf:
  fs: 16k
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
  stats_file: exp/asr_stats_raw/train/feats_stats.npz
encoder: transformer
encoder_conf:
  output_size: 256
  attention_heads: 4
  linear_units: 2048
  num_blocks: 12
  dropout_rate: 0.1
  positional_dropout_rate: 0.1
  attention_dropout_rate: 0.0
  input_layer: conv2d
  normalize_before: true
decoder: transformer
decoder_conf:
  attention_heads: 4
  linear_units: 2048
  num_blocks: 6
  dropout_rate: 0.1
  positional_dropout_rate: 0.1
  self_attention_dropout_rate: 0.0
  src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
distributed: false
```

- LM config
```yaml
config: conf/train_lm_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/lm_train_lm_transformer_char_batch_bins350000_accum_grad2/
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 58933
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 40
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
  - loss
  - min
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
pretrain_path: []
pretrain_key: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 350000
valid_batch_bins: null
train_shape_file:
- exp/lm_stats_char/train/text_shape.char
valid_shape_file:
- exp/lm_stats_char/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/srctexts
  - text
  - text
valid_data_path_and_name_and_type:
- - dump/raw/test_dev93/text
  - text
  - text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
  lr: 0.001
scheduler: warmuplr
scheduler_conf:
  warmup_steps: 25000
token_list:
- <blank>
- <unk>
- <space>
- E
- T
- A
- N
- I
- O
- S
- R
- H
- L
- D
- C
- U
- M
- P
- F
- G
- Y
- W
- B
- V
- K
- .
- X
- ''''
- J
- Q
- Z
- <NOISE>
- ','
- '-'
- '"'
- '*'
- ':'
- (
- )
- '?'
- '!'
- '&'
- ;
- '1'
- '2'
- '0'
- /
- $
- '{'
- '}'
- '8'
- '9'
- '6'
- '3'
- '5'
- '7'
- '4'
- '~'
- '`'
- _
- <*IN*>
- <*MR.*>
- \
- ^
- <sos/eos>
init: null
model_conf:
  ignore_id: 0
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: data/nlsyms.txt
cleaner: null
g2p: null
lm: transformer
lm_conf:
  pos_enc: null
  embed_unit: 128
  att_unit: 512
  head: 8
  unit: 2048
  layer: 16
  dropout_rate: 0.1
required:
- output_dir
- token_list
distributed: true
```
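Both configs use `scheduler: warmuplr`, a Noam-style inverse-square-root schedule: the learning rate ramps up linearly, peaks at `warmup_steps`, then decays as the inverse square root of the step. A sketch of the curve implied by the LM settings (`lr: 0.001`, `warmup_steps: 25000`), assuming ESPnet's standard WarmupLR formula:

```python
def warmup_lr(step, base_lr=0.001, warmup_steps=25000):
    """Noam-style warmup: linear ramp for warmup_steps, then inverse-sqrt decay."""
    return base_lr * warmup_steps**0.5 * min(step**-0.5, step * warmup_steps**-1.5)

# The rate peaks at exactly base_lr when step == warmup_steps,
# and decays as step**-0.5 afterwards.
```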
Files

| Name | Size | Checksum |
|---|---|---|
| asr_train_asr_transformer_raw_char_valid.acc.ave.zip | 288.4 MB | md5:eed91a0de6b5907ec10473d5e9c1d08c |
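A downloaded archive can be checked against the md5 above with a small stdlib helper (the function name `verify_md5` is ours):

```python
import hashlib

def verify_md5(path, expected="eed91a0de6b5907ec10473d5e9c1d08c"):
    """Return True if the file's MD5 digest matches the expected hex string."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest() == expected.lower()
```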
Additional details
Related works
- Is supplement to: https://github.com/espnet/espnet (URL)