2024-05-26 11:20:47,975 INFO [ctc_decode.py:717] Decoding started
2024-05-26 11:20:47,975 INFO [ctc_decode.py:723] Device: cuda:0
2024-05-26 11:20:47,975 INFO [ctc_decode.py:724] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'ignore_id': -1, 'label_smoothing': 0.1, 'warm_step': 2000, 'env_info': {'k2-version': '1.24.4', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '44a9d5682af9fd3ef77074777e15278ec6d390eb', 'k2-git-date': 'Wed Sep 27 11:22:55 2023', 'lhotse-version': '1.17.0.dev+git.ccfc5b2c.dirty', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'zipformer-ctc-aed', 'icefall-git-sha1': '84dfb576-dirty', 'icefall-git-date': 'Sat May 25 17:49:14 2024', 'icefall-path': '/star-zw/workspace/zipformer/icefall_ctc_aed', 'k2-path': '/star-zw/workspace/k2/k2/k2/python/k2/__init__.py', 'lhotse-path': '/star-zw/workspace/lhotse/lhotse/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-10-0312151423-668f59dc99-b8962', 'IP address': '10.177.6.147'}, 'frame_shift_ms': 10, 'search_beam': 20, 'output_beam': 8, 'min_active_states': 30, 'max_active_states': 10000, 'use_double_scores': True, 'epoch': 50, 'iter': 0, 'avg': 29, 'use_averaged_model': True, 'exp_dir': PosixPath('zipformer/exp-ctc-0.1-aed-0.9-penalize-attn-large'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'context_size': 2, 'decoding_method': 'ctc-decoding', 'num_paths': 100, 'nbest_scale': 1.0, 'hlg_scale': 0.6, 'lm_dir': PosixPath('data/lm'), 'num_encoder_layers': '2,2,4,5,4,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,1536,2048,1536,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,512,768,512,256', 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,256,320,256,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'attention_decoder_dim': 512, 'attention_decoder_num_layers': 6, 'attention_decoder_attention_dim': 512, 'attention_decoder_num_heads': 8, 'attention_decoder_feedforward_dim': 2048, 'causal': False, 'chunk_size': '16,32,64,-1', 'left_context_frames': '64,128,256,-1', 'use_transducer': False, 'use_ctc': True, 'use_attention_decoder': True, 'full_libri': True, 'mini_libri': False, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 200, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('zipformer/exp-ctc-0.1-aed-0.9-penalize-attn-large/ctc-decoding'), 'suffix': 'epoch-50-avg-29-use-averaged-model'}
2024-05-26 11:20:48,266 INFO [lexicon.py:168] Loading pre-compiled data/lang_bpe_500/Linv.pt
2024-05-26 11:20:53,300 INFO [ctc_decode.py:807] About to create model
2024-05-26 11:20:54,579 INFO [ctc_decode.py:874] Calculating the averaged model over epoch range from 21 (excluded) to 50
2024-05-26 11:21:03,984 INFO [ctc_decode.py:891] Number of model parameters: 174319650
2024-05-26 11:21:03,985 INFO [asr_datamodule.py:467] About to get test-clean cuts
2024-05-26 11:21:04,107 INFO [asr_datamodule.py:474] About to get test-other cuts
2024-05-26 11:21:05,248 INFO [ctc_decode.py:623] batch 0/?, cuts processed until now is 14
2024-05-26 11:21:40,659 INFO [ctc_decode.py:623] batch 100/?, cuts processed until now is 2298
2024-05-26 11:21:43,885 INFO [zipformer.py:1858] name=None, attn_weights_entropy = tensor([2.6771, 3.2606, 3.3798, 3.6473], device='cuda:0')
2024-05-26 11:21:45,487 INFO [ctc_decode.py:646] The transcripts are stored in zipformer/exp-ctc-0.1-aed-0.9-penalize-attn-large/ctc-decoding/recogs-test-clean-epoch-50-avg-29-use-averaged-model.txt
2024-05-26 11:21:45,583 INFO [utils.py:657] [test-clean-ctc-decoding] %WER 2.29% [1206 / 52576, 119 ins, 98 del, 989 sub ]
2024-05-26 11:21:45,788 INFO [ctc_decode.py:655] Wrote detailed error stats to zipformer/exp-ctc-0.1-aed-0.9-penalize-attn-large/ctc-decoding/errs-test-clean-epoch-50-avg-29-use-averaged-model.txt
2024-05-26 11:21:45,791 INFO [ctc_decode.py:669] For test-clean, WER of different settings are:
ctc-decoding	2.29	best for test-clean
2024-05-26 11:21:46,547 INFO [ctc_decode.py:623] batch 0/?, cuts processed until now is 17
2024-05-26 11:22:22,471 INFO [ctc_decode.py:623] batch 100/?, cuts processed until now is 2530
2024-05-26 11:22:27,123 INFO [ctc_decode.py:646] The transcripts are stored in zipformer/exp-ctc-0.1-aed-0.9-penalize-attn-large/ctc-decoding/recogs-test-other-epoch-50-avg-29-use-averaged-model.txt
2024-05-26 11:22:27,217 INFO [utils.py:657] [test-other-ctc-decoding] %WER 5.14% [2688 / 52343, 277 ins, 209 del, 2202 sub ]
2024-05-26 11:22:27,413 INFO [ctc_decode.py:655] Wrote detailed error stats to zipformer/exp-ctc-0.1-aed-0.9-penalize-attn-large/ctc-decoding/errs-test-other-epoch-50-avg-29-use-averaged-model.txt
2024-05-26 11:22:27,416 INFO [ctc_decode.py:669] For test-other, WER of different settings are:
ctc-decoding	5.14	best for test-other
2024-05-26 11:22:27,416 INFO [ctc_decode.py:924] Done!
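The "from 21 (excluded) to 50" range logged during model creation follows from the 'epoch': 50 and 'avg': 29 settings in the parameter dump: 50 - 29 = 21, so the 29 checkpoints epoch-22.pt through epoch-50.pt contribute to the averaged model. The sketch below is only an illustration of that window arithmetic, assuming icefall's usual epoch-N.pt file naming and a "model" entry in each checkpoint; with 'use_averaged_model': True, icefall actually combines its stored running parameter averages rather than taking the plain mean shown here.

```python
import torch

# Values taken from the parameter dump in the log above.
epoch, avg = 50, 29
start = epoch - avg  # 21 -> "over epoch range from 21 (excluded) to 50"
exp_dir = "zipformer/exp-ctc-0.1-aed-0.9-penalize-attn-large"

# epoch-22.pt .. epoch-50.pt (29 checkpoints); the file naming is an assumption.
ckpt_paths = [f"{exp_dir}/epoch-{e}.pt" for e in range(start + 1, epoch + 1)]

avg_state = None
for path in ckpt_paths:
    # Assumes the model weights live under the "model" key of each checkpoint.
    state = torch.load(path, map_location="cpu")["model"]
    if avg_state is None:
        avg_state = {
            k: v.clone().float() for k, v in state.items() if v.is_floating_point()
        }
    else:
        for k in avg_state:
            avg_state[k] += state[k].float()

for k in avg_state:
    avg_state[k] /= len(ckpt_paths)  # plain mean over the 29 checkpoints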
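The %WER lines report total errors over reference words, with the insertion/deletion/substitution breakdown in brackets. A quick arithmetic check against the counts logged above (standalone, no icefall code involved):

```python
# WER = (ins + del + sub) / reference words; "dele" because "del" is a Python keyword.
results = {
    "test-clean": dict(errs=1206, ref_words=52576, ins=119, dele=98, sub=989),
    "test-other": dict(errs=2688, ref_words=52343, ins=277, dele=209, sub=2202),
}
for name, r in results.items():
    assert r["ins"] + r["dele"] + r["sub"] == r["errs"]
    wer = 100.0 * r["errs"] / r["ref_words"]
    print(f"{name}: {wer:.2f}% WER")  # prints 2.29% and 5.14%, matching the log
```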