ThuraAung1601 committed
Commit 599aff7 · verified · 1 Parent(s): 10356bb

Upload 33 files

Files changed (33)
  1. version1/.DS_Store +0 -0
  2. version1/baseline_experiments/.DS_Store +0 -0
  3. version1/baseline_experiments/openNMT_baseline.ipynb +0 -0
  4. version1/baseline_experiments/yaml_files/.DS_Store +0 -0
  5. version1/baseline_experiments/yaml_files/yaml_original/syl.transformer.yaml +80 -0
  6. version1/baseline_experiments/yaml_files/yaml_original/syl_s2s.yaml +62 -0
  7. version1/baseline_experiments/yaml_files/yaml_original/word.transformer.yaml +78 -0
  8. version1/baseline_experiments/yaml_files/yaml_original/word_s2s.yaml +62 -0
  9. version1/baseline_experiments/yaml_files/yaml_pos/syl.transformer.yaml +95 -0
  10. version1/baseline_experiments/yaml_files/yaml_pos/syl_s2s.yaml +75 -0
  11. version1/baseline_experiments/yaml_files/yaml_pos/word.transformer.yaml +96 -0
  12. version1/baseline_experiments/yaml_files/yaml_pos/word_s2s.yaml +75 -0
  13. version1/myContradict_v1_word.txt +0 -0
  14. version1/syllable_segmented/syl_test.src.txt +0 -0
  15. version1/syllable_segmented/syl_test.tgt.txt +0 -0
  16. version1/syllable_segmented/syl_train.src.txt +0 -0
  17. version1/syllable_segmented/syl_train.tgt.txt +0 -0
  18. version1/syllable_segmented/syl_valid.src.txt +0 -0
  19. version1/syllable_segmented/syl_valid.tgt.txt +0 -0
  20. version1/with_POS_Tags/.DS_Store +0 -0
  21. version1/with_POS_Tags/syllable_level_segmentation/syl_test.src.txt.TAGGED.TAGGED +0 -0
  22. version1/with_POS_Tags/syllable_level_segmentation/syl_train.src.txt.TAGGED.TAGGED +0 -0
  23. version1/with_POS_Tags/syllable_level_segmentation/syl_valid.src.txt.TAGGED.TAGGED +0 -0
  24. version1/with_POS_Tags/word_level_segmentation/word_test.src.txt.TAGGED.TAGGED +0 -0
  25. version1/with_POS_Tags/word_level_segmentation/word_train.src.txt.TAGGED.TAGGED +0 -0
  26. version1/with_POS_Tags/word_level_segmentation/word_valid.src.txt.TAGGED.TAGGED +0 -0
  27. version1/word_segmented/.DS_Store +0 -0
  28. version1/word_segmented/word_test.src.txt +0 -0
  29. version1/word_segmented/word_test.tgt.txt +0 -0
  30. version1/word_segmented/word_train.src.txt +0 -0
  31. version1/word_segmented/word_train.tgt.txt +0 -0
  32. version1/word_segmented/word_valid.src.txt +0 -0
  33. version1/word_segmented/word_valid.tgt.txt +0 -0
version1/.DS_Store ADDED
Binary file (8.2 kB).
 
version1/baseline_experiments/.DS_Store ADDED
Binary file (6.15 kB).
 
version1/baseline_experiments/openNMT_baseline.ipynb ADDED
The diff for this file is too large to render.
 
version1/baseline_experiments/yaml_files/.DS_Store ADDED
Binary file (6.15 kB).
 
version1/baseline_experiments/yaml_files/yaml_original/syl.transformer.yaml ADDED
@@ -0,0 +1,77 @@
+ # syl.transformer.yaml
+
+ ## Where the samples will be written
+ save_data: ./transformer_syl
+
+ # Allow overwriting existing files in the folder
+ overwrite: True
+
+ ## Where the vocab(s) will be written
+ src_vocab: ./transformer_syl/transformer_syl.vocab.src
+ tgt_vocab: ./transformer_syl/transformer_syl.vocab.tgt
+ vocab_size_multiple: 8
+ src_words_min_frequency: 1
+ tgt_words_min_frequency: 1
+ share_vocab: True
+ n_sample: 0
+
+ #### Filter
+ src_seq_length: 200
+ tgt_seq_length: 200
+
+ # Corpus opts:
+ data:
+     train:
+         path_src: ./parallel_pseudo/sT/myPOS_transformer_syl.src.txt
+         path_tgt: ./parallel_pseudo/sT/myPOS_transformer_syl.tgt.txt
+     valid:
+         path_src: ./parallel_pseudo/original/syl_valid.src.txt
+         path_tgt: ./parallel_pseudo/original/syl_valid.tgt.txt
+
+
+ # Training
+
+ save_model: ./transformer_syl/transformer_syl
+
+ log_file: ./transformer_syl/log_3k_transformer_syl_ST.txt
+
+ # Stop training if it does not improve after n validations
+ early_stopping: 10
+
+ # Save a model checkpoint every n steps (default: 5000)
+ # save_checkpoint_steps: 1000
+
+ decoder_type: transformer
+ encoder_type: transformer
+ word_vec_size: 512
+ hidden_size: 512
+ enc_layers: 6
+ dec_layers: 6
+ transformer_ff: 512
+ heads: 8
+
+ accum_count: 4
+
+ # Optimization
+ learning_rate: 0.1
+
+ batch_size: 64
+ batch_type: tokens
+ normalization: tokens
+ dropout_steps: [0]
+ dropout: [0.1]
+ attention_dropout: [0.1]
+ position_encoding: true
+ label_smoothing: 0.1
+
+ max_generator_batches: 2
+
+ param_init: 0.0
+ param_init_glorot: true
+
+ world_size: 1
+ ## Run on GPU 0
+ gpu_ranks: [0]
+
+ # Number of training iterations
+ train_steps: 30000
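
Note: the corpus and vocab paths in these configs are relative, so a bad path only surfaces once vocab building starts. The script below is a minimal pre-flight sketch, not part of this commit; it assumes PyYAML is installed (pip install pyyaml) and checks that every data file the config references actually exists.

# check_config.py -- minimal sanity check for one of the configs above.
import sys
from pathlib import Path

import yaml

cfg = yaml.safe_load(Path(sys.argv[1]).read_text(encoding="utf-8"))
# Each corpus entry ("train", "valid", ...) should point at existing files.
for name, corpus in cfg["data"].items():
    for side in ("path_src", "path_tgt"):
        path = Path(corpus[side])
        print(f"{name}.{side}: {path} [{'ok' if path.is_file() else 'MISSING'}]")

Run it from the training directory, e.g. python check_config.py syl.transformer.yaml.
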
version1/baseline_experiments/yaml_files/yaml_original/syl_s2s.yaml ADDED
@@ -0,0 +1,61 @@
+ # syl_s2s.yaml
+
+ # Vocab Building
+ ## Where the samples will be written
+ save_data: ./s2s_syl
+
+ ## Where the vocab(s) will be written
+ src_vocab: ./s2s_syl/syl_s2s.vocab.src
+ tgt_vocab: ./s2s_syl/syl_s2s.vocab.tgt
+ share_vocab: True
+
+ # Allow overwriting existing files in the folder
+ overwrite: True
+
+ # Corpus opts:
+ data:
+     train:
+         path_src: ./parallel_pseudo/sT/s2s_syl.src.txt
+         path_tgt: ./parallel_pseudo/sT/s2s_syl.tgt.txt
+     valid:
+         path_src: ./parallel_pseudo/original/syl_valid.src.txt
+         path_tgt: ./parallel_pseudo/original/syl_valid.tgt.txt
+
+ # Increase sequence length
+ src_seq_length: 200
+ tgt_seq_length: 200
+
+ # Training
+
+ save_model: ./s2s_syl/s2s_syl
+
+ log_file: ./s2s_syl/log_3k_s2s_syl_ST.txt
+
+ # Stop training if it does not improve after n validations
+ early_stopping: 10
+
+ # Save a model checkpoint every n steps (default: 5000)
+ # save_checkpoint_steps: 10000
+
+ # To save space, keep only the last n checkpoints
+ keep_checkpoint: 5
+
+ # Optimization
+ learning_rate: 0.1
+
+ batch_size: 64
+ dropout: [0.3] # LSTM models often use higher dropout rates
+
+ # Train on a single GPU
+ world_size: 1
+ gpu_ranks: [0]
+
+ # Number of training iterations
+ train_steps: 30000
+
+ # Model parameters for the RNN (LSTM)
+ encoder_type: brnn
+ decoder_type: rnn
+ rnn_type: LSTM
+ rnn_size: 512 # Adjust this based on your model size and complexity
+ rnn_num_layers: 2 # Adjust this based on model architecture
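
Note: src_seq_length / tgt_seq_length cap examples at 200 tokens during preprocessing. A quick way to see how much of the corpus that filter would touch is the sketch below (not part of this commit); the paths are the training pair from the config above, and the token counts assume whitespace-segmented text, which matches the syllable/word files here.

# Count sentence pairs that exceed the 200-token filter on either side.
from pathlib import Path

LIMIT = 200
src = Path("./parallel_pseudo/sT/s2s_syl.src.txt")
tgt = Path("./parallel_pseudo/sT/s2s_syl.tgt.txt")

with src.open(encoding="utf-8") as fs, tgt.open(encoding="utf-8") as ft:
    dropped = sum(
        1
        for s, t in zip(fs, ft)
        if len(s.split()) > LIMIT or len(t.split()) > LIMIT
    )
print(f"pairs over {LIMIT} tokens on either side: {dropped}")
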
version1/baseline_experiments/yaml_files/yaml_original/word.transformer.yaml ADDED
@@ -0,0 +1,75 @@
+ # word.transformer.yaml
+
+ ## Where the samples will be written
+ save_data: ./word_transformer
+
+ # Allow overwriting existing files in the folder
+ overwrite: True
+
+ ## Where the vocab(s) will be written
+ src_vocab: ./word_transformer/word_transformer.vocab.src
+ tgt_vocab: ./word_transformer/word_transformer.vocab.tgt
+ vocab_size_multiple: 8
+ src_words_min_frequency: 1
+ tgt_words_min_frequency: 1
+ share_vocab: True
+ n_sample: 0
+
+ #### Filter
+ src_seq_length: 200
+ tgt_seq_length: 200
+
+ # Corpus opts:
+ data:
+     train:
+         path_src: ./parallel_pseudo/sT/myPOS_transformer_word.train.src.txt
+         path_tgt: ./parallel_pseudo/sT/myPOS_transformer_word.train.tgt.txt
+     valid:
+         path_src: ./parallel_pseudo/original/word_valid.src.txt
+         path_tgt: ./parallel_pseudo/original/word_valid.tgt.txt
+
+ # Model Configuration
+
+ save_model: ./word_transformer/word_transformer
+ log_file: ./word_transformer/log_3k_word_transformer_ST.txt
+
+ # Stop training if it does not improve after n validations
+ early_stopping: 10
+
+ # Save a model checkpoint every n steps (default: 5000)
+ # save_checkpoint_steps: 1000
+
+ decoder_type: transformer
+ encoder_type: transformer
+ word_vec_size: 512
+ hidden_size: 512
+ enc_layers: 6
+ dec_layers: 6
+ transformer_ff: 512
+ heads: 8
+
+ accum_count: 4
+
+ # Optimization
+ learning_rate: 0.1
+
+ batch_size: 64
+ batch_type: tokens
+ normalization: tokens
+ dropout_steps: [0]
+ dropout: [0.1]
+ attention_dropout: [0.1]
+ position_encoding: true
+ label_smoothing: 0.1
+
+ max_generator_batches: 2
+
+ param_init: 0.0
+ param_init_glorot: true
+
+ world_size: 1
+ ## Run on GPU 0
+ gpu_ranks: [0]
+
+ # Number of training iterations
+ train_steps: 30000
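
Note: share_vocab: True builds one vocabulary over both sides, and vocab_size_multiple: 8 rounds its size up to a multiple of 8 (commonly done for efficiency on fp16 hardware). To gauge the raw shared vocabulary before building, a small sketch (not part of this commit; assumes the corpora are whitespace-tokenized):

# Estimate the shared vocabulary over the training pair from the config above.
from collections import Counter

counter = Counter()
for path in ("./parallel_pseudo/sT/myPOS_transformer_word.train.src.txt",
             "./parallel_pseudo/sT/myPOS_transformer_word.train.tgt.txt"):
    with open(path, encoding="utf-8") as f:
        for line in f:
            counter.update(line.split())
print(f"{len(counter)} unique tokens; most common: {counter.most_common(5)}")
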
version1/baseline_experiments/yaml_files/yaml_original/word_s2s.yaml ADDED
@@ -0,0 +1,61 @@
+ # word_s2s.yaml
+
+ # Vocab Building
+ ## Where the samples will be written
+ save_data: ./s2s_word
+
+ ## Where the vocab(s) will be written
+ src_vocab: ./s2s_word/word_s2s.vocab.src
+ tgt_vocab: ./s2s_word/word_s2s.vocab.tgt
+ share_vocab: True
+
+ # Allow overwriting existing files in the folder
+ overwrite: True
+
+ # Corpus opts:
+ data:
+     train:
+         path_src: ./parallel_pseudo/sT/s2s_word.train.src.txt
+         path_tgt: ./parallel_pseudo/sT/s2s_word.train.tgt.txt
+     valid:
+         path_src: ./parallel_pseudo/original/word_valid.src.txt
+         path_tgt: ./parallel_pseudo/original/word_valid.tgt.txt
+
+ # Increase sequence length
+ src_seq_length: 200
+ tgt_seq_length: 200
+
+ # Training
+
+ save_model: ./s2s_word/s2s_word
+
+ log_file: ./s2s_word/log_3k_s2s_word_ST.txt
+
+ # Stop training if it does not improve after n validations
+ early_stopping: 10
+
+ # Save a model checkpoint every n steps (default: 5000)
+ # save_checkpoint_steps: 10000
+
+ # To save space, keep only the last n checkpoints
+ keep_checkpoint: 5
+
+ # Optimization
+ learning_rate: 0.1
+
+ batch_size: 64
+ dropout: [0.3] # LSTM models often use higher dropout rates
+
+ # Train on a single GPU
+ world_size: 1
+ gpu_ranks: [0]
+
+ # Number of training iterations
+ train_steps: 30000
+
+ # Model parameters for the RNN (LSTM)
+ encoder_type: brnn
+ decoder_type: rnn
+ rnn_type: LSTM
+ rnn_size: 512 # Adjust this based on your model size and complexity
+ rnn_num_layers: 2 # Adjust this based on model architecture
version1/baseline_experiments/yaml_files/yaml_pos/syl.transformer.yaml ADDED
@@ -0,0 +1,94 @@
+ # syl.transformer.yaml
+
+ ## Where the samples will be written
+ save_data: ./syl_transformer_final2
+
+ # Allow overwriting existing files in the folder
+ overwrite: True
+
+ ## Where the vocab(s) will be written
+ src_vocab: ./syl_transformer_final2/syl_transformer.vocab.src
+ tgt_vocab: ./syl_transformer_final2/syl_transformer.vocab.tgt
+ vocab_size_multiple: 8
+ src_words_min_frequency: 1
+ tgt_words_min_frequency: 1
+ share_vocab: True
+ n_sample: 0
+
+ #### Filter
+ src_seq_length: 200
+ tgt_seq_length: 200
+
+ # Corpus opts:
+ data:
+     train:
+         path_src: ./parallel_pseudo/sT/myPOS_transformer_syl_pos.train.src.txt
+         path_tgt: ./parallel_pseudo/sT/myPOS_transformer_syl_pos.train.tgt.txt
+         transforms: [inferfeats, filtertoolong]
+         weight: 1
+
+     valid:
+         path_src: ./parallel_pseudo/w_pos/syl_valid.src.txt.TAGGED.TAGGED
+         path_tgt: ./parallel_pseudo/original/syl_valid.tgt.txt
+         transforms: [inferfeats]
+
+ # Features options
+ n_src_feats: 1
+ feat_merge: "mlp"
+ feat_vec_size: 512
+ src_feats_defaults: "x"
+
+ # Transform options
+ reversible_tokenization: "joiner"
+
+ # Model Configuration
+
+ save_model: ./syl_transformer_final2/syl_transformer_final2
+ log_file: ./syl_transformer_final2/log_final.txt
+
+ # Stop training if it does not improve after n validations
+ early_stopping: 10
+
+ # Save a model checkpoint every n steps (default: 5000)
+ # save_checkpoint_steps: 1000
+
+ decoder_type: transformer
+ encoder_type: transformer
+ word_vec_size: 512
+ hidden_size: 512
+ enc_layers: 6
+ dec_layers: 6
+ transformer_ff: 2048
+ heads: 8
+
+ accum_count: 4
+
+ # Optimization
+ model_dtype: "fp16"
+ optim: adam
+ adam_beta1: 0.9
+ adam_beta2: 0.998
+ decay_method: noam
+ learning_rate: 0.1
+ max_grad_norm: 0.0
+
+ batch_size: 64
+ batch_type: tokens
+ normalization: tokens
+ dropout_steps: [0]
+ dropout: [0.1]
+ attention_dropout: [0.1]
+ position_encoding: true
+ label_smoothing: 0.1
+
+ max_generator_batches: 2
+
+ param_init: 0.0
+ param_init_glorot: true
+
+ world_size: 1
+ ## Run on GPU 0
+ gpu_ranks: [0]
+
+ # Number of training iterations
+ train_steps: 30000
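
Note: the four yaml_pos configs enable OpenNMT-py source features: n_src_feats: 1 declares one feature per source token, the inferfeats transform propagates features onto tokens that lack them, and src_feats_defaults: "x" supplies the fallback tag. OpenNMT-py expects each token and its feature joined by the ￨ separator (token￨TAG). The exact layout of the .TAGGED.TAGGED files is not shown in this commit, so the converter below is a hypothetical sketch assuming token/TAG input:

# Hypothetical converter from "token/TAG" pairs to OpenNMT-py's
# "token￨TAG" source-feature format. The input convention is an
# assumption; adapt the split to the actual .TAGGED file layout.
SEP = "\uffe8"  # '￨', the OpenNMT feature separator

def to_onmt_features(line: str, default_tag: str = "x") -> str:
    tokens = []
    for tok in line.split():
        word, slash, tag = tok.rpartition("/")
        if not slash:  # untagged token: fall back to src_feats_defaults
            word, tag = tok, default_tag
        tokens.append(f"{word}{SEP}{tag}")
    return " ".join(tokens)

print(to_onmt_features("this/DET is/VERB a/DET test/NOUN"))
# -> this￨DET is￨VERB a￨DET test￨NOUN
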
version1/baseline_experiments/yaml_files/yaml_pos/syl_s2s.yaml ADDED
@@ -0,0 +1,75 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # syl_s2s.yaml
2
+ ## Where the samples will be written
3
+
4
+ # Vocab Building
5
+ ## Where the samples will be written
6
+ save_data: ./s2s_syl_final2
7
+
8
+ ## Where the vocab(s) will be written
9
+ src_vocab: ./s2s_syl_final2/syl_s2s.vocab.src
10
+ tgt_vocab: ./s2s_syl_final2/syl_s2s.vocab.tgt
11
+ share_vocab: True
12
+
13
+ # Prevent overwriting existing files in the folder
14
+ overwrite: True
15
+
16
+ # Corpus opts:
17
+ data:
18
+ train:
19
+ path_src: ./parallel_pseudo/sT/s2s_syl_pos.train.src.txt
20
+ path_tgt: ./parallel_pseudo/sT/s2s_syl_pos.train.tgt.txt
21
+ transforms: [inferfeats, filtertoolong]
22
+ weight: 1
23
+
24
+ valid:
25
+ path_src: ./parallel_pseudo/w_pos/syl_valid.src.txt.TAGGED.TAGGED
26
+ path_tgt: ./parallel_pseudo/original/syl_valid.tgt.txt
27
+ transforms: [inferfeats]
28
+
29
+ # Features options
30
+ n_src_feats: 1
31
+ feat_merge: "mlp"
32
+ feat_vec_size: 512
33
+ src_feats_defaults: "x"
34
+
35
+ # Transform options
36
+ reversible_tokenization: "joiner"
37
+
38
+ # Increase sequence length
39
+ src_seq_length: 200
40
+ tgt_seq_length: 200
41
+
42
+ # Training
43
+
44
+ save_model: ./s2s_syl_final2/s2s_syl_final2
45
+
46
+ log_file: ./s2s_syl_final2/log_final.txt
47
+
48
+ # Stop training if it does not improve after n validations
49
+ early_stopping: 10
50
+
51
+ # Default: 5000 - Save a model checkpoint for each n
52
+ # save_checkpoint_steps: 1000
53
+
54
+ # To save space, limit checkpoints to last n
55
+ keep_checkpoint: 5
56
+
57
+ # Optimization
58
+ learning_rate: 0.1
59
+
60
+ batch_size: 64
61
+ dropout: [0.3] # LSTM models often use higher dropout rates
62
+
63
+ # Train on a single GPU
64
+ world_size: 1
65
+ gpu_ranks: [0]
66
+
67
+ # Number of training iterations
68
+ train_steps: 30000
69
+
70
+ # Model parameters for RNN LSTM
71
+ encoder_type: brnn
72
+ decoder_type: rnn
73
+ rnn_type: LSTM
74
+ rnn_size: 512 # Adjust this based on your model size and complexity
75
+ rnn_num_layers: 3 # Adjust this based on model architecture
version1/baseline_experiments/yaml_files/yaml_pos/word.transformer.yaml ADDED
@@ -0,0 +1,94 @@
+ # word.transformer.yaml
+
+ ## Where the samples will be written
+ save_data: ./word_transformer_final2
+
+ # Allow overwriting existing files in the folder
+ overwrite: True
+
+ ## Where the vocab(s) will be written
+ src_vocab: ./word_transformer_final2/word_transformer.vocab.src
+ tgt_vocab: ./word_transformer_final2/word_transformer.vocab.tgt
+ vocab_size_multiple: 8
+ src_words_min_frequency: 1
+ tgt_words_min_frequency: 1
+ share_vocab: True
+ n_sample: 0
+
+ #### Filter
+ src_seq_length: 200
+ tgt_seq_length: 200
+
+ # Corpus opts:
+ data:
+     train:
+         path_src: ./parallel_pseudo/sT/myPOS_transformer_word_pos.train.src.txt
+         path_tgt: ./parallel_pseudo/sT/myPOS_transformer_word_pos.train.tgt.txt
+         transforms: [inferfeats, filtertoolong]
+         weight: 1
+
+     valid:
+         path_src: ./parallel_pseudo/w_pos/word_valid.src.txt.TAGGED.TAGGED
+         path_tgt: ./parallel_pseudo/original/word_valid.tgt.txt
+         transforms: [inferfeats]
+
+ # Features options
+ n_src_feats: 1
+ feat_merge: "mlp"
+ feat_vec_size: 512
+ src_feats_defaults: "x"
+
+ # Transform options
+ reversible_tokenization: "joiner"
+
+ # Model Configuration
+
+ save_model: ./word_transformer_final2/word_transformer_final2
+ log_file: ./word_transformer_final2/log_final.txt
+
+ # Stop training if it does not improve after n validations
+ early_stopping: 10
+
+ # Save a model checkpoint every n steps (default: 5000)
+ # save_checkpoint_steps: 1000
+
+ decoder_type: transformer
+ encoder_type: transformer
+ word_vec_size: 512
+ hidden_size: 512
+ enc_layers: 6
+ dec_layers: 6
+ transformer_ff: 2048
+ heads: 8
+
+ accum_count: 4
+
+ # Optimization
+ model_dtype: "fp16"
+ optim: adam
+ adam_beta1: 0.9
+ adam_beta2: 0.998
+ decay_method: noam
+ learning_rate: 0.1
+ max_grad_norm: 0.0
+
+ batch_size: 64
+ batch_type: tokens
+ normalization: tokens
+ dropout_steps: [0]
+ dropout: [0.1]
+ attention_dropout: [0.1]
+ position_encoding: true
+ label_smoothing: 0.1
+
+ max_generator_batches: 2
+
+ param_init: 0.0
+ param_init_glorot: true
+
+ world_size: 1
+ ## Run on GPU 0
+ gpu_ranks: [0]
+
+ # Number of training iterations
+ train_steps: 30000
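
Note: unlike the yaml_original configs, the yaml_pos transformer runs use decay_method: noam, so the effective rate warms up linearly and then decays with the inverse square root of the step, scaled by learning_rate and the model dimension. A sketch of the standard noam formula (warmup_steps is OpenNMT-py's option, assumed here at its usual default of 4000 since the config does not set it):

# The noam learning-rate schedule these transformer configs request.
def noam_lr(step: int, base_lr: float = 0.1, d_model: int = 512,
            warmup_steps: int = 4000) -> float:
    # Linear warmup, then inverse-square-root decay.
    return base_lr * d_model ** -0.5 * min(step ** -0.5,
                                           step * warmup_steps ** -1.5)

for step in (100, 4000, 30000):
    print(step, round(noam_lr(step), 6))
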
version1/baseline_experiments/yaml_files/yaml_pos/word_s2s.yaml ADDED
@@ -0,0 +1,74 @@
+ # word_s2s.yaml
+
+ # Vocab Building
+ ## Where the samples will be written
+ save_data: ./s2s_word_final2
+
+ ## Where the vocab(s) will be written
+ src_vocab: ./s2s_word_final2/word_s2s.vocab.src
+ tgt_vocab: ./s2s_word_final2/word_s2s.vocab.tgt
+ share_vocab: True
+
+ # Allow overwriting existing files in the folder
+ overwrite: True
+
+ # Corpus opts:
+ data:
+     train:
+         path_src: ./parallel_pseudo/sT/s2s_word_pos.train.src.txt
+         path_tgt: ./parallel_pseudo/sT/s2s_word_pos.train.tgt.txt
+         transforms: [inferfeats, filtertoolong]
+         weight: 1
+
+     valid:
+         path_src: ./parallel_pseudo/w_pos/word_valid.src.txt.TAGGED.TAGGED
+         path_tgt: ./parallel_pseudo/original/word_valid.tgt.txt
+         transforms: [inferfeats]
+
+ # Features options
+ n_src_feats: 1
+ feat_merge: "mlp"
+ feat_vec_size: 512
+ src_feats_defaults: "x"
+
+ # Transform options
+ reversible_tokenization: "joiner"
+
+ # Increase sequence length
+ src_seq_length: 200
+ tgt_seq_length: 200
+
+ # Training
+
+ save_model: ./s2s_word_final2/s2s_word_final2
+
+ log_file: ./s2s_word_final2/log_final.txt
+
+ # Stop training if it does not improve after n validations
+ early_stopping: 10
+
+ # Save a model checkpoint every n steps (default: 5000)
+ # save_checkpoint_steps: 1000
+
+ # To save space, keep only the last n checkpoints
+ keep_checkpoint: 5
+
+ # Optimization
+ learning_rate: 0.1
+
+ batch_size: 64
+ dropout: [0.3] # LSTM models often use higher dropout rates
+
+ # Train on a single GPU
+ world_size: 1
+ gpu_ranks: [0]
+
+ # Number of training iterations
+ train_steps: 30000
+
+ # Model parameters for the RNN (LSTM)
+ encoder_type: brnn
+ decoder_type: rnn
+ rnn_type: LSTM
+ rnn_size: 512 # Adjust this based on your model size and complexity
+ rnn_num_layers: 3 # Adjust this based on model architecture
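
Note: every config in this commit runs with the usual two-step OpenNMT-py workflow: build the vocabulary, then train. A minimal launcher sketch (assumes an OpenNMT-py installation providing the onmt_build_vocab and onmt_train console scripts, run from the directory the relative parallel_pseudo/ paths resolve against; -n_sample -1 overrides the config to use the full corpus when building the vocab):

# Two-step OpenNMT-py workflow for the configs in this commit.
import subprocess

CONFIG = "word_s2s.yaml"  # any config file from this commit

subprocess.run(["onmt_build_vocab", "-config", CONFIG, "-n_sample", "-1"],
               check=True)
subprocess.run(["onmt_train", "-config", CONFIG], check=True)
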
version1/myContradict_v1_word.txt ADDED
The diff for this file is too large to render.
 
version1/syllable_segmented/syl_test.src.txt ADDED
The diff for this file is too large to render.
 
version1/syllable_segmented/syl_test.tgt.txt ADDED
The diff for this file is too large to render.
 
version1/syllable_segmented/syl_train.src.txt ADDED
The diff for this file is too large to render.
 
version1/syllable_segmented/syl_train.tgt.txt ADDED
The diff for this file is too large to render.
 
version1/syllable_segmented/syl_valid.src.txt ADDED
The diff for this file is too large to render.
 
version1/syllable_segmented/syl_valid.tgt.txt ADDED
The diff for this file is too large to render.
 
version1/with_POS_Tags/.DS_Store ADDED
Binary file (6.15 kB).
 
version1/with_POS_Tags/syllable_level_segmentation/syl_test.src.txt.TAGGED.TAGGED ADDED
The diff for this file is too large to render.
 
version1/with_POS_Tags/syllable_level_segmentation/syl_train.src.txt.TAGGED.TAGGED ADDED
The diff for this file is too large to render.
 
version1/with_POS_Tags/syllable_level_segmentation/syl_valid.src.txt.TAGGED.TAGGED ADDED
The diff for this file is too large to render.
 
version1/with_POS_Tags/word_level_segmentation/word_test.src.txt.TAGGED.TAGGED ADDED
The diff for this file is too large to render.
 
version1/with_POS_Tags/word_level_segmentation/word_train.src.txt.TAGGED.TAGGED ADDED
The diff for this file is too large to render.
 
version1/with_POS_Tags/word_level_segmentation/word_valid.src.txt.TAGGED.TAGGED ADDED
The diff for this file is too large to render.
 
version1/word_segmented/.DS_Store ADDED
Binary file (6.15 kB).
 
version1/word_segmented/word_test.src.txt ADDED
The diff for this file is too large to render.
 
version1/word_segmented/word_test.tgt.txt ADDED
The diff for this file is too large to render.
 
version1/word_segmented/word_train.src.txt ADDED
The diff for this file is too large to render.
 
version1/word_segmented/word_train.tgt.txt ADDED
The diff for this file is too large to render.
 
version1/word_segmented/word_valid.src.txt ADDED
The diff for this file is too large to render.
 
version1/word_segmented/word_valid.tgt.txt ADDED
The diff for this file is too large to render.