zeroMN committed on
Commit 4c497df · verified · 1 Parent(s): acea4e9

Upload 37 files
.gitattributes CHANGED
@@ -57,3 +57,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ training/as_training.b5 filter=lfs diff=lfs merge=lfs -text
+ training/as_training.utf8 filter=lfs diff=lfs merge=lfs -text
+ training/cityu_training.txt filter=lfs diff=lfs merge=lfs -text
+ training/msr_training.txt filter=lfs diff=lfs merge=lfs -text
+ training/msr_training.utf8 filter=lfs diff=lfs merge=lfs -text
+ training/pku_training.txt filter=lfs diff=lfs merge=lfs -text
README ADDED
@@ -0,0 +1,68 @@
+ 2nd International Chinese Word Segmentation Bakeoff - Data Release
+ Release 1, 2005-11-18
+
+ * Introduction
+
+ This directory contains the training, test, and gold-standard data
+ used in the 2nd International Chinese Word Segmentation Bakeoff. Also
+ included are the script used to score the results submitted by the
+ bakeoff participants and the simple segmenter used to generate the
+ baseline and topline data.
+
+ * File List
+
+ gold/      Contains the gold-standard segmentation of the test data
+            along with the training-data word lists.
+
+ scripts/   Contains the scoring script and simple segmenter.
+
+ testing/   Contains the unsegmented test data.
+
+ training/  Contains the segmented training data.
+
+ doc/       Contains the instructions used in the bakeoff.
+
+ * Encoding Issues
+
+ Files with the extension ".utf8" are encoded in UTF-8 Unicode.
+
+ Files with the extension ".txt" are encoded as follows:
+
+ as_     Big Five (CP950)
+ cityu_  Big Five/HKSCS
+ msr_    EUC-CN (CP936)
+ pku_    EUC-CN (CP936)
+
+ EUC-CN is often called "GB" or "GB2312" encoding, though technically
+ GB2312 is a character set, not a character encoding.
+
+ * Scoring
+
+ The script 'score' is used to compare two segmentations. The
+ script takes three arguments:
+
+ 1. The training set word list
+ 2. The gold standard segmentation
+ 3. The segmented test file
+
+ You must not mix character encodings when invoking the scoring
+ script. For example:
+
+ % perl scripts/score gold/cityu_training_words.utf8 \
+     gold/cityu_test_gold.utf8 test_segmentation.utf8 > score.utf8
+
+ * Licensing
+
+ The corpora have been made available by the providers for the purposes
+ of this competition only. By downloading the training and testing
+ corpora, you agree that you will not use these corpora for any other
+ purpose than as material for this competition. Petitions to use the
+ data for any other purpose MUST be directed to the original providers
+ of the data. Neither SIGHAN nor the ACL will assume any liability for
+ a participant's misuse of the data.
+
+ * Questions?
+
+ Questions or comments about these data can be sent to Tom Emerson,
+
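The encoding table above can be turned into a small helper for opening the data files programmatically. This is an illustrative sketch, not part of the release: the Python codec names "big5", "big5hkscs" and "gbk" are my assumed equivalents for Big Five (CP950), Big Five/HKSCS and EUC-CN (CP936), and the mapping keys follow the actual file-name prefixes in this release.

```python
# Hypothetical helper (not part of the bakeoff tooling): map a bakeoff
# data file name to a Python codec name.
PREFIX_CODECS = {
    "as": "big5",          # Academia Sinica: Big Five (CP950)
    "cityu": "big5hkscs",  # City University of Hong Kong: Big Five/HKSCS
    "msr": "gbk",          # Microsoft Research: EUC-CN (CP936)
    "pku": "gbk",          # Peking University: EUC-CN (CP936)
}

def codec_for(filename):
    """Pick a codec from the file's corpus prefix and extension."""
    base = filename.rsplit("/", 1)[-1]
    if base.endswith(".utf8"):
        return "utf-8"
    return PREFIX_CODECS[base.split("_", 1)[0]]
```

A file such as training/pku_training.txt would then be read with, e.g., `open(path, encoding=codec_for(path))`.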
doc/instructions.txt ADDED
@@ -0,0 +1,165 @@
+ Second International Chinese Word Segmentation Bakeoff
+ Detailed Instructions
+
+ [Note: these were converted from HTML to text for this post-bakeoff
+ data release. No other changes were made.]
+
+ The following comprises the complete description of the training and
+ testing for the Second International Chinese Word Segmentation
+ Bakeoff. By participating in this competition, you are declaring that
+ you understand these descriptions, and that you agree to abide by the
+ specific terms as laid out below.
+
+ * Training: Description of Tracks
+
+ ** Dimension 1: Corpora
+
+ Four corpora are available for this bakeoff:
+
+ --------------------------------------------------------------------------------
+ Corpus              Encoding        Word      Words      Character  Characters
+                                     Types                Types
+ --------------------------------------------------------------------------------
+ Academia Sinica     Big Five Plus   141,340   5,449,698  6,117      8,368,050
+ CityU               Big Five/HKSCS   69,085   1,455,629  4,923      2,403,355
+ Peking University   CP936            55,303   1,109,947  4,698      1,826,448
+ Microsoft Research  CP936            88,119   2,368,391  5,167      4,050,469
+ --------------------------------------------------------------------------------
+
+ You may declare that you will return results on any subset of these
+ corpora. For example, you may decide that you will test on the Sinica
+ Corpus and the Beijing University corpus. The only constraint is that
+ you must not select a corpus where you have knowingly had previous
+ access to the testing portion of the corpus. A corollary of this is
+ that a team may not test on the data from their own institution.
+
+ ** Dimension 2: Open or Closed Test
+
+ You may decide to participate in either an open test or a closed
+ test, or both.
+
+ In the open test you will be allowed to train on the training set for
+ a particular corpus, and in addition you may use *any* other material,
+ including material from other training corpora, proprietary
+ dictionaries, material from the WWW and so forth.
+
+ If you elect the open test, you will be required, in the
+ two-page writeup of your results, to explain what percentage of your
+ correct/incorrect results came from which sources. For example, if you
+ score an F measure of 0.7 on words in the testing corpus that are
+ out-of-vocabulary with respect to the training corpus, you must
+ explain how you got that result: was it just because you have a
+ good-coverage dictionary, do you have a good unknown-word detection
+ algorithm, etc.?
+
+ In the closed test you may *only* use training
+ material from the training data for the particular corpus you are
+ testing on. No other material or knowledge is allowed, including
+ (but not limited to):
+
+ 1. Part-of-speech information
+ 2. Externally generated word-frequency counts
+ 3. Arabic and Chinese numbers
+ 4. Feature characters for place names
+ 5. Common Chinese surnames
+
+ ** Declaration
+
+ When you download the training corpora, you will be asked to register
+ and provide various information about your site, including the contact
+ person, and you will be asked to declare which tracks you will be
+ participating in.
+
+ ** Format of the data
+
+ Both training and testing data will be published in the original
+ coding schemes used by the data sources. Additionally it will be
+ transcoded by the organizers into Unicode UTF-8 (or, if provided in
+ Unicode, into the de facto encoding for the locale). The training data
+ will be formatted as follows:
+
+ 1. There will be one sentence per line.
+ 2. Words and punctuation symbols will be separated by spaces.
+ 3. There will be no further annotations, such as part-of-speech tags:
+    if the original corpus includes those, they will be removed.
+
+ ** Licensing
+
+ The corpora have been made available by the providers for the purposes
+ of this competition only. By downloading the training and testing
+ corpora, you agree that you will not use these corpora for any other
+ purpose than as material for this competition. Petitions to use the
+ data for any other purpose MUST be directed to the original providers
+ of the data. Neither SIGHAN nor the ACL will assume any liability for
+ a participant's misuse of the data.
+
+ * Testing
+
+ The test data will be available for each corpus at the website at
+ 12:00 GMT, July 27, 2005. The test data will be in the same format as
+ described for the training data, but of course spaces will be removed.
+
+ You will have roughly two days to process the data, format the
+ results and return them to the SIGHAN website. The final due
+ date/time is:
+
+ July 29, 2005, 12:00 GMT
+
+ Late submissions will not be scored.
+
+ The format of the result must adhere to the format
+ described for the training data. In particular, there must be one line
+ per sentence, and there must be the same number of lines in the
+ returned data as in the data available from the site. Segmented words
+ and punctuation must be separated by spaces, and there should be
+ no further annotations (e.g. part-of-speech tags) on
+ the segmented words. The data must be returned in the same
+ coding scheme as they were published in. (For example, if you
+ utilize the UTF-8 encoded version of the testing data, then the
+ results must be returned in UTF-8.) Participants are reminded that
+ ASCII character codes may occur in Chinese text to represent Latin
+ letters, numbers and so forth: such codes should be left in their
+ original coding scheme. Do not convert them to their GB/Big5
+ equivalents. Similarly, GB/Big5 codings of Latin letters or Arabic
+ numerals should be left in their original coding, and not converted to
+ ASCII.
+
+ The results will be scored completely automatically. The scripts that
+ were used to score will be made publicly available. The measures that
+ will be reported are precision, recall, and an evenly-weighted
+ F-measure. We will also report scores for in-vocabulary and
+ out-of-vocabulary words.
+
+ Note: by downloading the test material and submitting results on this
+ material you are thereby declaring that you have not previously seen
+ the test material for the given corpus.
+
+ You are also declaring that your testing will be fully automatic. This
+ means that any kind of manual intervention is disallowed, including,
+ but not limited to:
+
+ 1. Manual correction of the output of your segmentation.
+ 2. Prepopulating the dictionary with words derived by a manual
+    inspection of the test corpus.
+
+ * Results
+
+ Results will be provided in two phases: privately to individual
+ participants by August 5, 2005, then publicly to all participants and
+ to the community at large at the SIGHAN Workshop. By participating in
+ this contest, you are agreeing that the results of the test may be
+ published, including the names of the participants.
+
+ * Writeup
+
+ By electing to participate in any part of this contest, you are
+ agreeing to provide, by August 19, 2005, a two-page writeup that
+ briefly describes your segmentation system, and a summary of your
+ results. In the closed tests you may describe the technical details of
+ how you came by the particular results. In the open test you must
+ describe the technical details of how you came by the particular
+ results.
+
+ The format of the two-page paper must adhere to the style guidelines
+ for IJCNLP-05, except for the two-page limit and the submission via
+ the SIGHAN site.
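Given the data format described above (one sentence per line, words and punctuation separated by spaces), the corpus statistics in the table are straightforward to recompute. A minimal sketch, assuming the lines have already been decoded to Python strings; the function name and returned keys are mine, not part of the bakeoff tooling:

```python
from collections import Counter

def corpus_stats(lines):
    """Count word/character tokens and types in bakeoff-format
    training data: one sentence per line, whitespace-separated."""
    words = Counter()
    chars = Counter()
    for line in lines:
        for w in line.split():
            words[w] += 1
            chars.update(w)  # per-character counts within each token
    return {
        "word_types": len(words),
        "words": sum(words.values()),
        "char_types": len(chars),
        "chars": sum(chars.values()),
    }
```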
doc/result_instructions.txt ADDED
@@ -0,0 +1,97 @@
+ Bakeoff 2005 Result Submission Instructions
+
+ Thank you for participating in the 2nd International Chinese Word
+ Segmentation Bakeoff. Please read this message completely: it
+ contains very important information on result submission.
+
+ This message contains the instructions for submitting your
+ results so that they can be scored. I originally planned to
+ update the website to allow you to upload your results, but I
+ decided against this for two reasons:
+
+ 1) Many participants are running tracks that are different from
+    what they signed up for, or wish to submit alternate runs
+    within a track. Writing a web application to handle this
+    correctly and reliably is difficult, and errors cannot be
+    tolerated.
+
+ 2) Connectivity to the SIGHAN site has been problematic for some
+    participants, especially in China. I do not want to cause undue
+    stress on them if there are problems.
+
+ Therefore you should submit your results _by_email_ to me at this
+ address:
+
+
+ The subject of the message should be
+
+     Bakeoff 2005 Result Submission
+
+ The message should be sent by the primary investigator, i.e., the
+ one who registered for participation. This is how I will match
+ your submission to your registration.
+
+ Please use the following conventions:
+
+ 1) Submit a single archive file (.zip, .rar, .tar.gz, or
+    .tar.bz2) containing your output file(s). Do this even if you
+    are submitting a single result file.
+
+ 2) The archive should be named with the email user id of the
+    submitter. For example, "[email protected]" would
+    submit "tree.zip".
+
+ 3) Each test file should be named X_test_result_[OC]_Y.Z where
+
+    X is the name of the corpus: pku, cityu, as, msr
+    [OC] is whether the result is for the Open or Closed track
+    Y is an optional identifier (a lower-case letter) for
+      multiple runs of the system
+    Z is the file suffix, .txt or .utf8
+
+    For example, the results of running the UTF-8 encoded PKU
+    corpus in the closed track in a single run would be
+
+        pku_test_result_C.utf8
+
+    Running the CP950 version of the Academia Sinica corpus in the
+    open track with two alternate systems would be
+
+        as_test_result_O_a.txt
+        as_test_result_O_b.txt
+
+    Note that 'a' and 'b' are used to distinguish two separate runs
+    on this corpus.
+
+ 4) The deadline by which I will accept result submissions is
+    14:00 GMT on Friday, July 29. Results received after this time
+    will not be scored unless the team can provide me with a
+    *very* good reason to allow the extension (e.g., the building
+    burned down, the national computer network went down, etc.)
+
+    To minimize confusion, 14:00 GMT on 2005/07/29 is:
+
+    07:00 2005/07/29 in San Francisco
+    10:00 2005/07/29 in New York
+    22:00 2005/07/29 in Hong Kong
+    22:00 2005/07/29 in Beijing
+    22:00 2005/07/29 in Taipei
+    22:00 2005/07/29 in Singapore
+    23:00 2005/07/29 in Seoul
+    23:00 2005/07/29 in Tokyo
+
+ Answers to Some Common Questions
+
+ The AS testing data had the spaces between English words
+ inadvertently removed. You will *not* be penalized for not
+ segmenting English text.
+
+ The AS training data used full-width space (U+3000, Big Five
+ 0xA140) to separate tokens. You can use either the full-width
+ space or the normal ASCII space (0x20) in your result submission:
+ these are scored equally. PLEASE NOTE: this is the *only*
+ full-width/half-width normalization that is allowed! Do not
+ convert full-width punctuation or Latin characters to half-width;
+ these will *not* be scored correctly.
+
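The file-naming convention in item 3 can be captured with a regular expression. A hypothetical validator (the function and pattern names are mine; this is not part of the bakeoff tooling) might look like:

```python
import re

# X_test_result_[OC]_Y.Z as described in item 3: corpus name,
# Open/Closed track letter, optional run letter, .txt or .utf8 suffix.
RESULT_NAME = re.compile(
    r"^(?P<corpus>pku|cityu|as|msr)_test_result"
    r"_(?P<track>[OC])"
    r"(?:_(?P<run>[a-z]))?"
    r"\.(?P<suffix>txt|utf8)$"
)

def parse_result_name(name):
    """Return the name's components, or None if it is malformed."""
    m = RESULT_NAME.match(name)
    return m.groupdict() if m else None
```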
gold/as_testing_gold.txt ADDED
The diff for this file is too large to render. See raw diff
 
gold/as_testing_gold.utf8 ADDED
The diff for this file is too large to render. See raw diff
 
gold/as_training_words.txt ADDED
The diff for this file is too large to render. See raw diff
 
gold/as_training_words.utf8 ADDED
The diff for this file is too large to render. See raw diff
 
gold/cityu_test_gold.txt ADDED
The diff for this file is too large to render. See raw diff
 
gold/cityu_test_gold.utf8 ADDED
The diff for this file is too large to render. See raw diff
 
gold/cityu_training_words.txt ADDED
The diff for this file is too large to render. See raw diff
 
gold/cityu_training_words.utf8 ADDED
The diff for this file is too large to render. See raw diff
 
gold/msr_test_gold.txt ADDED
The diff for this file is too large to render. See raw diff
 
gold/msr_test_gold.utf8 ADDED
The diff for this file is too large to render. See raw diff
 
gold/msr_training_words.txt ADDED
The diff for this file is too large to render. See raw diff
 
gold/msr_training_words.utf8 ADDED
The diff for this file is too large to render. See raw diff
 
gold/pku_test_gold.txt ADDED
The diff for this file is too large to render. See raw diff
 
gold/pku_test_gold.utf8 ADDED
The diff for this file is too large to render. See raw diff
 
gold/pku_training_words.txt ADDED
The diff for this file is too large to render. See raw diff
 
gold/pku_training_words.utf8 ADDED
The diff for this file is too large to render. See raw diff
 
scripts/mwseg.pl ADDED
@@ -0,0 +1,93 @@
+ #!/usr/bin/perl -w
+
+ ###########################################################################
+ #
+ #                                 SIGHAN
+ #                           Copyright (c) 2003
+ #                          All Rights Reserved.
+ #
+ # Permission is hereby granted, free of charge, to use and distribute
+ # this software and its documentation without restriction, including
+ # without limitation the rights to use, copy, modify, merge, publish,
+ # distribute, sublicense, and/or sell copies of this work, and to
+ # permit persons to whom this work is furnished to do so, subject to
+ # the following conditions:
+ #  1. The code must retain the above copyright notice, this list of
+ #     conditions and the following disclaimer.
+ #  2. Any modifications must be clearly marked as such.
+ #  3. Original authors' names are not deleted.
+ #  4. The authors' names are not used to endorse or promote products
+ #     derived from this software without specific prior written
+ #     permission.
+ #
+ # SIGHAN AND THE CONTRIBUTORS TO THIS WORK DISCLAIM ALL WARRANTIES
+ # WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF
+ # MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL SIGHAN NOR THE
+ # CONTRIBUTORS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL
+ # DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA
+ # OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
+ # TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
+ # PERFORMANCE OF THIS SOFTWARE.
+ #
+ ###########################################################################
+ #
+ # Author: Richard Sproat ([email protected])
+ #
+ ###########################################################################
+
+ $USAGE = "Usage:\t$0 dictionary\n\t";
+
+ if (@ARGV < 1) {print "$USAGE\n"; exit;}
+
+ %dict = ();
+ $maxwlen = 0;
+
+ # Read the dictionary (one word per line); track the longest entry.
+ open (S, $ARGV[0]) or die "$ARGV[0]: $!\n";
+ while (<S>) {
+   chop;
+   $dict{$_} = 1;
+   my $l = length($_);
+   $maxwlen = $l if $l > $maxwlen;
+ }
+ close (S);
+
+ shift @ARGV;
+
+ # Greedy left-to-right maximum matching over the remaining input.
+ $n = 0;
+ while (<>) {
+   chop;
+   s/\s*//g;
+   my $text = $_;
+   while ($text ne "") {
+     $sub = substr($text, 0, $maxwlen);
+     # Shrink the candidate from the right until it is in the dictionary.
+     while ($sub ne "") {
+       if ($dict{$sub}) {
+         print "$sub ";
+         for (my $i = 0; $i < length($sub); ++$i) {
+           $text =~ s/^.//;
+         }
+         last;
+       }
+       $sub =~ s/.$//;
+     }
+     # No dictionary match: emit a single ASCII character, or a
+     # single two-byte (Big5/GB) character.
+     if ($sub eq "") {
+       if ($text =~ /^([\x21-\x7e])/) {
+         print "$1 ";
+         $text =~ s/^.//;
+       }
+       elsif ($text =~ /^([^\x21-\x7e].)/) {
+         print "$1 ";
+         $text =~ s/^..//;
+       }
+       else { ## shouldn't happen
+         print STDERR "Oops: shouldn't be here: $n\n";
+         print "$1 ";
+         $text =~ s/^.//;
+       }
+     }
+   }
+   print "\n";
+   ++$n;
+ }
+
+ exit(0);
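mwseg.pl above implements greedy left-to-right maximum matching: at each position, take the longest dictionary word; otherwise emit one character and move on. The same idea as a Python sketch (operating on Python characters rather than the raw two-byte Big5/GB units the Perl script consumes; the function name is mine):

```python
def max_match(text, dictionary, maxwlen):
    """Greedy maximum-matching segmentation of `text` using a set of
    dictionary words, mirroring the logic of mwseg.pl."""
    out = []
    i = 0
    while i < len(text):
        # Try the longest candidate first, shrinking from the right.
        for l in range(min(maxwlen, len(text) - i), 0, -1):
            if text[i:i + l] in dictionary:
                out.append(text[i:i + l])
                i += l
                break
        else:
            # No dictionary match: emit a single character.
            out.append(text[i])
            i += 1
    return out
```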
scripts/score ADDED
@@ -0,0 +1,205 @@
+ #!/usr/bin/perl -w
+
+ ###########################################################################
+ #
+ #                                 SIGHAN
+ #                         Copyright (c) 2003,2005
+ #                          All Rights Reserved.
+ #
+ # Permission is hereby granted, free of charge, to use and distribute
+ # this software and its documentation without restriction, including
+ # without limitation the rights to use, copy, modify, merge, publish,
+ # distribute, sublicense, and/or sell copies of this work, and to
+ # permit persons to whom this work is furnished to do so, subject to
+ # the following conditions:
+ #  1. The code must retain the above copyright notice, this list of
+ #     conditions and the following disclaimer.
+ #  2. Any modifications must be clearly marked as such.
+ #  3. Original authors' names are not deleted.
+ #  4. The authors' names are not used to endorse or promote products
+ #     derived from this software without specific prior written
+ #     permission.
+ #
+ # SIGHAN AND THE CONTRIBUTORS TO THIS WORK DISCLAIM ALL WARRANTIES
+ # WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF
+ # MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL SIGHAN NOR THE
+ # CONTRIBUTORS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL
+ # DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA
+ # OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
+ # TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
+ # PERFORMANCE OF THIS SOFTWARE.
+ #
+ ###########################################################################
+ #
+ # Authors: Richard Sproat ([email protected])
+ #          Tom Emerson ([email protected])
+ #
+ ###########################################################################
+
+ ## This code depends upon a version of diff (e.g. GNU diffutils 2.7.2)
+ ## that supports the -y flag:
+ ##
+ ##   -y  Use the side by side output format.
+ ##
+ ## Change the following per your installation:
+
+ $diff = "/usr/bin/diff";
+
+ $USAGE = "Usage:\t$0 dictionary truth test\n\t";
+
+ if (@ARGV != 3) {print "$USAGE\n"; exit;}
+
+ $tmp1 = "/tmp/comp01$$";
+ $tmp2 = "/tmp/comp02$$";
+
+ %dict = ();
+
+ # Read the training-set word list.
+ open (S, $ARGV[0]) or die "$ARGV[0]: $!\n";
+
+ while (<S>) {
+   chop;
+   s/^\s*//;
+   s/\s*$//;
+   $dict{$_} = 1;
+ }
+
+ close(S);
+
+ open (TRUTH, $ARGV[1]) or die "$ARGV[1]: $!\n";
+ open (TEST, $ARGV[2]) or die "$ARGV[2]: $!\n";
+
+ $Tot = $Del = $Ins = $Subst = $Truecount = $Testcount = 0;
+ $RawRecall = $RawPrecision = 0;
+
+ $linenum = 0;
+
+ $IVMISSED = $OOVMISSED = $OOV = $IV = 0;
+
+ $file1 = $ARGV[1];
+ $file2 = $ARGV[2];
+ $file1 =~ s=^/.*/==;
+ $file2 =~ s=^/.*/==;
+
+ while (defined($truth = <TRUTH>) && defined($test = <TEST>)) {
+   $truth =~ s/^\s*//;
+   $test =~ s/^\s*//;
+   $truth =~ s/\s*$//;
+   $test =~ s/\s*$//;
+   # Normalize full-width spaces (UTF-8 and Big Five byte sequences).
+   $truth =~ s/(\xe3\x80\x80)|(\xa1\x40)/ /g;
+   $test =~ s/(\xe3\x80\x80)|(\xa1\x40)/ /g;
+   # Strip carriage returns.
+   $truth =~ s/\r//g;
+   $test =~ s/\r//g;
+   @truthwords = split /\s+/, $truth;
+   @testwords = split /\s+/, $test;
+   $truecount = scalar(@truthwords);
+   $testcount = scalar(@testwords);
+   ++$linenum;
+   if ($truecount == 0) {
+     if ($testcount > 0) {
+       print STDERR "Warning: training is 0 but test is nonzero, possible misalignment at line $linenum.\n";
+     }
+     next;
+   }
+   if ($testcount == 0) {
+     print STDERR "Warning: No output in test data where there is in training data, line $linenum\n";
+   }
+   # Write one word per line and align gold vs. test with side-by-side diff.
+   open (T1, ">$tmp1") or die "Can't open $tmp1";
+   open (T2, ">$tmp2") or die "Can't open $tmp2";
+   foreach my $w (@truthwords) { print T1 "$w\n"; }
+   foreach my $w (@testwords) { print T2 "$w\n"; }
+   close (T1);
+   close (T2);
+   open (P, "$diff -y $tmp1 $tmp2 |")
+       or die "Can't open pipe.\n";
+   print "--$file1-------$file2----$linenum\n";
+   my $del = 0;
+   my $ins = 0;
+   my $subst = 0;
+   my $rawrecall = 0;
+   my $rawprecision = 0;
+   while (<P>) {
+     my $err = 0;
+     if    (/\s\|\s/) { $subst++; $err++; }
+     elsif (/\s\>\s/) { $ins++;   $err++; }
+     elsif (/\s\<\s/) { $del++;   $err++; }
+     if (/^([^\s]+)\s/) {
+       my $w = $1;
+       if (!$dict{$w}) { ++$OOV; }
+       else            { ++$IV; }
+       if (/^[^\s]+\s.*\s[\|\>\<]\s/) {
+         if (!$dict{$w}) { ++$OOVMISSED; }
+         else            { ++$IVMISSED; }
+         ++$rawrecall;
+       }
+     }
+     if (/\s[\|\>\<]\s.*[^\s]$/) { ++$rawprecision; }
+     print "$_";
+   }
+   close (P);
+   my $tot = $del + $ins + $subst;
+   $Tot += $tot;
+   $Del += $del;
+   $Ins += $ins;
+   $Subst += $subst;
+   $Truecount += $truecount;
+   $Testcount += $testcount;
+   $rawrecall = $truecount - $rawrecall;
+   $rawprecision = $testcount - $rawprecision;
+   $RawRecall += $rawrecall;
+   $RawPrecision += $rawprecision;
+   $rawrecall = sprintf("%2.3f", $rawrecall/$truecount);
+   $rawprecision = sprintf("%2.3f", $rawprecision/$testcount);
+   print "INSERTIONS:\t$ins\n";
+   print "DELETIONS:\t$del\n";
+   print "SUBSTITUTIONS:\t$subst\n";
+   print "NCHANGE:\t$tot\n";
+   print "NTRUTH:\t$truecount\n";
+   print "NTEST:\t$testcount\n";
+   print "TRUE WORDS RECALL:\t$rawrecall\n";
+   print "TEST WORDS PRECISION:\t$rawprecision\n";
+ }
+
+ close(TRUTH);
+ close(TEST);
+ unlink($tmp1);
+ unlink($tmp2);
+
+ print "=== SUMMARY:\n";
+ print "=== TOTAL INSERTIONS:\t$Ins\n";
+ print "=== TOTAL DELETIONS:\t$Del\n";
+ print "=== TOTAL SUBSTITUTIONS:\t$Subst\n";
+ print "=== TOTAL NCHANGE:\t$Tot\n";
+ print "=== TOTAL TRUE WORD COUNT:\t$Truecount\n";
+ print "=== TOTAL TEST WORD COUNT:\t$Testcount\n";
+ $RawRecall = $RawRecall/$Truecount;
+ $RawPrecision = $RawPrecision/$Testcount;
+ $beta = 1;
+ $R = $RawRecall;
+ $P = $RawPrecision;
+ $F = (1 + $beta)*$P*$R/($beta * $P + $R);
+ $F = sprintf("%2.3f", $F);
+ $RawRecall = sprintf("%2.3f", $RawRecall);
+ $RawPrecision = sprintf("%2.3f", $RawPrecision);
+ print "=== TOTAL TRUE WORDS RECALL:\t$RawRecall\n";
+ print "=== TOTAL TEST WORDS PRECISION:\t$RawPrecision\n";
+ print "=== F MEASURE:\t$F\n";
+ if ($OOV > 0) {
+   $OOVMISSED = sprintf("%2.3f", 1 - $OOVMISSED / $OOV);
+ }
+ else {
+   $OOVMISSED = "--";
+ }
+ $OOV = sprintf("%2.3f", $OOV / $Truecount);
+ if ($IV > 0) {
+   $IVMISSED = sprintf("%2.3f", 1 - $IVMISSED / $IV);
+ }
+ else {
+   $IVMISSED = "--";
+ }
+ print "=== OOV Rate:\t$OOV\n";
+ print "=== OOV Recall Rate:\t$OOVMISSED\n";
+ print "=== IV Recall Rate:\t$IVMISSED\n";
+
+ print "###\t$file2\t$Ins\t$Del\t$Subst\t$Tot\t$Truecount\t$Testcount\t$RawRecall\t$RawPrecision\t$F\t$OOV\t$OOVMISSED\t$IVMISSED\n";
+ exit(0);
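The summary block of the score script derives recall and precision from the counts of correctly segmented words, then combines them as F = (1 + beta) * P * R / (beta * P + R), which at beta = 1 is the evenly-weighted F-measure 2PR / (P + R). A standalone sketch of that final computation (argument names are mine):

```python
def prf(n_truth, n_test, recall_hits, precision_hits, beta=1.0):
    """Precision, recall and F-measure as in the score script's
    summary: recall_hits / precision_hits are the counts of
    correctly segmented words, measured against the gold (truth)
    and system (test) word totals."""
    r = recall_hits / n_truth
    p = precision_hits / n_test
    # Same formula as the script; beta = 1 gives 2PR / (P + R).
    f = (1 + beta) * p * r / (beta * p + r)
    return p, r, f
```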
testing/as_test.txt ADDED
The diff for this file is too large to render. See raw diff
 
testing/as_test.utf8 ADDED
The diff for this file is too large to render. See raw diff
 
testing/cityu_test.txt ADDED
The diff for this file is too large to render. See raw diff
 
testing/cityu_test.utf8 ADDED
The diff for this file is too large to render. See raw diff
 
testing/msr_test.txt ADDED
The diff for this file is too large to render. See raw diff
 
testing/msr_test.utf8 ADDED
The diff for this file is too large to render. See raw diff
 
testing/pku_test.txt ADDED
The diff for this file is too large to render. See raw diff
 
testing/pku_test.utf8 ADDED
The diff for this file is too large to render. See raw diff
 
training/as_training.b5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:19b05266bc1a8a020f51153ecceb38d5d3c517f5a468a085963bfe465d2ae882
+ size 27635392
training/as_training.utf8 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7169e008e88a434a1c21212d5e379a85a5d3a136679eabe3ae0dd677edfb26f0
+ size 40743877
training/cityu_training.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:774c5ee7cfcb8d01348bb65622fd635df8ed910b65d0eee4742b41e9e5bf9d72
+ size 6230851
training/cityu_training.utf8 ADDED
The diff for this file is too large to render. See raw diff
 
training/msr_training.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ab2390aad4c28934c282eb49a25ff7b210540fb6d70ec23f0b7af15926c3cd7
+ size 12842947
training/msr_training.utf8 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:872a24b0aa827fe5334370c5d9bf090902a948cc103c4775a3f61285c052e6ee
+ size 16891510
training/pku_training.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cfc95e1e1313b914ce3d4e7fcaca2ee918381c04546f85c50b67200579c1e807
+ size 5906861
training/pku_training.utf8 ADDED
The diff for this file is too large to render. See raw diff