Column schema (type, plus the observed length range for string columns, value range for integer columns, or number of distinct values for categorical columns). A minimal loading and parsing sketch follows the data rows below.

- table_id_paper: string, length 15
- caption: string, length 14–1.88k
- row_header_level: int32, 1–9
- row_headers: large string, length 15–1.75k
- column_header_level: int32, 1–6
- column_headers: large string, length 7–1.01k
- contents: large string, length 18–2.36k
- metrics_loc: string, 2 distinct values
- metrics_type: large string, length 5–532
- target_entity: large string, length 2–330
- table_html_clean: large string, length 274–7.88k
- table_name: string, 9 distinct values
- table_id: string, 9 distinct values
- paper_id: string, length 8
- page_no: int32, 1–13
- dir: string, 8 distinct values
- description: large string, length 103–3.8k
- class_sentence: string, length 3–120
- sentences: large string, length 110–3.92k
- header_mention: string, length 12–1.8k
- valid: int32, 0–1

table_id_paper | caption | row_header_level | row_headers | column_header_level | column_headers | contents | metrics_loc | metrics_type | target_entity | table_html_clean | table_name | table_id | paper_id | page_no | dir | description | class_sentence | sentences | header_mention | valid
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
D16-1231table_3 | Benchmark results: accuracy on addressee-response selection (ADR-RES), addressee selection (ADR), and response selection (RES). Nc is the context window. Bolded are the best per column. | 3 | [['Chance', 'Nc', '-'], ['Chance', 'Nc', '5'], ['Baseline', 'Nc', '10'], ['Baseline', 'Nc', '15'], ['Baseline', 'Nc', '5'], ['Static', 'Nc', '10'], ['Static', 'Nc', '15'], ['Static', 'Nc', '5'], ['Dynamic', 'Nc', '10'], ['Dynamic', 'Nc', '15']] | 2 | [['RES-CAND = 2', 'ADR-RES'], ['RES-CAND = 2', 'ADR'], ['RES-CAND = 2', 'RES'], ['RES-CAND = 10', 'ADR-RES'], ['RES-CAND = 10', 'ADR'], ['RES-CAND = 10', 'RES']] | [['0.62', '1.24', '50.00', '0.12', '1.24', '10.00'], ['36.97', '55.73', '65.68', '16.34', '55.73', '28.19'], ['37.42', '55.63', '67.79', '16.11', '55.63', '29.48'], ['37.13', '55.62', '67.89', '15.44', '55.62', '29.19'], ['46.99', '60.39', '75.07', '21.98', '60.26', '33.27'], ['48.67', '60.97', '77.75', '23.31', '60.66', '35.91'], ['49.27', '61.95', '78.14', '23.49', '60.98', '36.58'], ['49.80', '63.19', '76.07', '23.72', '63.28', '33.62'], ['53.85', '66.94', '78.16', '25.95', '66.70', '36.14'], ['54.88', '68.54', '78.64', '27.19', '68.41', '36.93']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Static', 'Dynamic'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>RES-CAND = 2 || ADR-RES</th> <th>RES-CAND = 2 || ADR</th> <th>RES-CAND = 2 || RES</th> <th>RES-CAND = 10 || ADR-RES</th> <th>RES-CAND = 10 || ADR</th> <th>RES-CAND = 10 || RES</th> </tr> </thead> <tbody> <tr> <td>Chance || Nc || -</td> <td>0.62</td> <td>1.24</td> <td>50.00</td> <td>0.12</td> <td>1.24</td> <td>10.00</td> </tr> <tr> <td>Chance || Nc || 5</td> <td>36.97</td> <td>55.73</td> <td>65.68</td> <td>16.34</td> <td>55.73</td> <td>28.19</td> </tr> <tr> <td>Baseline || Nc || 10</td> <td>37.42</td> <td>55.63</td> <td>67.79</td> <td>16.11</td> <td>55.63</td> <td>29.48</td> </tr> <tr> <td>Baseline || Nc || 15</td> <td>37.13</td> <td>55.62</td> <td>67.89</td> <td>15.44</td> <td>55.62</td> <td>29.19</td> </tr> <tr> <td>Baseline || Nc || 5</td> <td>46.99</td> <td>60.39</td> <td>75.07</td> <td>21.98</td> <td>60.26</td> <td>33.27</td> </tr> <tr> <td>Static || Nc || 10</td> <td>48.67</td> <td>60.97</td> <td>77.75</td> <td>23.31</td> <td>60.66</td> <td>35.91</td> </tr> <tr> <td>Static || Nc || 15</td> <td>49.27</td> <td>61.95</td> <td>78.14</td> <td>23.49</td> <td>60.98</td> <td>36.58</td> </tr> <tr> <td>Static || Nc || 5</td> <td>49.80</td> <td>63.19</td> <td>76.07</td> <td>23.72</td> <td>63.28</td> <td>33.62</td> </tr> <tr> <td>Dynamic || Nc || 10</td> <td>53.85</td> <td>66.94</td> <td>78.16</td> <td>25.95</td> <td>66.70</td> <td>36.14</td> </tr> <tr> <td>Dynamic || Nc || 15</td> <td>54.88</td> <td>68.54</td> <td>78.64</td> <td>27.19</td> <td>68.41</td> <td>36.93</td> </tr> </tbody></table> | Table 3 | table_3 | D16-1231 | 8 | emnlp2016 | Table 3 shows the empirical benchmark results. The dynamic model achieves the best results in all the metrics. The static model outperforms the baseline, but is inferior to the dynamic model. In addressee selection (ADR), the baseline model achieves around 55% in accuracy. This means that if you select the agents that spoke most recently as an addressee, the half of them are correct. Compared with the baseline, our proposed models achieve better results, which suggests that the models can select the correct addressees that spoke at more previous time steps. 
In particular, the dynamic model achieves 68% in accuracy, which is 7 point higher than the accuracy of static model. In response selection (RES), our models outperform the baseline. Compared with the static model,the dynamic model achieves around 0.5 point higher in accuracy. | [1, 1, 1, 1, 2, 1, 1, 1, 1] | ['Table 3 shows the empirical benchmark results.', 'The dynamic model achieves the best results in all the metrics.', 'The static model outperforms the baseline, but is inferior to the dynamic model.', 'In addressee selection (ADR), the baseline model achieves around 55% in accuracy.', 'This means that if you select the agents that spoke most recently as an addressee, the half of them are correct.', 'Compared with the baseline, our proposed models achieve better results, which suggests that the models can select the correct addressees that spoke at more previous time steps.', 'In particular, the dynamic model achieves 68% in accuracy, which is 7 point higher than the accuracy of static model.', 'In response selection (RES), our models outperform the baseline.', 'Compared with the static model,the dynamic model achieves around 0.5 point higher in accuracy.'] | [None, ['Dynamic', 'ADR-RES', 'ADR', 'RES'], ['Baseline', 'Static', 'Dynamic'], ['ADR', 'Baseline'], None, ['Baseline'], ['Dynamic', 'ADR', 'Static'], ['RES', 'Baseline', 'Dynamic', 'Static'], ['Static', 'Dynamic', 'RES']] | 1 |
D16-1231table_4 | Performance comparison for different numbers of agents appearing in the context. The numbers are accuracies on the test set with the number of candidate responses CAND-RES = 2 and the context window Nc = 15. | 2 | [['ADR-RES', 'Baseline'], ['ADR-RES', 'Static'], ['ADR-RES', 'Dynamic'], ['ADR', 'Baseline'], ['ADR', 'Static'], ['ADR', 'Dynamic'], ['RES', 'Baseline'], ['RES', 'Static'], ['RES', 'Dynamic']] | 4 | [['No. of Agents', '2-5', 'No. of Samples', '3731'], ['No. of Agents', '6-10', 'No. of Samples', '5962'], ['No. of Agents', '11-15', 'No. of Samples', '5475'], ['No. of Agents', '16-20', 'No. of Samples', '4495'], ['No. of Agents', '21-30', 'No. of Samples', '5619'], ['No. of Agents', '31-100', 'No. of Samples', '7956'], ['No. of Agents', '101-305', 'No. of Samples', '18659']] | [['52.13', '43.51', '39.98', '42.96', '39.70', '36.55', '29.22'], ['64.17', '55.92', '50.72', '53.04', '48.69', '49.61', '42.86'], ['66.9', '57.73', '54.32', '55.64', '51.61', '55.88', '52.14'], ['84.94', '70.82', '62.14', '65.52', '58.89', '51.28', '41.47'], ['86.33', '74.37', '66.12', '68.54', '63.43', '59.24', '50.99'], ['87.64', '76.48', '69.99', '72.21', '66.90', '66.78', '62.11'], ['60.71', '61.24', '64.51', '65.58', '67.93', '71.66', '71.38'], ['73.6', '73.45', '74.54', '75.95', '75.17', '81.5', '81.6'], ['75.64', '74.12', '75.53', '75.17', '76.05', '81.96', '81.81']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Static', 'Dynamic'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>No. of Agents || 2-5 || No. of Samples || 3731</th> <th>No. of Agents || 6-10 || No. of Samples || 5962</th> <th>No. of Agents || 11-15 || No. of Samples || 5475</th> <th>No. of Agents || 16-20 || No. of Samples || 4495</th> <th>No. of Agents || 21-30 || No. of Samples || 5619</th> <th>No. of Agents || 31-100 || No. of Samples || 7956</th> <th>No. of Agents || 101-305 || No. 
of Samples || 18659</th> </tr> </thead> <tbody> <tr> <td>ADR-RES || Baseline</td> <td>52.13</td> <td>43.51</td> <td>39.98</td> <td>42.96</td> <td>39.70</td> <td>36.55</td> <td>29.22</td> </tr> <tr> <td>ADR-RES || Static</td> <td>64.17</td> <td>55.92</td> <td>50.72</td> <td>53.04</td> <td>48.69</td> <td>49.61</td> <td>42.86</td> </tr> <tr> <td>ADR-RES || Dynamic</td> <td>66.9</td> <td>57.73</td> <td>54.32</td> <td>55.64</td> <td>51.61</td> <td>55.88</td> <td>52.14</td> </tr> <tr> <td>ADR || Baseline</td> <td>84.94</td> <td>70.82</td> <td>62.14</td> <td>65.52</td> <td>58.89</td> <td>51.28</td> <td>41.47</td> </tr> <tr> <td>ADR || Static</td> <td>86.33</td> <td>74.37</td> <td>66.12</td> <td>68.54</td> <td>63.43</td> <td>59.24</td> <td>50.99</td> </tr> <tr> <td>ADR || Dynamic</td> <td>87.64</td> <td>76.48</td> <td>69.99</td> <td>72.21</td> <td>66.90</td> <td>66.78</td> <td>62.11</td> </tr> <tr> <td>RES || Baseline</td> <td>60.71</td> <td>61.24</td> <td>64.51</td> <td>65.58</td> <td>67.93</td> <td>71.66</td> <td>71.38</td> </tr> <tr> <td>RES || Static</td> <td>73.6</td> <td>73.45</td> <td>74.54</td> <td>75.95</td> <td>75.17</td> <td>81.5</td> <td>81.6</td> </tr> <tr> <td>RES || Dynamic</td> <td>75.64</td> <td>74.12</td> <td>75.53</td> <td>75.17</td> <td>76.05</td> <td>81.96</td> <td>81.81</td> </tr> </tbody></table> | Table 4 | table_4 | D16-1231 | 9 | emnlp2016 | To shed light on the relationship between the model performance and the number of agents in multi-party conversation, we investigate the effect of the number of agents participating in each context. Table 4 compares the performance of the models for different numbers of agents in a context. In addressee selection, the performance of all models gradually gets worse as the number of agents in the context increases. However, compared with the baseline, our proposed models suppress the performance degradation. In particular, the dynamic model predicts correct addressees most robustly. In response selection, unexpectedly, the performance of all the models gets better as the number of agents increases. Detailed investigation on the interaction between the number of agents and the response selection complexity is an interesting line of future work. | [2, 1, 1, 1, 1, 1, 0] | ['To shed light on the relationship between the model performance and the number of agents in multi-party conversation, we investigate the effect of the number of agents participating in each context.', 'Table 4 compares the performance of the models for different numbers of agents in a context.', 'In addressee selection, the performance of all models gradually gets worse as the number of agents in the context increases.', 'However, compared with the baseline, our proposed models suppress the performance degradation.', 'In particular, the dynamic model predicts correct addressees most robustly.', 'In response selection, unexpectedly, the performance of all the models gets better as the number of agents increases.', 'Detailed investigation on the interaction between the number of agents and the response selection complexity is an interesting line of future work.'] | [None, None, ['Baseline', 'Static', 'Dynamic', 'No. of Agents', 'ADR'], ['Baseline', 'Static', 'Dynamic'], ['Dynamic'], ['Baseline', 'Static', 'Dynamic', 'No. of Agents', 'RES'], None] | 1 |
D16-1237table_2 | F-score for headlines and images datasets. These tables show the result of our systems, baseline and top-ranked systems. DA is our strong baseline trained on interpretable STS dataset; DA + DS is trained on interpretable STS as well as STS dataset. The rank 1 system on headlines is Inspire (Kazmi and Sch¨uller, 2016) and UWB (Konopik et al., 2016) on images. Bold are the best scores. | 2 | [['Headline results', 'Baseline'], ['Headline results', 'Rank 1'], ['Headline results', 'DA'], ['Headline results', 'DA +DS'], ['Images results', 'Baseline'], ['Images results', 'Rank 1'], ['Images results', 'DA'], ['Images results', 'DA +DS']] | 2 | [['untyped', 'ali'], ['untyped', 'score'], ['typed', 'ali'], ['typed', 'score']] | [['0.8462', '0.7610', '0.5462', '0.5461'], ['0.8194', '0.7865', '0.7031', '0.696'], ['0.9257', '0.8377', '0.735', '0.6776'], ['0.9235', '0.8591', '0.7281', '0.6948'], ['0.8556', '0.7456', '0.4799', '0.4799.1'], ['0.8922', '0.8408', '0.6867', '0.6708'], ['0.8689', '0.7905', '0.6933', '0.6411'], ['0.8738', '0.8193', '0.7011', '0.6769']] | column | ['F-score', 'F-score', 'F-score', 'F-score'] | ['DA', 'DA +DS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>untyped || ali</th> <th>untyped || score</th> <th>typed || ali</th> <th>typed || score</th> </tr> </thead> <tbody> <tr> <td>Headline results || Baseline</td> <td>0.8462</td> <td>0.7610</td> <td>0.5462</td> <td>0.5461</td> </tr> <tr> <td>Headline results || Rank 1</td> <td>0.8194</td> <td>0.7865</td> <td>0.7031</td> <td>0.696</td> </tr> <tr> <td>Headline results || DA</td> <td>0.9257</td> <td>0.8377</td> <td>0.735</td> <td>0.6776</td> </tr> <tr> <td>Headline results || DA +DS</td> <td>0.9235</td> <td>0.8591</td> <td>0.7281</td> <td>0.6948</td> </tr> <tr> <td>Images results || Baseline</td> <td>0.8556</td> <td>0.7456</td> <td>0.4799</td> <td>0.4799.1</td> </tr> <tr> <td>Images results || Rank 1</td> <td>0.8922</td> <td>0.8408</td> <td>0.6867</td> <td>0.6708</td> </tr> <tr> <td>Images results || DA</td> <td>0.8689</td> <td>0.7905</td> <td>0.6933</td> <td>0.6411</td> </tr> <tr> <td>Images results || DA +DS</td> <td>0.8738</td> <td>0.8193</td> <td>0.7011</td> <td>0.6769</td> </tr> </tbody></table> | Table 2 | table_2 | D16-1237 | 7 | emnlp2016 | By comparing the rows labeled DA and DA +DS in Table 2 (a) and Table 2 (b), we see that in both the headlines and the images datasets, adding sentence level information improves the untyped score, lifting the stricter typed score F1. On the headlines dataset, incorporating sentence-level information degrades both the untyped and typed alignment quality because we cross-validated on the typed score metric. The typed score metric is the combination of untyped alignment, untyped score and typed alignment. From the row DA + DS in Table 2(a), we observe that the typed score F1 is slightly behind that of rank 1 system while all other three metrics are significantly better, indicating that we need to improve our modeling of the intersection of the three aspects. However, this does not apply to images dataset where the improvement on the typed score F1 comes from the typed alignment. Further, we see that even our base model that only depends on the alignment data offers strong alignment F1 scores. This validates the utility of jointly modeling alignments and chunk similarities. Adding sentence data to this already strong system leads to performance that is comparable to or better than the state-of-the-art systems. 
Indeed, our final results would have been ranked first on the images task and a close second on the headlines task in the official standings. | [1, 1, 2, 1, 2, 1, 2, 0, 0] | ['By comparing the rows labeled DA and DA +DS in Table 2 (a) and Table 2 (b), we see that in both the headlines and the images datasets, adding sentence level information improves the untyped score, lifting the stricter typed score F1.', 'On the headlines dataset, incorporating sentence-level information degrades both the untyped and typed alignment quality because we cross-validated on the typed score metric.', 'The typed score metric is the combination of untyped alignment, untyped score and typed alignment.', 'From the row DA + DS in Table 2(a), we observe that the typed score F1 is slightly behind that of rank 1 system while all other three metrics are significantly better, indicating that we need to improve our modeling of the intersection of the three aspects.', 'However, this does not apply to images dataset where the improvement on the typed score F1 comes from the typed alignment.', 'Further, we see that even our base model that only depends on the alignment data offers strong alignment F1 scores.', 'This validates the utility of jointly modeling alignments and chunk similarities.', 'Adding sentence data to this already strong system leads to performance that is comparable to or better than the state-of-the-art systems.', 'Indeed, our final results would have been ranked first on the images task and a close second on the headlines task in the official standings.'] | [['DA', 'DA +DS', 'untyped', 'typed'], ['Headline results', 'untyped', 'typed'], ['typed'], ['DA +DS', 'score', 'Rank 1'], None, ['DA', 'DA +DS'], None, None, None] | 1 |
D16-1238table_1 | Parsing accuracy on PTB test set. Our parser uses the same POS tagger as C&M (2014) and Dyer et al. (2015), whereas other parsers use a different POS tagger. Results with † and ∗ are provided in (Alberti et al., 2015) and (Andor et al., 2016), respectively. | 4 | [['Type', 'Trans.', 'Method', 'C&M (2014)'], ['Type', 'Trans.', 'Method', 'Dyer et al. (2015)'], ['Type', 'Trans.', 'Method', 'B&N (2012)'], ['Type', 'Trans.', 'Method', 'Alberti et al. (2015)'], ['Type', 'Trans.', 'Method', 'Weiss et al. (2015)'], ['Type', 'Trans.', 'Method', 'Andor et al. (2016)'], ['Type', 'Graph', 'Method', 'Bohnet (2010)'], ['Type', 'Graph', 'Method', 'Martins et al. (2013)'], ['Type', 'Graph', 'Method', 'Z&M (2014)'], ['Type', 'Graph', 'Method', 'BiAtt-DP']] | 1 | [['UAS'], ['LAS']] | [['91.8', '89.6'], ['93.2', '90.9'], ['93.33', '91.22'], ['94.23', '92.41'], ['94.26', '92.41'], ['94.41', '92.55'], ['92.88', '90.71'], ['92.89', '90.55'], ['93.22', '91.02'], ['94.10', '91.49']] | column | ['UAS', 'LAS'] | ['BiAtt-DP'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>UAS</th> <th>LAS</th> </tr> </thead> <tbody> <tr> <td>Type || Trans. || Method || C&M (2014)</td> <td>91.8</td> <td>89.6</td> </tr> <tr> <td>Type || Trans. || Method || Dyer et al. (2015)</td> <td>93.2</td> <td>90.9</td> </tr> <tr> <td>Type || Trans. || Method || B&N (2012)</td> <td>93.33</td> <td>91.22</td> </tr> <tr> <td>Type || Trans. || Method || Alberti et al. (2015)</td> <td>94.23</td> <td>92.41</td> </tr> <tr> <td>Type || Trans. || Method || Weiss et al. (2015)</td> <td>94.26</td> <td>92.41</td> </tr> <tr> <td>Type || Trans. || Method || Andor et al. (2016)</td> <td>94.41</td> <td>92.55</td> </tr> <tr> <td>Type || Graph || Method || Bohnet (2010)</td> <td>92.88</td> <td>90.71</td> </tr> <tr> <td>Type || Graph || Method || Martins et al. (2013)</td> <td>92.89</td> <td>90.55</td> </tr> <tr> <td>Type || Graph || Method || Z&M (2014)</td> <td>93.22</td> <td>91.02</td> </tr> <tr> <td>Type || Graph || Method || BiAtt-DP</td> <td>94.10</td> <td>91.49</td> </tr> </tbody></table> | Table 1 | table_1 | D16-1238 | 7 | emnlp2016 | We first compare our parser with state-of-the-art neural transition-based dependency parsers on PTB and CTB. For English, we also compare with stateof-the-art graph-based dependency parsers. The results are shown in Table 1. It can be seen that the BiAtt-DP outperforms all other graph-based parsers on PTB. Compared with the transition-based parsers, it achieves better accuracy than Chen and Manning (2014), which uses a feed-forward neural network, and Dyer et al. (2015), which uses three stack LSTM networks. Compared with the integrated parsing and tagging models, the BiAtt-DP outperforms Bohnet and Nivre (2012) but has a small gap to Alberti et al. (2015). | [1, 1, 1, 1, 1, 1] | ['We first compare our parser with state-of-the-art neural transition-based dependency parsers on PTB and CTB.', 'For English, we also compare with state-of-the-art graph-based dependency parsers.', 'The results are shown in Table 1.', 'It can be seen that the BiAtt-DP outperforms all other graph-based parsers on PTB.', 'Compared with the transition-based parsers, it achieves better accuracy than Chen and Manning (2014), which uses a feed-forward neural network, and Dyer et al. (2015), which uses three stack LSTM networks.', 'Compared with the integrated parsing and tagging models, the BiAtt-DP outperforms Bohnet and Nivre (2012) but has a small gap to Alberti et al. 
(2015).'] | [None, None, None, ['BiAtt-DP', 'Graph'], ['Trans.', 'C&M (2014)', 'Dyer et al. (2015)'], ['BiAtt-DP', 'B&N (2012)', 'Alberti et al. (2015)']] | 1 |
D16-1238table_3 | UAS on 12 languages in the CoNLL 2006 shared task (Buchholz and Marsi, 2006). We also report corresponding LAS in squared brackets. The results of the 3rd-order RBGParser are reported in (Lei et al., 2014). Best published results on the same dataset in terms of UAS among (Pitler and McDonald, 2015), (Zhang and McDonald, 2014), (Zhang et al., 2013), (Zhang and McDonald, 2012), (Rush and Petrov, 2012), (Martins et al., 2013), (Martins et al., 2010), and (Koo et al., 2010). To study the effectiveness of the parser in dealing with non-projectivity, we follow (Pitler and McDonald, 2015), to compute the recall of crossed and uncrossed arcs in the gold tree, as well as the percentage of crossed arcs. | 2 | [['Language', 'Arabic'], ['Language', 'Bulgarian'], ['Language', 'Czech'], ['Language', 'Danish'], ['Language', 'Dutch'], ['Language', 'German'], ['Language', 'Japanese'], ['Language', 'Portuguese'], ['Language', 'Slovene'], ['Language', 'Spanish'], ['Language', 'Swedish'], ['Language', 'Turkish']] | 1 | [['BiAtt-DP'], ['RBGParser'], ['Best Published'], ['Crossed'], ['Uncrossed'], ['%Crossed']] | [['80.34 [68.58]', '79.95', '81.12 (Ma11)', '17.24', '80.71', '0.58'], ['93.96 [89.55]', '93.5', '94.02 (Zh14)', '79.59', '94.1', '0.98'], ['91.16 [85.14]', '90.5', '90.32 (Ma13)', '81.62', '91.63', '4.68'], ['91.56 [85.53]', '91.39', '92.00 (Zh13)', '73.33', '91.89', '1.8'], ['87.15 [82.41]', '86.41', '86.19 (Ma13)', '82.82', '87.66', '10.48'], ['92.71 [89.80]', '91.97', '92.41 (Ma13)', '85.93', '92.9', '2.7'], ['93.44 [90.67]', '93.71', '93.72 (Ma11)', '48.67', '94.48', '2.26'], ['92.77 [88.44]', '91.92', '93.03 (Ko10)', '73.02', '93.28', '2.52'], ['86.01 [75.90]', '86.24', '86.95 (Ma11)', '60.11', '86.99', '3.66'], ['88.74 [84.03]', '88.0', '87.98 (Zh14)', '50.0', '88.77', '0.08'], ['90.50 [84.05]', '91.0', '91.85 (Zh14)', '45.16', '90.78', '0.62'], ['78.43 [66.16]', '76.84', '77.55 (Ko10)', '38.85', '79.71', '3.13']] | column | ['UAS', 'UAS', 'UAS', 'UAS', 'UAS', 'UAS'] | ['BiAtt-DP'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BiAtt-DP</th> <th>RBGParser</th> <th>Best Published</th> <th>Crossed</th> <th>Uncrossed</th> <th>%Crossed</th> </tr> </thead> <tbody> <tr> <td>Language || Arabic</td> <td>80.34 [68.58]</td> <td>79.95</td> <td>81.12 (Ma11)</td> <td>17.24</td> <td>80.71</td> <td>0.58</td> </tr> <tr> <td>Language || Bulgarian</td> <td>93.96 [89.55]</td> <td>93.5</td> <td>94.02 (Zh14)</td> <td>79.59</td> <td>94.1</td> <td>0.98</td> </tr> <tr> <td>Language || Czech</td> <td>91.16 [85.14]</td> <td>90.5</td> <td>90.32 (Ma13)</td> <td>81.62</td> <td>91.63</td> <td>4.68</td> </tr> <tr> <td>Language || Danish</td> <td>91.56 [85.53]</td> <td>91.39</td> <td>92.00 (Zh13)</td> <td>73.33</td> <td>91.89</td> <td>1.8</td> </tr> <tr> <td>Language || Dutch</td> <td>87.15 [82.41]</td> <td>86.41</td> <td>86.19 (Ma13)</td> <td>82.82</td> <td>87.66</td> <td>10.48</td> </tr> <tr> <td>Language || German</td> <td>92.71 [89.80]</td> <td>91.97</td> <td>92.41 (Ma13)</td> <td>85.93</td> <td>92.9</td> <td>2.7</td> </tr> <tr> <td>Language || Japanese</td> <td>93.44 [90.67]</td> <td>93.71</td> <td>93.72 (Ma11)</td> <td>48.67</td> <td>94.48</td> <td>2.26</td> </tr> <tr> <td>Language || Portuguese</td> <td>92.77 [88.44]</td> <td>91.92</td> <td>93.03 (Ko10)</td> <td>73.02</td> <td>93.28</td> <td>2.52</td> </tr> <tr> <td>Language || Slovene</td> <td>86.01 [75.90]</td> <td>86.24</td> <td>86.95 (Ma11)</td> <td>60.11</td> <td>86.99</td> <td>3.66</td> </tr> <tr> 
<td>Language || Spanish</td> <td>88.74 [84.03]</td> <td>88.0</td> <td>87.98 (Zh14)</td> <td>50.0</td> <td>88.77</td> <td>0.08</td> </tr> <tr> <td>Language || Swedish</td> <td>90.50 [84.05]</td> <td>91.0</td> <td>91.85 (Zh14)</td> <td>45.16</td> <td>90.78</td> <td>0.62</td> </tr> <tr> <td>Language || Turkish</td> <td>78.43 [66.16]</td> <td>76.84</td> <td>77.55 (Ko10)</td> <td>38.85</td> <td>79.71</td> <td>3.13</td> </tr> </tbody></table> | Table 3 | table_3 | D16-1238 | 7 | emnlp2016 | It can be observed from Table 3 that the BiAttDP has highly competitive parsing accuracy as stateof-the-art parsers. Moreover, it achieves best UAS for 5 out of 12 languages. For the remaining seven languages, the UAS gaps between the BiAtt-DP and state-of-the-art parsers are within 1.0%, except Swedish. An arguably fair comparison for the BiAttDP is the MSTParser (McDonald and Pereira, 2006), since the BiAtt-DP replaces the scoring function for arcs but uses exactly the same search algorithm. Due to the space limit, we refer readers to (Lei et al., 2014) for results of the MSTParsers (also shown in Appendix B). The BiAtt-DP consistently outperforms both parser by up to 5% absolute UAS score. Finally, following (Pitler and McDonald, 2015), we also analyze the performance of the BiAtt-DP on both crossed and uncrossed arcs. Since the BiAttDP uses a graph-based non-projective parsing algorithm, it is interesting to evaluate the performance on crossed arcs, which result in the non-projectivity of the dependency tree. The last three columns of Table 3 show the recall of crossed arcs, that of uncrossed arcs, and the percentage of crossed arcs in the test set. Pitler and McDonald (2015) reported numbers on the same data for Dutch, German, Portuguese, and Slovene as in this paper. For these four languages, the BiAtt-DP achieves better UAS than that reported in (Pitler and McDonald, 2015). More importantly, we observe that the improvement on recall of crossed arcs (around 10–18% absolutely) is much more significant than that of uncrossed arcs (around 1–3% absolutely), which indicates the effectiveness of the BiAtt-DP in parsing languages with non-projective trees. 
| [1, 1, 1, 2, 0, 1, 2, 2, 1, 2, 1, 1] | ['It can be observed from Table 3 that the BiAttDP has highly competitive parsing accuracy as stateof-the-art parsers.', 'Moreover, it achieves best UAS for 5 out of 12 languages.', 'For the remaining seven languages, the UAS gaps between the BiAtt-DP and state-of-the-art parsers are within 1.0%, except Swedish.', 'An arguably fair comparison for the BiAttDP is the MSTParser (McDonald and Pereira, 2006), since the BiAtt-DP replaces the scoring function for arcs but uses exactly the same search algorithm.', 'Due to the space limit, we refer readers to (Lei et al., 2014) for results of the MSTParsers (also shown in Appendix B).', 'The BiAtt-DP consistently outperforms both parser by up to 5% absolute UAS score.', 'Finally, following (Pitler and McDonald, 2015), we also analyze the performance of the BiAtt-DP on both crossed and uncrossed arcs.', 'Since the BiAttDP uses a graph-based non-projective parsing algorithm, it is interesting to evaluate the performance on crossed arcs, which result in the non-projectivity of the dependency tree.', 'The last three columns of Table 3 show the recall of crossed arcs, that of uncrossed arcs, and the percentage of crossed arcs in the test set.', 'Pitler and McDonald (2015) reported numbers on the same data for Dutch, German, Portuguese, and Slovene as in this paper.', 'For these four languages, the BiAtt-DP achieves better UAS than that reported in (Pitler and McDonald, 2015).', 'More importantly, we observe that the improvement on recall of crossed arcs (around 10–18% absolutely) is much more significant than that of uncrossed arcs (around 1–3% absolutely), which indicates the effectiveness of the BiAtt-DP in parsing languages with non-projective trees.'] | [['BiAtt-DP'], ['BiAtt-DP', 'Language'], ['BiAtt-DP', 'Best Published', 'Swedish'], ['BiAtt-DP'], None, ['BiAtt-DP', 'RBGParser', 'Best Published'], ['BiAtt-DP'], None, ['Crossed', 'Uncrossed', '%Crossed'], None, ['BiAtt-DP', 'Dutch', 'German', 'Portuguese', 'Slovene'], ['BiAtt-DP', 'Crossed', 'Uncrossed']] | 1 |
D16-1242table_4 | The results on a subset of JSeM that is a translation of FraCaS. M15 refers to the accuracy of Mineshima et al. (2015) on the corresponding sections of FraCaS. | 2 | [['Section', 'Quantifier.1'], ['Section', 'Plural'], ['Section', 'Adjective'], ['Section', 'Verb'], ['Section', 'Attitude'], ['Section', 'Total']] | 1 | [['Gold'], ['System'], ['M15']] | [['92.5', '78.2', '78.4'], ['65.8', '52.6', '66.7'], ['57.1', '47.6', '68.2'], ['66.7', '66.7', '62.5'], ['78.6', '78.6', '76.9'], ['87.3', '74.1', '73.3']] | column | ['accuracy', 'accuracy', 'accuracy'] | ['Gold'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>#Problem</th> <th>Gold</th> <th>System</th> <th>M15</th> </tr> </thead> <tbody> <tr> <td>Section || Quantifier.1</td> <td>335</td> <td>92.5</td> <td>78.2</td> <td>78.4</td> </tr> <tr> <td>Section || Plural</td> <td>38</td> <td>65.8</td> <td>52.6</td> <td>66.7</td> </tr> <tr> <td>Section || Adjective</td> <td>21</td> <td>57.1</td> <td>47.6</td> <td>68.2</td> </tr> <tr> <td>Section || Verb</td> <td>9</td> <td>66.7</td> <td>66.7</td> <td>62.5</td> </tr> <tr> <td>Section || Attitude</td> <td>14</td> <td>78.6</td> <td>78.6</td> <td>76.9</td> </tr> <tr> <td>Section || Total</td> <td>417</td> <td>87.3</td> <td>74.1</td> <td>73.3</td> </tr> </tbody></table> | Table 4 | table_4 | D16-1242 | 5 | emnlp2016 | Out of the 523 problems, 417 are Japanese translations of the FraCaS problems. Table 4 shows a comparison between the performance of our system on this subset of the JSeM problems and the performance of the RTE system for English in Mineshima et al. (2015) on the corresponding problems in the FraCaS dataset. Mineshima et al. (2015) used system parses of the English C&C parser (Clark and Curran, 2007). The total accuracy of our system is comparable to that of Mineshima et al. (2015). Most errors we found are due to syntactic parse errors caused by the CCG parser, where no correct syntactic parses were found in n-best responses. Comparison between gold parses and system parses shows that correct syntactic disambiguation improves performance. | [0, 1, 2, 1, 2, 1] | ['Out of the 523 problems, 417 are Japanese translations of the FraCaS problems.', 'Table 4 shows a comparison between the performance of our system on this subset of the JSeM problems and the performance of the RTE system for English in Mineshima et al. (2015) on the corresponding problems in the FraCaS dataset.', 'Mineshima et al. (2015) used system parses of the English C&C parser (Clark and Curran, 2007).', 'The total accuracy of our system is comparable to that of Mineshima et al. (2015).', 'Most errors we found are due to syntactic parse errors caused by the CCG parser, where no correct syntactic parses were found in n-best responses.', 'Comparison between gold parses and system parses shows that correct syntactic disambiguation improves performance.'] | [None, None, ['M15'], ['Gold', 'M15'], None, ['System', 'Gold']] | 1 |
D16-1243table_2 | Experimental results. Top: development set; bottom: test set. AIC is not comparable between the two splits. HM and LUX are from McMahan and Stone (2015). We reimplemented HM and re-ran LUX from publicly available code, confirming all results to the reported precision except perplexity of LUX, for which we obtained a figure of 13.72. | 4 | [['Model', 'atomic', 'Feats.', 'raw'], ['Model', 'atomic', 'Feats.', 'buckets'], ['Model', 'atomic', 'Feats.', 'Fourier'], ['Model', 'RNN', 'Feats.', 'raw'], ['Model', 'RNN', 'Feats.', 'buckets'], ['Model', 'RNN', 'Feats.', 'Fourier'], ['Model', 'HM', 'Feats.', 'buckets'], ['Model', 'LUX', 'Feats.', 'raw'], ['Model', 'RNN', 'Feats.', 'Fourier']] | 1 | [['Perp.'], ['AIC'], ['Acc.']] | [['28.31', '0.108×10^5', '28.75%'], ['16.01', '0.131×10^5', '38.59%'], ['15.05', '8.86×10^5', '38.97%'], ['13.27', '8.40×10^5', '40.11%'], ['13.03', '0.126×10^5', '39.94%'], ['12.35', '8.33×10^5', '40.40%'], ['14.41', '0.482×10^5', '39.40%'], ['13.61', '0.413×10^5', '39.55%'], ['12.58', '0.403×10^5', '40.22%']] | column | ['Perp.', 'AIC', 'Acc.'] | ['Fourier'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Perp.</th> <th>AIC</th> <th>Acc.</th> </tr> </thead> <tbody> <tr> <td>Model || atomic || Feats. || raw</td> <td>28.31</td> <td>0.108×10^5</td> <td>28.75%</td> </tr> <tr> <td>Model || atomic || Feats. || buckets</td> <td>16.01</td> <td>0.131×10^5</td> <td>38.59%</td> </tr> <tr> <td>Model || atomic || Feats. || Fourier</td> <td>15.05</td> <td>8.86×10^5</td> <td>38.97%</td> </tr> <tr> <td>Model || RNN || Feats. || raw</td> <td>13.27</td> <td>8.40×10^5</td> <td>40.11%</td> </tr> <tr> <td>Model || RNN || Feats. || buckets</td> <td>13.03</td> <td>0.126×10^5</td> <td>39.94%</td> </tr> <tr> <td>Model || RNN || Feats. || Fourier</td> <td>12.35</td> <td>8.33×10^5</td> <td>40.40%</td> </tr> <tr> <td>Model || HM || Feats. || buckets</td> <td>14.41</td> <td>0.482×10^5</td> <td>39.40%</td> </tr> <tr> <td>Model || LUX || Feats. || raw</td> <td>13.61</td> <td>0.413×10^5</td> <td>39.55%</td> </tr> <tr> <td>Model || RNN || Feats. || Fourier</td> <td>12.58</td> <td>0.403×10^5</td> <td>40.22%</td> </tr> </tbody></table> | Table 2 | table_2 | D16-1243 | 3 | emnlp2016 | Results. The top section of Table 2 shows development set results comparing modeling effectiveness for atomic and sequence model architectures and different features. The Fourier feature transformation generally improves on raw HSV vectors and discretized embeddings. The value of modeling descriptions as sequences can also be observed in these results, the LSTM models consistently outperform their atomic counterparts. | [0, 1, 1, 2] | ['Results.', 'The top section of Table 2 shows development set results comparing modeling effectiveness for atomic and sequence model architectures and different features.', 'The Fourier feature transformation generally improves on raw HSV vectors and discretized embeddings.', 'The value of modeling descriptions as sequences can also be observed in these results, the LSTM models consistently outperform their atomic counterparts.'] | [None, None, ['Fourier'], None] | 1 |
D16-1247table_4 | Detailed Ja → En insertion position selection experimental result. | 1 | [['PBSMT'], ['Hiero'], ['No Flexible'], ['Baseline'], ['Proposed']] | 2 | [['Ja → En', 'BLEU'], ['Ja → En', 'RIBES'], ['Ja → En', 'Time'], ['En → Ja', 'BLEU'], ['En → Ja', 'RIBES'], ['En → Ja', 'Time'], ['Ja → Zh', 'BLEU'], ['Ja → Zh', 'RIBES'], ['Ja → Zh', 'Time'], ['Zh → Ja', 'BLEU'], ['Zh → Ja', 'RIBES'], ['Zh → Ja', 'Time']] | [['18.45', '64.51', '-', '27.48', '68.37', '-', '27.96', '78.90', '-', '34.65', '77.25', '-'], ['18.72', '65.11', '-', '30.19', '73.47', '-', '27.71', '80.91', '-', '35.43', '81.04', '-'], ['20.28', '65.08', '1.00', '28.77', '75.21', '1.00', '24.85', '66.60', '1.00', '30.51', '73.08', '1.00'], ['21.61', '69.82', '6.28', '30.57', '76.13', '3.30', '28.79', '78.11', '5.16', '34.32', '77.82', '5.28'], ['22.07†', '70.49†', '2.25', '30.50', '76.69†', '1.27', '29.83†', '79.73†', '2.21', '34.71†', '79.25†', '1.89']] | column | ['BLEU', 'RIBES', 'Time', 'BLEU', 'RIBES', 'Time', 'BLEU', 'RIBES', 'Time', 'BLEU', 'RIBES', 'Time'] | ['Proposed'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ja → En || BLEU</th> <th>Ja → En || RIBES</th> <th>Ja → En || Time</th> <th>En → Ja || BLEU</th> <th>En → Ja || RIBES</th> <th>En → Ja || Time</th> <th>Ja → Zh || BLEU</th> <th>Ja → Zh || RIBES</th> <th>Ja → Zh || Time</th> <th>Zh → Ja || BLEU</th> <th>Zh → Ja || RIBES</th> <th>Zh → Ja || Time</th> </tr> </thead> <tbody> <tr> <td>PBSMT</td> <td>18.45</td> <td>64.51</td> <td>-</td> <td>27.48</td> <td>68.37</td> <td>-</td> <td>27.96</td> <td>78.90</td> <td>-</td> <td>34.65</td> <td>77.25</td> <td>-</td> </tr> <tr> <td>Hiero</td> <td>18.72</td> <td>65.11</td> <td>-</td> <td>30.19</td> <td>73.47</td> <td>-</td> <td>27.71</td> <td>80.91</td> <td>-</td> <td>35.43</td> <td>81.04</td> <td>-</td> </tr> <tr> <td>No Flexible</td> <td>20.28</td> <td>65.08</td> <td>1.00</td> <td>28.77</td> <td>75.21</td> <td>1.00</td> <td>24.85</td> <td>66.60</td> <td>1.00</td> <td>30.51</td> <td>73.08</td> <td>1.00</td> </tr> <tr> <td>Baseline</td> <td>21.61</td> <td>69.82</td> <td>6.28</td> <td>30.57</td> <td>76.13</td> <td>3.30</td> <td>28.79</td> <td>78.11</td> <td>5.16</td> <td>34.32</td> <td>77.82</td> <td>5.28</td> </tr> <tr> <td>Proposed</td> <td>22.07†</td> <td>70.49†</td> <td>2.25</td> <td>30.50</td> <td>76.69†</td> <td>1.27</td> <td>29.83†</td> <td>79.73†</td> <td>2.21</td> <td>34.71†</td> <td>79.25†</td> <td>1.89</td> </tr> </tbody></table> | Table 4 | table_4 | D16-1247 | 5 | emnlp2016 | The results are shown in Table 4. The Proposed method achieved significantly better automatic evaluation scores than the Baseline for all the language pairs except the BLEU score of En → Ja direction. Also, the decoding time is reduced by about 60% relative to that of the Baseline. Our tree-based model is better than the conventional models except Zh → Ja, where the accuracy of Chinese parsing for the input sentences has a bad effect. 
| [1, 1, 1, 1] | ['The results are shown in Table 4.', 'The Proposed method achieved significantly better automatic evaluation scores than the Baseline for all the language pairs except the BLEU score of En → Ja direction.', 'Also, the decoding time is reduced by about 60% relative to that of the Baseline.', 'Our tree-based model is better than the conventional models except Zh → Ja, where the accuracy of Chinese parsing for the input sentences has a bad effect.'] | [None, ['Proposed', 'En → Ja', 'BLEU', 'Baseline', 'Ja → En', 'Ja → Zh', 'Zh → Ja'], ['Proposed', 'Time', 'Baseline'], ['Proposed', 'Zh → Ja', 'PBSMT', 'Hiero']] | 1 |
D16-1249table_1 | Single system results in terms of (TER-BLEU)/2 (T-B, the lower the better) on 5 million Chinese to English training set. BP denotes the brevity penalty. NMT results are on a large vocabulary (300k) and with UNK replaced. The second column shows different alignments (Zh → En (one direction), GDFA (“grow-diag-final-and”), and MaxEnt (Ittycheriah and Roukos, 2005). A, T, and J mean optimize alignment only, translation only, and jointly. Gau. denotes the smoothed transformation. | 4 | [['single system', 'Tree-to-string', '-', '-'], ['single system', 'Cov. LVNMT (Mi et al. 2016b)', '-', '-'], ['single system', '+Alignment', 'Zh → En', 'A → J'], ['single system', '+Alignment', 'Zh → En', 'A → T'], ['single system', '+Alignment', 'Zh → En', 'A → T → J'], ['single system', '+Alignment', 'Zh → En', 'J'], ['single system', '+Alignment', 'GDFA', 'J'], ['single system', '+Alignment', 'MaxEnt', 'J'], ['single system', '+Alignment', 'MaxEnt', 'J + Gau.']] | 3 | [['MT06', '-', 'BP'], ['MT06', '-', 'BLEU'], ['MT06', '-', 'T-B'], ['MT08', 'News', 'BP'], ['MT08', 'News', 'BLEU'], ['MT08', 'News', 'T-B'], ['MT08', 'Web', 'BP'], ['MT08', 'Web', 'BLEU'], ['MT08', 'Web', 'T-B'], ['Avg', '-', 'T-B']] | [['0.95', '34.93', '9.45', '0.94', '31.12', '12.90', '0.90', '23.45', '17.72', '13.36'], ['0.92', '35.59', '10.71', '0.89', '30.18', '15.33', '0.97', '27.48', '16.67', '14.24'], ['0.95', '35.71', '10.38', '0.93', '30.73', '14.98', '0.96', '27.38', '16.24', '13.87'], ['0.95', '28.59', '16.99', '0.92', '24.09', '20.89', '0.97', '20.48', '23.31', '20.40'], ['0.95', '35.95', '10.24', '0.92', '30.95', '14.62', '0.97', '26.76', '17.04', '13.97'], ['0.96', '36.76', '9.67', '0.94', '31.24', '14.80', '0.96', '28.35', '15.61', '13.36'], ['0.96', '36.44', '10.16', '0.94', '30.66', '15.01', '0.96', '26.67', '16.72', '13.96'], ['0.95', '36.80', '9.49', '0.93', '31.74', '14.02', '0.96', '27.53', '16.21', '13.24'], ['0.96', '36.95', '9.71', '0.94', '32.43', '13.61', '0.97', '28.63', '15.80', '13.04']] | column | ['BP', 'BLEU', 'T-B', 'BP', 'BLEU', 'T-B', 'BP', 'BLEU', 'T-B', 'T-B'] | ['+Alignment'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT06 || - || BP</th> <th>MT06 || - || BLEU</th> <th>MT06 || - || T-B</th> <th>MT08 || News || BP</th> <th>MT08 || News || BLEU</th> <th>MT08 || News || T-B</th> <th>MT08 || Web || BP</th> <th>MT08 || Web || BLEU</th> <th>MT08 || Web || T-B</th> <th>Avg || - || T-B</th> </tr> </thead> <tbody> <tr> <td>single system || Tree-to-string || - || -</td> <td>0.95</td> <td>34.93</td> <td>9.45</td> <td>0.94</td> <td>31.12</td> <td>12.90</td> <td>0.90</td> <td>23.45</td> <td>17.72</td> <td>13.36</td> </tr> <tr> <td>single system || Cov. LVNMT (Mi et al. 
2016b) || - || -</td> <td>0.92</td> <td>35.59</td> <td>10.71</td> <td>0.89</td> <td>30.18</td> <td>15.33</td> <td>0.97</td> <td>27.48</td> <td>16.67</td> <td>14.24</td> </tr> <tr> <td>single system || +Alignment || Zh → En || A → J</td> <td>0.95</td> <td>35.71</td> <td>10.38</td> <td>0.93</td> <td>30.73</td> <td>14.98</td> <td>0.96</td> <td>27.38</td> <td>16.24</td> <td>13.87</td> </tr> <tr> <td>single system || +Alignment || Zh → En || A → T</td> <td>0.95</td> <td>28.59</td> <td>16.99</td> <td>0.92</td> <td>24.09</td> <td>20.89</td> <td>0.97</td> <td>20.48</td> <td>23.31</td> <td>20.40</td> </tr> <tr> <td>single system || +Alignment || Zh → En || A → T → J</td> <td>0.95</td> <td>35.95</td> <td>10.24</td> <td>0.92</td> <td>30.95</td> <td>14.62</td> <td>0.97</td> <td>26.76</td> <td>17.04</td> <td>13.97</td> </tr> <tr> <td>single system || +Alignment || Zh → En || J</td> <td>0.96</td> <td>36.76</td> <td>9.67</td> <td>0.94</td> <td>31.24</td> <td>14.80</td> <td>0.96</td> <td>28.35</td> <td>15.61</td> <td>13.36</td> </tr> <tr> <td>single system || +Alignment || GDFA || J</td> <td>0.96</td> <td>36.44</td> <td>10.16</td> <td>0.94</td> <td>30.66</td> <td>15.01</td> <td>0.96</td> <td>26.67</td> <td>16.72</td> <td>13.96</td> </tr> <tr> <td>single system || +Alignment || MaxEnt || J</td> <td>0.95</td> <td>36.80</td> <td>9.49</td> <td>0.93</td> <td>31.74</td> <td>14.02</td> <td>0.96</td> <td>27.53</td> <td>16.21</td> <td>13.24</td> </tr> <tr> <td>single system || +Alignment || MaxEnt || J + Gau.</td> <td>0.96</td> <td>36.95</td> <td>9.71</td> <td>0.94</td> <td>32.43</td> <td>13.61</td> <td>0.97</td> <td>28.63</td> <td>15.80</td> <td>13.04</td> </tr> </tbody></table> | Table 1 | table_1 | D16-1249 | 4 | emnlp2016 | Experimental results in Table 1 show some interesting results. First, with the same alignment, J joint optimization works best than other optimization strategies (lines 3 to 6). Unfortunately, breaking down the network into two separate parts (A and T) and optimizing them separately do not help (lines 3 to 5). We have to conduct joint optimization J in order to get a comparable or better result (lines 3, 5 and 6) over the baseline system. Second, when we change the training alignment seeds (Zh → En, GDFA, and MaxEnt) NMT model does not yield significant different results (lines 6 to 8). Third, the smoothed transformation (J + Gau.) gives some improvements over the simple transformation (the last two lines), and achieves the best result (1.2 better than LVNMT, and 0.3 better than Tree-to-string). In terms of BLEU scores, we conduct the statistical significance tests with the signtest of Collins et al. (2005), the results show that the improvements of our J + Gau. over LVNMT are significant on three test sets (p < 0.01). At last, the brevity penalty (BP) consistently gets better after we add the alignment cost to NMT objective. Our alignment objective adjusts the translation length to be more in line with the human references accordingly. 
| [1, 1, 1, 2, 1, 1, 1, 1, 2] | ['Experimental results in Table 1 show some interesting results.', 'First, with the same alignment, J joint optimization works best than other optimization strategies (lines 3 to 6).', 'Unfortunately, breaking down the network into two separate parts (A and T) and optimizing them separately do not help (lines 3 to 5).', 'We have to conduct joint optimization J in order to get a comparable or better result (lines 3, 5 and 6) over the baseline system.', 'Second, when we change the training alignment seeds (Zh → En, GDFA, and MaxEnt) NMT model does not yield significant different results (lines 6 to 8).', 'Third, the smoothed transformation (J + Gau.) gives some improvements over the simple transformation (the last two lines), and achieves the best result (1.2 better than LVNMT, and 0.3 better than Tree-to-string).', 'In terms of BLEU scores, we conduct the statistical significance tests with the signtest of Collins et al. (2005), the results show that the improvements of our J + Gau. over LVNMT are significant on three test sets (p < 0.01).', 'At last, the brevity penalty (BP) consistently gets better after we add the alignment cost to NMT objective.', 'Our alignment objective adjusts the translation length to be more in line with the human references accordingly.'] | [None, ['Zh → En'], ['A → J', 'A → T', 'A → T → J'], ['A → J', 'A → T → J', 'J', 'Cov. LVNMT (Mi et al. 2016b)', 'Tree-to-string'], ['+Alignment', 'J'], ['J + Gau.', 'MaxEnt', 'Tree-to-string', 'Cov. LVNMT (Mi et al. 2016b)'], ['BLEU', 'J + Gau.', 'Cov. LVNMT (Mi et al. 2016b)'], ['BP', '+Alignment'], ['+Alignment']] | 1 |
D16-1250table_1 | Our results in bilingual and monolingual tasks. | 2 | [['Original embeddings', '-'], ['Unconstrained mapping', '-'], ['Unconstrained mapping', '+ length normalization'], ['Unconstrained mapping', '+ mean centering'], ['Orthogonal mapping', '-'], ['Orthogonal mapping', '+ length normalization'], ['Orthogonal mapping', '+ mean centering']] | 1 | [['EN-IT'], ['EN AN.']] | [['-', '76.66%'], ['34.93%', '73.80%'], ['33.80%', '73.61%'], ['38.47%', '73.71%'], ['36.73%', '76.66%'], ['36.87%', '76.66%'], ['39.27%', '76.59%']] | column | ['Accuracy', 'Accuracy'] | ['Orthogonal mapping', '+ length normalization', '+ mean centering'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EN-IT</th> <th>EN AN.</th> </tr> </thead> <tbody> <tr> <td>Original embeddings || -</td> <td>-</td> <td>76.66%</td> </tr> <tr> <td>Unconstrained mapping || -</td> <td>34.93%</td> <td>73.80%</td> </tr> <tr> <td>Unconstrained mapping || + length normalization</td> <td>33.80%</td> <td>73.61%</td> </tr> <tr> <td>Unconstrained mapping || + mean centering</td> <td>38.47%</td> <td>73.71%</td> </tr> <tr> <td>Orthogonal mapping || -</td> <td>36.73%</td> <td>76.66%</td> </tr> <tr> <td>Orthogonal mapping || + length normalization</td> <td>36.87%</td> <td>76.66%</td> </tr> <tr> <td>Orthogonal mapping || + mean centering</td> <td>39.27%</td> <td>76.59%</td> </tr> </tbody></table> | Table 1 | table_1 | D16-1250 | 4 | emnlp2016 | The rows in Table 1 show, respectively, the results for the original embeddings, the basic mapping proposed by Mikolov et al. (2013b) (cf. Section 2) and the addition of orthogonality constraint (cf. Section 2.1), with and without length normalization and, incrementally, mean centering. In all the cases, length normalization and mean centering were applied to all embeddings, even if missing from the dictionary. The results show that the orthogonality constraint is key to preserve monolingual performance, and it also improves bilingual performance by enforcing a relevant property (monolingual invariance) that the transformation to learn should intuitively have. The contribution of length normalization alone is marginal, but when followed by mean centering we obtain further improvements in bilingual performance without hurting monolingual performance. | [1, 2, 1, 1] | ['The rows in Table 1 show, respectively, the results for the original embeddings, the basic mapping proposed by Mikolov et al. (2013b) (cf. Section 2) and the addition of orthogonality constraint (cf. Section 2.1), with and without length normalization and, incrementally, mean centering.', 'In all the cases, length normalization and mean centering were applied to all embeddings, even if missing from the dictionary.', 'The results show that the orthogonality constraint is key to preserve monolingual performance, and it also improves bilingual performance by enforcing a relevant property (monolingual invariance) that the transformation to learn should intuitively have.', 'The contribution of length normalization alone is marginal, but when followed by mean centering we obtain further improvements in bilingual performance without hurting monolingual performance.'] | [['Original embeddings', 'Unconstrained mapping', 'Orthogonal mapping', '+ length normalization', '+ mean centering'], None, ['Orthogonal mapping', '+ length normalization', '+ mean centering', 'EN-IT', 'EN AN.'], ['+ length normalization', '+ mean centering', 'EN-IT', 'EN AN.']] | 1 |
D16-1250table_2 | Comparison of our method to other work. | 1 | [['Original embeddings'], ['Mikolov et al. (2013b)'], ['Xing et al. (2015)'], ['Faruqui and Dyer (2014)'], ['Our method']] | 1 | [['EN-IT'], ['EN AN.']] | [['-', '76.66%'], ['34.93%', '73.80%'], ['36.87%', '76.66%'], ['37.80%', '69.64%'], ['39.27%', '76.59%']] | column | ['Accuracy', 'Accuracy'] | ['Our method'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EN-IT</th> <th>EN AN.</th> </tr> </thead> <tbody> <tr> <td>Original embeddings</td> <td>-</td> <td>76.66%</td> </tr> <tr> <td>Mikolov et al. (2013b)</td> <td>34.93%</td> <td>73.80%</td> </tr> <tr> <td>Xing et al. (2015)</td> <td>36.87%</td> <td>76.66%</td> </tr> <tr> <td>Faruqui and Dyer (2014)</td> <td>37.80%</td> <td>69.64%</td> </tr> <tr> <td>Our method</td> <td>39.27%</td> <td>76.59%</td> </tr> </tbody></table> | Table 2 | table_2 | D16-1250 | 4 | emnlp2016 | Table 2 shows the results for our best performing configuration in comparison to previous work. As discussed before, (Mikolov et al., 2013b) and (Xing et al., 2015) were implemented as part of our framework, so they correspond to our uncostrained mapping with no preprocessing and orthogonal mapping with length normalization, respectively. | [1, 2] | ['Table 2 shows the results for our best performing configuration in comparison to previous work.', 'As discussed before, (Mikolov et al., 2013b) and (Xing et al., 2015) were implemented as part of our framework, so they correspond to our uncostrained mapping with no preprocessing and orthogonal mapping with length normalization, respectively.'] | [['Our method', 'Xing et al. (2015)', 'Original embeddings', 'Mikolov et al. (2013b)', 'Faruqui and Dyer (2014)'], ['Mikolov et al. (2013b)', 'Xing et al. (2015)']] | 1 |
D16-1253table_2 | Results of 4-way classification on the PDTB. | 2 | [['Temp', 'P'], ['Temp', 'R'], ['Temp', 'F1'], ['Comp', 'P'], ['Comp', 'R'], ['Comp', 'F1'], ['Cont', 'P'], ['Cont', 'R'], ['Cont', 'F1'], ['Expa', 'P'], ['Expa', 'R'], ['Expa', 'F1'], ['macro F1', '-']] | 1 | [['STN'], ['MT Nbi']] | [['33.33', '34.48'], ['14.55', '18.18'], ['20.25', '23.81'], ['38.54', '42.11'], ['25.52', '33.10'], ['30.71', '37.07'], ['38.36', '44.22'], ['41.03', '40.66'], ['39.65', '42.37'], ['59.60', '62.56'], ['66.36', '71.75'], ['62.80', '66.84'], ['38.35', '42.52']] | column | ['F1', 'F1'] | ['MT Nbi'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>STN</th> <th>MT Nbi</th> </tr> </thead> <tbody> <tr> <td>Temp || P</td> <td>33.33</td> <td>34.48</td> </tr> <tr> <td>Temp || R</td> <td>14.55</td> <td>18.18</td> </tr> <tr> <td>Temp || F1</td> <td>20.25</td> <td>23.81</td> </tr> <tr> <td>Comp || P</td> <td>38.54</td> <td>42.11</td> </tr> <tr> <td>Comp || R</td> <td>25.52</td> <td>33.10</td> </tr> <tr> <td>Comp || F1</td> <td>30.71</td> <td>37.07</td> </tr> <tr> <td>Cont || P</td> <td>38.36</td> <td>44.22</td> </tr> <tr> <td>Cont || R</td> <td>41.03</td> <td>40.66</td> </tr> <tr> <td>Cont || F1</td> <td>39.65</td> <td>42.37</td> </tr> <tr> <td>Expa || P</td> <td>59.60</td> <td>62.56</td> </tr> <tr> <td>Expa || R</td> <td>66.36</td> <td>71.75</td> </tr> <tr> <td>Expa || F1</td> <td>62.80</td> <td>66.84</td> </tr> <tr> <td>macro F1 || -</td> <td>38.35</td> <td>42.52</td> </tr> </tbody></table> | Table 2 | table_2 | D16-1253 | 3 | emnlp2016 | Table 2 shows the results of MT N combining our BiSynData (denoted as MT Nbi) on the PDTB. STN means we train MT N with only the main task. On the macro F1, MT Nbi gains an improvement of 4.17% over ST N. The improvement is significant under one-tailed t-test (p<0.05). A closer look into the results shows that MT Nbi performs better across all relations, on the precision, recall and F1 score, except a little drop on the recall of Cont. The reason for the recall drop of Cont is not clear. The greatest improvement is observed on Comp, up to 6.36% F1 score. The possible reason is that only while is ambiguous about Comp and T emp, while as, when and since are all ambiguous about T emp and Cont, among top 10 connectives in our BiSynData. Meanwhile the amount of labeled data for Comp is relatively small. Overall, using BiSynData under our multi-task model achieves significant improvements on the English DRRimp. We believe the reasons for the improvements are twofold: 1) the added synthetic English instances from our BiSynData can alleviate the meaning shift problem, and 2) a multi-task learning method is helpful for addressing the different word distribution problem between implicit and explicit data. 
| [1, 2, 1, 1, 1, 2, 1, 2, 2, 1, 2] | ['Table 2 shows the results of MT N combining our BiSynData (denoted as MT Nbi) on the PDTB.', 'STN means we train MT N with only the main task.', 'On the macro F1, MT Nbi gains an improvement of 4.17% over ST N.', 'The improvement is significant under one-tailed t-test (p<0.05).', 'A closer look into the results shows that MT Nbi performs better across all relations, on the precision, recall and F1 score, except a little drop on the recall of Cont.', 'The reason for the recall drop of Cont is not clear.', 'The greatest improvement is observed on Comp, up to 6.36% F1 score.', 'The possible reason is that only while is ambiguous about Comp and T emp, while as, when and since are all ambiguous about T emp and Cont, among top 10 connectives in our BiSynData.', 'Meanwhile the amount of labeled data for Comp is relatively small.', 'Overall, using BiSynData under our multi-task model achieves significant improvements on the English DRRimp.', 'We believe the reasons for the improvements are twofold: 1) the added synthetic English instances from our BiSynData can alleviate the meaning shift problem, and 2) a multi-task learning method is helpful for addressing the different word distribution problem between implicit and explicit data.'] | [['MT Nbi'], ['STN'], ['F1', 'MT Nbi'], ['F1'], ['MT Nbi', 'P', 'R', 'F1', 'Cont'], ['Cont', 'R'], ['Comp', 'F1'], ['Comp', 'Temp', 'Cont'], ['Comp'], ['MT Nbi'], None] | 1 |
D16-1255table_3 | Unlabeled attachment scores (UAS) on the PTB validation set after parsing and aligning the output. For ZGEN we also include a result using the tree z∗ produced directly by the system. For WORDS+BNPS, internal BNP arcs are always counted as correct. | 2 | [['Model', 'ZGEN-64(z ? )'], ['Model', 'ZGEN-64'], ['Model', 'NGRAM-64'], ['Model', 'NGRAM-512'], ['Model', 'LSTM-64'], ['Model', 'LSTM-512']] | 1 | [['WORDS'], ['WORDS+BNPS']] | [['39.7', '64.9'], ['40.8', '65.2'], ['46.1', '67.0'], ['47.2', '67.8'], ['51.3', '71.9'], ['52.8', '73.1']] | column | ['UAS', 'UAS'] | ['LSTM-64', 'LSTM-512'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WORDS</th> <th>WORDS+BNPS</th> </tr> </thead> <tbody> <tr> <td>Model || ZGEN-64(z ? )</td> <td>39.7</td> <td>64.9</td> </tr> <tr> <td>Model || ZGEN-64</td> <td>40.8</td> <td>65.2</td> </tr> <tr> <td>Model || NGRAM-64</td> <td>46.1</td> <td>67.0</td> </tr> <tr> <td>Model || NGRAM-512</td> <td>47.2</td> <td>67.8</td> </tr> <tr> <td>Model || LSTM-64</td> <td>51.3</td> <td>71.9</td> </tr> <tr> <td>Model || LSTM-512</td> <td>52.8</td> <td>73.1</td> </tr> </tbody></table> | Table 3 | table_3 | D16-1255 | 5 | emnlp2016 | One proposed advantage of syntax in linearization models is that it can better capture long-distance relationships. Figure 1 shows results by sentence length and distortion, which is defined as the absolute difference between a token’s index position in y? and yˆ, normalized by M. The LSTM model exhibits consistently better performance than existing syntax models across sentence lengths and generates fewer long-range distortions than the ZGEN model. Finally, Table 3 compares the syntactic fluency of the output. As a lightweight test, we parse the output with the Yara Parser (Rasooli and Tetreault, 2015) and compare the unlabeled attachment scores (UAS) to the trees produced by the syntactic system. We first align the gold head to each output token. (In cases where the alignment is not one-to-one, we randomly sample among the possibilities.) The models with no knowledge of syntax are able to recover a higher proportion of gold arcs. | [0, 2, 1, 2, 2, 1] | ['One proposed advantage of syntax in linearization models is that it can better capture long-distance relationships.', 'Figure 1 shows results by sentence length and distortion, which is defined as the absolute difference between a token’s index position in y? and yˆ, normalized by M. The LSTM model exhibits consistently better performance than existing syntax models across sentence lengths and generates fewer long-range distortions than the ZGEN model.', 'Finally, Table 3 compares the syntactic fluency of the output.', 'As a lightweight test, we parse the output with the Yara Parser (Rasooli and Tetreault, 2015) and compare the unlabeled attachment scores (UAS) to the trees produced by the syntactic system.', 'We first align the gold head to each output token.', 'The models with no knowledge of syntax are able to recover a higher proportion of gold arcs.'] | [None, None, None, None, None, None] | 1 |
D16-1260table_3 | Evaluation results on relation prediction. | 2 | [['Metric', 'TransE'], ['Metric', 'tTransE'], ['Metric', 'TransH'], ['Metric', 'tTransH'], ['Metric', 'TransR'], ['Metric', 'tTransR']] | 2 | [['Mean Rank', 'Raw'], ['Mean Rank', 'Filter'], ['Hits@1 (%)', 'Raw'], ['Hits@1 (%)', 'Filter']] | [['1.53', '1.48', '69.4', '73.0'], ['1.42', '1.35', '71.1', '75.7'], ['1.51', '1.37', '70.5', '72.2'], ['1.38', '1.30', '74.6', '76.9'], ['1.40', '1.28', '71.1', '74.3'], ['1.27', '1.12', '74.5', '78.9']] | column | ['Mean Rank', 'Mean Rank', 'Hits@1 (%)', 'Hits@1 (%)'] | ['Metric'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Mean Rank || Raw</th> <th>Mean Rank || Filter</th> <th>Hits@1 (%) || Raw</th> <th>Hits@1 (%) || Filter</th> </tr> </thead> <tbody> <tr> <td>Metric || TransE</td> <td>1.53</td> <td>1.48</td> <td>69.4</td> <td>73.0</td> </tr> <tr> <td>Metric || tTransE</td> <td>1.42</td> <td>1.35</td> <td>71.1</td> <td>75.7</td> </tr> <tr> <td>Metric || TransH</td> <td>1.51</td> <td>1.37</td> <td>70.5</td> <td>72.2</td> </tr> <tr> <td>Metric || tTransH</td> <td>1.38</td> <td>1.30</td> <td>74.6</td> <td>76.9</td> </tr> <tr> <td>Metric || TransR</td> <td>1.40</td> <td>1.28</td> <td>71.1</td> <td>74.3</td> </tr> <tr> <td>Metric || tTransR</td> <td>1.27</td> <td>1.12</td> <td>74.5</td> <td>78.9</td> </tr> </tbody></table> | Table 3 | table_3 | D16-1260 | 3 | emnlp2016 | Relation prediction aims to predict relations given two entities. Evaluation results are shown in Table 3 on only YG15K due to limited space, where we report Hits@1 instead of Hits@10. Example prediction results for TransE and tTransE are compared in Table 4. For example, when testing (Billy Hughes,?,London,1862), it’s easy for TransE to mix relations wasBornIn and diedIn because they act similarly for a person and a place. But known that (Billy Hughes, isAffiliatedTo, National Labor Party) happened in 1916, and tTransE have learnt temporal order that wasBornIn?isAffiliatedTo?diedIn, so the regularization term |r-born-T ? r-affiliated| is smaller than |r-died-T - r-aff iliated|, so correct answer wasBornIn ranks higher than diedIn. | [0, 1, 0, 0, 0] | ['Relation prediction aims to predict relations given two entities.', 'Evaluation results are shown in Table 3 on only YG15K due to limited space, where we report Hits@1 instead of Hits@10.', 'Example prediction results for TransE and tTransE are compared in Table 4.', 'For example, when testing (Billy Hughes,?,London,1862), it’s easy for TransE to mix relations wasBornIn and diedIn because they act similarly for a person and a place.', 'But known that (Billy Hughes, isAffiliatedTo, National Labor Party) happened in 1916, and tTransE have learnt temporal order that wasBornIn?isAffiliatedTo?diedIn, so the regularization term |r-born-T ? r-affiliated| is smaller than |r-died-T - r-aff iliated|, so correct answer wasBornIn ranks higher than diedIn.'] | [None, ['Hits@1 (%)'], None, None, None] | 1 |
D16-1260table_5 | Evaluation results on triple classification (%). | 2 | [['Datasets', 'TransE'], ['Datasets', 'tTransE'], ['Datasets', 'TransH'], ['Datasets', 'tTransH'], ['Datasets', 'TransR'], ['Datasets', 'tTransR']] | 1 | [['YG15K'], ['YG36K']] | [['63.9', '71.9'], ['75.0', '82.7'], ['63.4', '72.1'], ['75.1', '82.3'], ['64.5', '74.9'], ['78.5', '83.9']] | column | ['Accuracy', 'Accuracy'] | ['tTransE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>YG15K</th> <th>YG36K</th> </tr> </thead> <tbody> <tr> <td>Datasets || TransE</td> <td>63.9</td> <td>71.9</td> </tr> <tr> <td>Datasets || tTransE</td> <td>75.0</td> <td>82.7</td> </tr> <tr> <td>Datasets || TransH</td> <td>63.4</td> <td>72.1</td> </tr> <tr> <td>Datasets || tTransH</td> <td>75.1</td> <td>82.3</td> </tr> <tr> <td>Datasets || TransR</td> <td>64.5</td> <td>74.9</td> </tr> <tr> <td>Datasets || tTransR</td> <td>78.5</td> <td>83.9</td> </tr> </tbody></table> | Table 5 | table_5 | D16-1260 | 4 | emnlp2016 | Results. Table 5 reports the results on the test sets. The results indicate that time-aware embedding outperforms all the baselines consistently. Temporal order information may help to distinguish positive and negative triples as different head entities may have different temporally associated relations. If the temporal order is the same with most facts, the regularization term helps it get lower energies and vice versa. | [2, 1, 1, 2, 2] | ['Results.', 'Table 5 reports the results on the test sets.', 'The results indicate that time-aware embedding outperforms all the baselines consistently.', 'Temporal order information may help to distinguish positive and negative triples as different head entities may have different temporally associated relations.', 'If the temporal order is the same with most facts, the regularization term helps it get lower energies and vice versa.'] | [None, None, None, None, None] | 1 |
D16-1262table_4 | Parsing results trained with different update methods. Our system uses all-violations updates and is the most accurate. | 2 | [['Update', 'Greedy'], ['Update', 'Max-violation'], ['Update', 'All-violations']] | 1 | [['Dev F1'], ['Optimal'], ['Explored']] | [['87.9', '99.2%', '2313.8'], ['88.1', '99.9%', '217.3'], ['88.4', '99.8%', '309.6']] | column | ['Dev F1', 'Optimal', 'Explored'] | ['All-violations'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev F1</th> <th>Optimal</th> <th>Explored</th> </tr> </thead> <tbody> <tr> <td>Update || Greedy</td> <td>87.9</td> <td>99.2%</td> <td>2313.8</td> </tr> <tr> <td>Update || Max-violation</td> <td>88.1</td> <td>99.9%</td> <td>217.3</td> </tr> <tr> <td>Update || All-violations</td> <td>88.4</td> <td>99.8%</td> <td>309.6</td> </tr> </tbody></table> | Table 4 | table_4 | D16-1262 | 8 | emnlp2016 | Table 4 compares the different violation-based learning objectives, as discussed in Section 5. Our novel all-violation updates outperform the alternatives. We attribute this improvement to the robustness over poor search spaces, which the greedy update lacks, and the incentive to explore good parses early, which the max-violation update lacks. Learning curves in Figure 5 show that the all-violations update also converges more quickly. | [1, 1, 0, 0] | ['Table 4 compares the different violation-based learning objectives, as discussed in Section 5.', 'Our novel all-violation updates outperform the alternatives.', 'We attribute this improvement to the robustness over poor search spaces, which the greedy update lacks, and the incentive to explore good parses early, which the max-violation update lacks.', 'Learning curves in Figure 5 show that the all-violations update also converges more quickly.'] | [None, ['All-violations', 'Update'], None, None] | 1 |
D16-1264table_5 | Performance of various methods and humans. Logistic regression outperforms the baselines, while there is still a significant gap between humans. | 1 | [['Random Guess'], ['Sliding Window'], ['Sliding Win. + Dist.'], ['Logistic Regression'], ['Human']] | 2 | [['Exact Match', 'Dev'], ['Exact Match', 'Test'], ['F1', 'Dev'], ['F1', 'Test']] | [['1.1%', '1.3%', '4.1%', '4.3%'], ['13.2%', '12.5%', '20.2%', '19.7%'], ['13.3%', '13.0%', '20.2%', '20.0%'], ['40.0%', '40.4%', '51.0%', '51.0%'], ['80.3%', '77.0%', '90.5%', '86.8%']] | column | ['accuracy', 'accuracy', 'F1', 'F1'] | ['Logistic Regression'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Exact Match || Dev</th> <th>Exact Match || Test</th> <th>F1 || Dev</th> <th>F1 || Test</th> </tr> </thead> <tbody> <tr> <td>Random Guess</td> <td>1.1%</td> <td>1.3%</td> <td>4.1%</td> <td>4.3%</td> </tr> <tr> <td>Sliding Window</td> <td>13.2%</td> <td>12.5%</td> <td>20.2%</td> <td>19.7%</td> </tr> <tr> <td>Sliding Win. + Dist.</td> <td>13.3%</td> <td>13.0%</td> <td>20.2%</td> <td>20.0%</td> </tr> <tr> <td>Logistic Regression</td> <td>40.0%</td> <td>40.4%</td> <td>51.0%</td> <td>51.0%</td> </tr> <tr> <td>Human</td> <td>80.3%</td> <td>77.0%</td> <td>90.5%</td> <td>86.8%</td> </tr> </tbody></table> | Table 5 | table_5 | D16-1264 | 8 | emnlp2016 | Table 5 shows the performance of our models alongside human performance on the v1.0 of development and test sets. The logistic regression model significantly outperforms the baselines, but underperforms humans. We note that the model is able to select the sentence containing the answer correctly with 79.3% accuracy, hence, the bulk of the difficulty lies in finding the exact span within the sentence. | [1, 1, 2] | ['Table 5 shows the performance of our models alongside human performance on the v1.0 of development and test sets.', 'The logistic regression model significantly outperforms the baselines, but underperforms humans.', 'We note that the model is able to select the sentence containing the answer correctly with 79.3% accuracy, hence, the bulk of the difficulty lies in finding the exact span within the sentence.'] | [None, ['Logistic Regression', 'Random Guess', 'Sliding Window', 'Sliding Win. + Dist.', 'Human'], None] | 1 |
D17-1001table_3 | Evaluation results on the test set, where ∗ represents p-value < 0.05 against our method. | 2 | [['Method', 'Human'], ['Method', 'Proposed'], ['Method', 'Monotonic'], ['Method', 'w/o EM'], ['Method', '1-best tree']] | 1 | [['Recall'], ['Prec.'], ['UAS'], ['%']] | [['90.65', '88.21', '–', '–'], ['83.64', '78.91', '93.49', '98'], ['82.86*', '77.97*', '93.49', '98'], ['81.33*', '75.09*', '92.91*', '86'], ['80.11*', '73.26*', '93.56', '100']] | column | ['Recall', 'Prec.', 'UAS', '%'] | ['Proposed'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Recall</th> <th>Prec.</th> <th>UAS</th> <th>%</th> </tr> </thead> <tbody> <tr> <td>Method || Human</td> <td>90.65</td> <td>88.21</td> <td>–</td> <td>–</td> </tr> <tr> <td>Method || Proposed</td> <td>83.64</td> <td>78.91</td> <td>93.49</td> <td>98</td> </tr> <tr> <td>Method || Monotonic</td> <td>82.86*</td> <td>77.97*</td> <td>93.49</td> <td>98</td> </tr> <tr> <td>Method || w/o EM</td> <td>81.33*</td> <td>75.09*</td> <td>92.91*</td> <td>86</td> </tr> <tr> <td>Method || 1-best tree</td> <td>80.11*</td> <td>73.26*</td> <td>93.56</td> <td>100</td> </tr> </tbody></table> | Table 3 | table_3 | D17-1001 | 8 | emnlp2017 | Table 3 shows the performance on the test set for variations of our method and that of the human annotators. The last column shows the percentage of pairs where a root pair is reached to be aligned, called reachability. Our method is denoted as Proposed, while its variations include a method with only monotonic alignment (monotonic), without EM (w/o EM), and a method aligning only 1-best trees (1-best tree). The performance of the human annotators was assessed by considering one annotator as the test and the other two as the gold-standard, and then taking the averages, which is the same setting as our method. We regard this as the pseudo inter annotator agreement, since the conventional interannotator agreement is not directly applicable due to variations in aligned phrases. Our method significantly outperforms the others as it achieved the highest recall and precision for alignment quality. Our recall and precision reach 92% and 89% of those of humans, respectively. Non-compositional alignment is shown to contribute to alignment quality, while the feature enhanced EM is effective for both the alignment and parsing quality. Comparing our method and the one aligning only 1-best trees demonstrates that the alignment of parse forests largely contributes to the alignment quality. Although we confirmed that aligning larger forests slightly improved recall and precision, the improvements were statistically insignificant. The parsing quality was not much affected by phrase alignment, which is further investigated in the following. Finally, our method achieved 98% reachability, where 2% of unreachable cases were due to the beam search. While understanding that the reachability depends on experimental data, ours is notably higher than that of SCFG, reported as 9.1% in (Weese et al., 2014). These results show the ability of our method to accurately align paraphrases with divergent phrase correspondences. 
| [1, 2, 1, 2, 2, 1, 1, 2, 1, 1, 2, 1, 2, 1] | ['Table 3 shows the performance on the test set for variations of our method and that of the human annotators.', 'The last column shows the percentage of pairs where a root pair is reached to be aligned, called reachability.', 'Our method is denoted as Proposed, while its variations include a method with only monotonic alignment (monotonic), without EM (w/o EM), and a method aligning only 1-best trees (1-best tree).', 'The performance of the human annotators was assessed by considering one annotator as the test and the other two as the gold-standard, and then taking the averages, which is the same setting as our method.', 'We regard this as the pseudo inter annotator agreement, since the conventional interannotator agreement is not directly applicable due to variations in aligned phrases.', 'Our method significantly outperforms the others as it achieved the highest recall and precision for alignment quality.', 'Our recall and precision reach 92% and 89% of those of humans, respectively.', 'Non-compositional alignment is shown to contribute to alignment quality, while the feature enhanced EM is effective for both the alignment and parsing quality.', 'Comparing our method and the one aligning only 1-best trees demonstrates that the alignment of parse forests largely contributes to the alignment quality.', 'Although we confirmed that aligning larger forests slightly improved recall and precision, the improvements were statistically insignificant.', 'The parsing quality was not much affected by phrase alignment, which is further investigated in the following.', 'Finally, our method achieved 98% reachability, where 2% of unreachable cases were due to the beam search.', 'While understanding that the reachability depends on experimental data, ours is notably higher than that of SCFG, reported as 9.1% in (Weese et al., 2014).', 'These results show the ability of our method to accurately align paraphrases with divergent phrase correspondences.'] | [None, ['%'], ['Proposed', 'Monotonic', 'w/o EM', '1-best tree'], ['Human'], None, ['Proposed', 'Recall', 'Prec.'], ['Recall', 'Prec.', 'Human'], ['UAS'], ['Proposed', '1-best tree', 'UAS'], ['1-best tree', 'Recall', 'Prec.'], None, ['Proposed', '%'], ['Proposed', '%'], ['Proposed']] | 1 |
D17-1004table_4 | Model performance on the test set of TACRED, micro-averaged over instances. LR = Logistic Regression. | 3 | [['Traditional', 'Model', 'Patterns'], ['Traditional', 'Model', 'LR'], ['Traditional', 'Model', 'LR + Patterns'], ['Neural', 'Model', 'CNN'], ['Neural', 'Model', 'CNN-PE'], ['Neural', 'Model', 'SDP-LSTM'], ['Neural', 'Model', 'LSTM'], ['Neural', 'Model', 'Our model'], ['-', 'Model', 'Ensemble']] | 1 | [['P'], ['R'], ['F1']] | [['85.3', '23.4', '36.8'], ['72.0', '47.8', '57.5'], ['71.4', '50.1', '58.9'], ['72.1', '50.3', '59.2'], ['68.2', '55.4', '61.1'], ['62.0', '54.8', '58.2'], ['61.4', '61.7', '61.5'], ['67.7', '63.2', '65.4'], ['69.4', '64.8', '67.0']] | column | ['P', 'R', 'F1'] | ['Our model', 'Ensemble'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Traditional || Model || Patterns</td> <td>85.3</td> <td>23.4</td> <td>36.8</td> </tr> <tr> <td>Traditional || Model || LR</td> <td>72.0</td> <td>47.8</td> <td>57.5</td> </tr> <tr> <td>Traditional || Model || LR + Patterns</td> <td>71.4</td> <td>50.1</td> <td>58.9</td> </tr> <tr> <td>Neural || Model || CNN</td> <td>72.1</td> <td>50.3</td> <td>59.2</td> </tr> <tr> <td>Neural || Model || CNN-PE</td> <td>68.2</td> <td>55.4</td> <td>61.1</td> </tr> <tr> <td>Neural || Model || SDP-LSTM</td> <td>62.0</td> <td>54.8</td> <td>58.2</td> </tr> <tr> <td>Neural || Model || LSTM</td> <td>61.4</td> <td>61.7</td> <td>61.5</td> </tr> <tr> <td>Neural || Model || Our model</td> <td>67.7</td> <td>63.2</td> <td>65.4</td> </tr> <tr> <td>- || Model || Ensemble</td> <td>69.4</td> <td>64.8</td> <td>67.0</td> </tr> </tbody></table> | Table 4 | table_4 | D17-1004 | 6 | emnlp2017 | Table 4 summarizes our results. We observe that all neural models achieve higher F1 scores than the logistic regression and patterns systems, which demonstrates the effectiveness of neural models for relation extraction. Although positional embeddings help increase the F1 by around 2% over the plain CNN model, a simple (2-layer) LSTM model performs surprisingly better than CNN and dependency-based models. Lastly, our proposed position-aware mechanism is very effective and achieves an F1 score of 65.4%, with an absolute increase of 3.9% over the best baseline neural model (LSTM) and 7.9% over the baseline logistic regression system. We also run an ensemble of our position-aware attention model which takes majority votes from 5 runs with random initializations and it further pushes the F1 score up by 1.6%. 
| [1, 1, 1, 1, 1] | ['Table 4 summarizes our results.', 'We observe that all neural models achieve higher F1 scores than the logistic regression and patterns systems, which demonstrates the effectiveness of neural models for relation extraction.', 'Although positional embeddings help increase the F1 by around 2% over the plain CNN model, a simple (2-layer) LSTM model performs surprisingly better than CNN and dependency-based models.', 'Lastly, our proposed position-aware mechanism is very effective and achieves an F1 score of 65.4%, with an absolute increase of 3.9% over the best baseline neural model (LSTM) and 7.9% over the baseline logistic regression system.', 'We also run an ensemble of our position-aware attention model which takes majority votes from 5 runs with random initializations and it further pushes the F1 score up by 1.6%.'] | [None, ['Neural', 'F1', 'LR + Patterns'], ['CNN-PE', 'F1', 'CNN', 'LSTM', 'SDP-LSTM'], ['Our model', 'F1', 'LSTM', 'LR'], ['Ensemble', 'Our model', 'F1']] | 1 |
D17-1004table_5 | Model performance on TAC KBP 2015 slot filling evaluation, micro-averaged over queries. Hop-0 scores are calculated on the simple single-hop slot filling results; hop-1 scores are calculated on slot filling results chained on systems’ hop-0 predictions; hop-all scores are calculated based on the combination of the two. LR = logistic regression. | 2 | [['Model', 'Patterns'], ['Model', 'LR'], ['Model', 'LR + Patterns (2015 winning system)'], ['Model', 'LR trained on TACRED'], ['Model', 'LR trained on TACRED + Patterns'], ['Model', 'Our model'], ['Model', 'Our model + Patterns']] | 2 | [['Hop-0', 'P'], ['Hop-0', 'R'], ['Hop-0', 'F1'], ['Hop-1', 'P'], ['Hop-1', 'R'], ['Hop-1', 'F1'], ['Hop-all', 'P'], ['Hop-all', 'R'], ['Hop-all', 'F1']] | [['63.8', '17.7', '27.7', '49.3', '8.6', '14.7', '58.9', '13.3', '21.8'], ['36.6', '21.9', '27.4', '15.1', '10.1', '12.2', '25.6', '16.3', '19.9'], ['37.5', '24.5', '29.7', '16.5', '12.8', '14.4', '26.6', '19.0', '22.2'], ['32.7', '20.6', '25.3', '7.9', '9.5', '8.6', '16.8', '15.3', '16.0'], ['36.5', '26.5', '30.7', '11.0', '15.3', '12.8', '20.1', '21.2', '20.6'], ['39.0', '28.9', '33.2', '17.7', '13.9', '15.6', '28.2', '21.5', '24.4'], ['40.2', '31.5', '35.3', '19.4', '16.5', '17.8', '29.7', '24.2', '26.7']] | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['Our model', 'Our model + Patterns', 'F1'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Hop-0 || P</th> <th>Hop-0 || R</th> <th>Hop-0 || F1</th> <th>Hop-1 || P</th> <th>Hop-1 || R</th> <th>Hop-1 || F1</th> <th>Hop-all || P</th> <th>Hop-all || R</th> <th>Hop-all || F1</th> </tr> </thead> <tbody> <tr> <td>Model || Patterns</td> <td>63.8</td> <td>17.7</td> <td>27.7</td> <td>49.3</td> <td>8.6</td> <td>14.7</td> <td>58.9</td> <td>13.3</td> <td>21.8</td> </tr> <tr> <td>Model || LR</td> <td>36.6</td> <td>21.9</td> <td>27.4</td> <td>15.1</td> <td>10.1</td> <td>12.2</td> <td>25.6</td> <td>16.3</td> <td>19.9</td> </tr> <tr> <td>Model || LR + Patterns (2015 winning system)</td> <td>37.5</td> <td>24.5</td> <td>29.7</td> <td>16.5</td> <td>12.8</td> <td>14.4</td> <td>26.6</td> <td>19.0</td> <td>22.2</td> </tr> <tr> <td>Model || LR trained on TACRED</td> <td>32.7</td> <td>20.6</td> <td>25.3</td> <td>7.9</td> <td>9.5</td> <td>8.6</td> <td>16.8</td> <td>15.3</td> <td>16.0</td> </tr> <tr> <td>Model || LR trained on TACRED + Patterns</td> <td>36.5</td> <td>26.5</td> <td>30.7</td> <td>11.0</td> <td>15.3</td> <td>12.8</td> <td>20.1</td> <td>21.2</td> <td>20.6</td> </tr> <tr> <td>Model || Our model</td> <td>39.0</td> <td>28.9</td> <td>33.2</td> <td>17.7</td> <td>13.9</td> <td>15.6</td> <td>28.2</td> <td>21.5</td> <td>24.4</td> </tr> <tr> <td>Model || Our model + Patterns</td> <td>40.2</td> <td>31.5</td> <td>35.3</td> <td>19.4</td> <td>16.5</td> <td>17.8</td> <td>29.7</td> <td>24.2</td> <td>26.7</td> </tr> </tbody></table> | Table 5 | table_5 | D17-1004 | 7 | emnlp2017 | Table 5 presents our results. We find that: (1) by only training our logistic regression model on TACRED (in contrast to on the 2 million bootstrapped examples used in the 2015 Stanford system) and combining it with patterns, we obtain a higher hop-0 F1 score than the 2015 Stanford system, and a similar hop-all F1; (2) our proposed position-aware attention model substantially outperforms the 2015 Stanford system on all hop-0, hop-1 and hop-all F1 scores. 
Combining it with the patterns, we achieve a hop-all F1 of 26.7%, an absolute improvement of 4.5% over the previous state-of-the-art result. | [1, 1, 1] | ['Table 5 presents our results.', 'We find that: (1) by only training our logistic regression model on TACRED (in contrast to on the 2 million bootstrapped examples used in the 2015 Stanford system) and combining it with patterns, we obtain a higher hop-0 F1 score than the 2015 Stanford system, and a similar hop-all F1; (2) our proposed position-aware attention model substantially outperforms the 2015 Stanford system on all hop-0, hop-1 and hop-all F1 scores.', 'Combining it with the patterns, we achieve a hop-all F1 of 26.7%, an absolute improvement of 4.5% over the previous state-of-the-art result.'] | [None, ['LR trained on TACRED + Patterns', 'Hop-all', 'F1', 'Our model', 'Hop-0', 'Hop-1'], ['Our model + Patterns', 'Hop-all', 'F1']] | 1 |
D17-1006table_3 | Final results. | 2 | [['Method', 'PMI'], ['Method', 'Bigram'], ['Method', 'Event-Comp'], ['Method', 'RNN'], ['Method', 'MemNet']] | 1 | [['G&C16'], ['C&J08']] | [['30.52', '30.92'], ['29.67', '25.43'], ['49.57', '43.28'], ['45.74', '43.17'], ['55.12', '46.67']] | column | ['accuracy', 'accuracy'] | ['MemNet'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>G&C16</th> <th>C&J08</th> </tr> </thead> <tbody> <tr> <td>Method || PMI</td> <td>30.52</td> <td>30.92</td> </tr> <tr> <td>Method || Bigram</td> <td>29.67</td> <td>25.43</td> </tr> <tr> <td>Method || Event-Comp</td> <td>49.57</td> <td>43.28</td> </tr> <tr> <td>Method || RNN</td> <td>45.74</td> <td>43.17</td> </tr> <tr> <td>Method || MemNet</td> <td>55.12</td> <td>46.67</td> </tr> </tbody></table> | Table 3 | table_3 | D17-1006 | 8 | emnlp2017 | 5.4 Final Results Table 3 shows the final results on the G&C 16 and C&J08 datasets, respectively. We compare the results of our final model with the following baselines:. • PMI is the co-occurrence based model of Chambers and Jurafsky (2008), who calculate event pair relations based on Pointwise Mutual Information (PMI), scoring each candidate event ec by the sum of PMI scores between the given events e0, e1, ..., en−1 and the candidate. • Bigram is the counting based model of Jans et al. (2012), calculating event pair relations based on skip bigram probabilities, trained using maximum likelihood estimation. • Event-Comp is the neural event relation model proposed by Granroth-Wilding and Clark (2016). They learn event representations by calculating pair-wise event scores using a Siamese network. • RNN is the method of Pichotta and Mooney (2016), who model event chains by directly using hc in Section 4.2 to predict the output, rather than taking them as features for event pair relation modeling. • MemNet is the proposed deep memory network model. Our reimplementation of PMI and Bigrams follows (Granroth-Wilding and Clark, 2016). It can be seen from the table that the statistical counting-based models PMI and Bigram significantly underperform the neural network models Event-Comp, RNN and MemNet, which is largely due to their sparsity and lack of semantic representation power. Under our event representation, Bigram does not outperform PMI significantly either, although considering the order of event pairs. This is likely due to sparsity of events when all arguments are considered. Direct comparison between Event-Comp and RNN shows that the event-pair model gives comparable results to the strong-order LSTM model. Although Granroth-Wilding and Clark (2016) and Pichotta and Mooney (2016) both compared with statistical baselines, they did not make direct comparisons between their methods, which represent two different approaches to the task. Our results show that they each have their unique advantages, which confirm our intuition in the introduction. By considering both pairwise relations and chain temporal orders, our method significantly outperform both Event-Comp and RNN (p − value < 0.01 using t-test), giving the best reported results on both datasets. 
| [1, 1, 2, 2, 2, 2, 2, 2, 1, 1, 2, 1, 2, 2, 1] | ['5.4 Final Results Table 3 shows the final results on the G&C 16 and C&J08 datasets, respectively.', 'We compare the results of our final model with the following baselines:.', '• PMI is the co-occurrence based model of Chambers and Jurafsky (2008), who calculate event pair relations based on Pointwise Mutual Information (PMI), scoring each candidate event ec by the sum of PMI scores between the given events e0, e1, ..., en−1 and the candidate.', '• Bigram is the counting based model of Jans et al. (2012), calculating event pair relations based on skip bigram probabilities, trained using maximum likelihood estimation.', '• Event-Comp is the neural event relation model proposed by Granroth-Wilding and Clark (2016). They learn event representations by calculating pair-wise event scores using a Siamese network.', '• RNN is the method of Pichotta and Mooney (2016), who model event chains by directly using hc in Section 4.2 to predict the output, rather than taking them as features for event pair relation modeling.', '• MemNet is the proposed deep memory network model.', 'Our reimplementation of PMI and Bigrams follows (Granroth-Wilding and Clark, 2016).', 'It can be seen from the table that the statistical counting-based models PMI and Bigram significantly underperform the neural network models Event-Comp, RNN and MemNet, which is largely due to their sparsity and lack of semantic representation power.', 'Under our event representation, Bigram does not outperform PMI significantly either, although considering the order of event pairs.', 'This is likely due to sparsity of events when all arguments are considered.', 'Direct comparison between Event-Comp and RNN shows that the event-pair model gives comparable results to the strong-order LSTM model.', 'Although Granroth-Wilding and Clark (2016) and Pichotta and Mooney (2016) both compared with statistical baselines, they did not make direct comparisons between their methods, which represent two different approaches to the task.', 'Our results show that they each have their unique advantages, which confirm our intuition in the introduction.', 'By considering both pairwise relations and chain temporal orders, our method significantly outperform both Event-Comp and RNN (p − value < 0.01 using t-test), giving the best reported results on both datasets.'] | [['G&C16', 'C&J08'], None, ['PMI'], ['Bigram'], ['Event-Comp'], ['RNN'], ['MemNet'], ['PMI', 'Bigram'], ['PMI', 'Bigram', 'Event-Comp', 'RNN', 'MemNet'], ['Bigram', 'PMI'], None, ['Event-Comp', 'RNN', 'MemNet'], None, None, ['MemNet', 'Event-Comp', 'RNN']] | 1 |
D17-1214table_1 | Results for SWEAR compared to top published results on the WikiReading test set. | 2 | [['Model', 'Placeholder seq2seq (HE16)'], ['Model', 'SoftAttend (CH17)'], ['Model', 'Reinforce (CH17)'], ['Model', 'Placeholder seq2seq (CH17)'], ['Model', 'SWEAR (w/ zeros)'], ['Model', 'SWEAR']] | 1 | [['Mean F1']] | [['71.8'], ['71.6'], ['74.5'], ['75.6'], ['76.4'], ['76.8']] | column | ['Mean F1'] | ['SWEAR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Mean F1</th> </tr> </thead> <tbody> <tr> <td>Model || Placeholder seq2seq (HE16)</td> <td>71.8</td> </tr> <tr> <td>Model || SoftAttend (CH17)</td> <td>71.6</td> </tr> <tr> <td>Model || Reinforce (CH17)</td> <td>74.5</td> </tr> <tr> <td>Model || Placeholder seq2seq (CH17)</td> <td>75.6</td> </tr> <tr> <td>Model || SWEAR (w/ zeros)</td> <td>76.4</td> </tr> <tr> <td>Model || SWEAR</td> <td>76.8</td> </tr> </tbody></table> | Table 1 | table_1 | D17-1214 | 4 | emnlp2017 | Before exploring unsupervised pre-training, we present summary results for SWEAR in a fully supervised setting, for comparison to previous work on the WikiReading task, namely that of Hewlett et al.(2016) and Choi et al.(2017), which we refer to as HE16 and CH17 in tables. Table 1 shows that SWEAR outperforms the best, results for various models reported in both publications, including the hierarchical models SoftAttend and Reinforce presented by Choi et al. (2017). Interestingly, SoftAttend computes an attention over sentence encodings, analogous to SWEAR’s attention over overlapping window encodings, but it does so on the basis of less powerful encoders (BoW or convolution vs RNN), suggesting that the extra computation spent by the RNN provides a meaningful boost to performance. For further experiments, results, and discussion see Section 5.2. | [2, 1, 1, 0] | ['Before exploring unsupervised pre-training, we present summary results for SWEAR in a fully supervised setting, for comparison to previous work on the WikiReading task, namely that of Hewlett et al.(2016) and Choi et al.(2017), which we refer to as HE16 and CH17 in tables.', 'Table 1 shows that SWEAR outperforms the best, results for various models reported in both publications, including the hierarchical models SoftAttend and Reinforce presented by Choi et al. (2017).', 'Interestingly, SoftAttend computes an attention over sentence encodings, analogous to SWEAR’s attention over overlapping window encodings, but it does so on the basis of less powerful encoders (BoW or convolution vs RNN), suggesting that the extra computation spent by the RNN provides a meaningful boost to performance.', 'For further experiments, results, and discussion see Section 5.2.'] | [['SWEAR', 'SWEAR (w/ zeros)', 'Placeholder seq2seq (HE16)', 'SoftAttend (CH17)', 'Reinforce (CH17)', 'Placeholder seq2seq (CH17)'], ['SWEAR', 'SoftAttend (CH17)', 'Reinforce (CH17)'], ['SWEAR', 'SoftAttend (CH17)', 'Mean F1'], None] | 1 |
D17-1214table_2 | Mean F1 for SWEAR on each type of property compared with the best results for each type reported in Hewlett et al. (2016), which come from different models. Other publications did not report these sub-scores. | 1 | [['Categorical'], ['Relational'], ['Date']] | 1 | [['HE16 Best'], ['SWEAR']] | [['88.6', '88.6'], ['56.5', '63.4'], ['73.8', '82.5']] | column | ['F1', 'F1'] | ['SWEAR', 'Relational', 'Date'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>HE16 Best</th> <th>SWEAR</th> </tr> </thead> <tbody> <tr> <td>Categorical</td> <td>88.6</td> <td>88.6</td> </tr> <tr> <td>Relational</td> <td>56.5</td> <td>63.4</td> </tr> <tr> <td>Date</td> <td>73.8</td> <td>82.5</td> </tr> </tbody></table> | Table 2 | table_2 | D17-1214 | 4 | emnlp2017 | To quantify the effect of initializing the window encoder with the question state, we report results for two variants of SWEAR: In SWEAR the window encoder is initialized with the question encoding, while in SWEAR w/ zeros, the window encoder is initialized with zeros. In both cases the question encoding is used for attention over the window encodings. For SWEAR w/ zeros it is additionally concatenated with the document encoding and passed through a 2-layer fully connected neural network before the decoding step. Conditioning on the question increases Mean F1 by 0.4. Hewlett et al.(2016) grouped properties by answer distribution: Categorical properties have a small list of possible answers, such as countries, Relational properties have an open set of answers, such as spouses or places of birth, and Date properties (a subset of relational properties) have date answers, such as date of birth. We reproduce this grouping in Table 2 to show that SWEAR improves performance for Relational and Date properties, demonstrating that it is better able to extract precise information from documents. | [0, 0, 0, 0, 1, 1] | ['To quantify the effect of initializing the window encoder with the question state, we report results for two variants of SWEAR: In SWEAR the window encoder is initialized with the question encoding, while in SWEAR w/ zeros, the window encoder is initialized with zeros.', 'In both cases the question encoding is used for attention over the window encodings.', 'For SWEAR w/ zeros it is additionally concatenated with the document encoding and passed through a 2-layer fully connected neural network before the decoding step.', 'Conditioning on the question increases Mean F1 by 0.4.', 'Hewlett et al.(2016) grouped properties by answer distribution: Categorical properties have a small list of possible answers, such as countries, Relational properties have an open set of answers, such as spouses or places of birth, and Date properties (a subset of relational properties) have date answers, such as date of birth.', 'We reproduce this grouping in Table 2 to show that SWEAR improves performance for Relational and Date properties, demonstrating that it is better able to extract precise information from documents.'] | [None, None, None, None, ['Categorical', 'Relational', 'Date', 'HE16 Best'], ['SWEAR', 'HE16 Best', 'Relational', 'Date']] | 1 |
D17-1214table_4 | Mean F1 results for SWEAR (fully supervised) and SWEAR-SS (semi-supervised) trained on 1%, 0.5%, and 0.1% subsets, respectively. Variants of SWEAR-SS indicate different sources of fixed encoder weights. | 2 | [['Model', 'SWEAR'], ['Model', 'SWEAR-SS (RAE)'], ['Model', 'SWEAR-SS (VRAE)']] | 1 | [['1%'], ['0.5%'], ['0.1%']] | [['63.5', '57.6', '39.5'], ['64.7', '62.8', '55.3'], ['65.7', '64.0', '60.7']] | column | ['F1', 'F1', 'F1'] | ['SWEAR-SS (VRAE)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>1%</th> <th>0.5%</th> <th>0.1%</th> </tr> </thead> <tbody> <tr> <td>Model || SWEAR</td> <td>63.5</td> <td>57.6</td> <td>39.5</td> </tr> <tr> <td>Model || SWEAR-SS (RAE)</td> <td>64.7</td> <td>62.8</td> <td>55.3</td> </tr> <tr> <td>Model || SWEAR-SS (VRAE)</td> <td>65.7</td> <td>64.0</td> <td>60.7</td> </tr> </tbody></table> | Table 4 | table_4 | D17-1214 | 8 | emnlp2017 | Table 4 and 5 show the results of SWEAR and semi-supervised models with pretrained and fixed embeddings. Results show that SWEAR-SS always improves over SWEAR at small data sizes, with the difference becoming dramatic as the dataset becomes very small. VRAE pretraining yields the best performance. As training and testing datasets have different distributions in per-property subsets, Mean F1 for supervised and semi-supervised models drops compared to uniform sampling. However, initialization with pretrained VRAE model leads to a substantial improvement on both subsamples. We further experimented by initializing the decoder (vs. only the encoder) with pretrained autoencoder weights but this resulted in a lower Mean F1. | [1, 1, 1, 2, 1, 2] | ['Table 4 and 5 show the results of SWEAR and semi-supervised models with pretrained and fixed embeddings.', 'Results show that SWEAR-SS always improves over SWEAR at small data sizes, with the difference becoming dramatic as the dataset becomes very small.', 'VRAE pretraining yields the best performance.', 'As training and testing datasets have different distributions in per-property subsets, Mean F1 for supervised and semi-supervised models drops compared to uniform sampling.', 'However, initialization with pretrained VRAE model leads to a substantial improvement on both subsamples.', 'We further experimented by initializing the decoder (vs. only the encoder) with pretrained autoencoder weights but this resulted in a lower Mean F1.'] | [None, ['SWEAR-SS (RAE)', 'SWEAR-SS (VRAE)', 'SWEAR'], ['SWEAR-SS (VRAE)'], ['SWEAR', '0.1%'], ['SWEAR-SS (VRAE)', '1%', '0.5%'], None] | 1
D17-1214table_6 | Results for semi-supervised reviewer models trained on the 1% subset of WikiReading. | 3 | [['Model', 'SWEAR-PR', '-'], ['Model', 'SWEAR-PR', 'dropout on input only'], ['Model', 'SWEAR-PR', 'no dropout'], ['Model', 'SWEAR-PR', 'shared reviewer cells'], ['Model', 'SWEAR-MLR', '-'], ['Model', 'SWEAR-MLR', 'w/o skip connections']] | 1 | [['Mean F1']] | [['66.5'], ['65.4'], ['64.6'], ['63.8'], ['63.0'], ['60.0']] | column | ['Mean F1'] | ['SWEAR-PR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Mean F1</th> </tr> </thead> <tbody> <tr> <td>Model || SWEAR-PR || -</td> <td>66.5</td> </tr> <tr> <td>Model || SWEAR-PR || dropout on input only</td> <td>65.4</td> </tr> <tr> <td>Model || SWEAR-PR || no dropout</td> <td>64.6</td> </tr> <tr> <td>Model || SWEAR-PR || shared reviewer cells</td> <td>63.8</td> </tr> <tr> <td>Model || SWEAR-MLR || -</td> <td>63.0</td> </tr> <tr> <td>Model || SWEAR-MLR || w/o skip connections</td> <td>60.0</td> </tr> </tbody></table> | Table 6 | table_6 | D17-1214 | 8 | emnlp2017 | Table 6 shows the results of semi-supervised reviewer models. When trained on 1% of the training data, SWEAR-MLR and the supervised SWEAR model perform similarly. Without using skip connections between embedding and hidden layers, the performance drops. The SWEAR-PR model further improves Mean F1 and outperforms the strongest SWEAR-SS model, even without fine-tuning the weights initialized from the autoencoder. | [1, 1, 1, 1] | ['Table 6 shows the results of semi-supervised reviewer models.', 'When trained on 1% of the training data, SWEAR-MLR and the supervised SWEAR model perform similarly.', 'Without using skip connections between embedding and hidden layers, the performance drops.', 'The SWEAR-PR model further improves Mean F1 and outperforms the strongest SWEAR-SS model, even without fine-tuning the weights initialized from the autoencoder.'] | [None, ['SWEAR-MLR', 'SWEAR-PR', 'Mean F1'], ['w/o skip connections', 'Mean F1'], ['SWEAR-PR', 'Mean F1']] | 1
D17-1215table_5 | Transferability of adversarial examples across models. Each row measures performance on adversarial examples generated to target one particular model; each column evaluates one (possibly different) model on these examples. | 3 | [['Targeted Model', 'ADDSENT', 'ML Single'], ['Targeted Model', 'ADDSENT', 'ML Ens.'], ['Targeted Model', 'ADDSENT', 'BiDAF Single'], ['Targeted Model', 'ADDSENT', 'BiDAF Ens.'], ['Targeted Model', 'ADDANY', 'ML Single'], ['Targeted Model', 'ADDANY', 'ML Ens.'], ['Targeted Model', 'ADDANY', 'BiDAF Single'], ['Targeted Model', 'ADDANY', 'BiDAF Ens.']] | 3 | [['Model under Evaluation', 'ML', 'Single'], ['Model under Evaluation', 'ML', 'Ens.'], ['Model under Evaluation', 'BiDAF', 'Single'], ['Model under Evaluation', 'BiDAF', 'Ens.']] | [['27.3', '33.4', '40.3', '39.1'], ['31.6', '29.4', '40.2', '38.7'], ['32.7', '34.8', '34.3', '37.4'], ['32.7', '34.2', '38.3', '34.2'], ['7.6', '54.1', '57.1', '60.9'], ['44.9', '11.7', '50.4', '54.8'], ['58.4', '60.5', '4.8', '46.4'], ['48.8', '51.1', '25', '2.7']] | column | ['F1', 'F1', 'F1', 'F1'] | ['ADDSENT', 'ADDANY'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Model under Evaluation || ML || Single</th> <th>Model under Evaluation || ML || Ens.</th> <th>Model under Evaluation || BiDAF || Single</th> <th>Model under Evaluation || BiDAF || Ens.</th> </tr> </thead> <tbody> <tr> <td>Targeted Model || ADDSENT || ML Single</td> <td>27.3</td> <td>33.4</td> <td>40.3</td> <td>39.1</td> </tr> <tr> <td>Targeted Model || ADDSENT || ML Ens.</td> <td>31.6</td> <td>29.4</td> <td>40.2</td> <td>38.7</td> </tr> <tr> <td>Targeted Model || ADDSENT || BiDAF Single</td> <td>32.7</td> <td>34.8</td> <td>34.3</td> <td>37.4</td> </tr> <tr> <td>Targeted Model || ADDSENT || BiDAF Ens.</td> <td>32.7</td> <td>34.2</td> <td>38.3</td> <td>34.2</td> </tr> <tr> <td>Targeted Model || ADDANY || ML Single</td> <td>7.6</td> <td>54.1</td> <td>57.1</td> <td>60.9</td> </tr> <tr> <td>Targeted Model || ADDANY || ML Ens.</td> <td>44.9</td> <td>11.7</td> <td>50.4</td> <td>54.8</td> </tr> <tr> <td>Targeted Model || ADDANY || BiDAF Single</td> <td>58.4</td> <td>60.5</td> <td>4.8</td> <td>46.4</td> </tr> <tr> <td>Targeted Model || ADDANY || BiDAF Ens.</td> <td>48.8</td> <td>51.1</td> <td>25</td> <td>2.7</td> </tr> </tbody></table> | Table 5 | table_5 | D17-1215 | 7 | emnlp2017 | Table 5 shows the results of evaluating the four main models on adversarial examples generated by running either ADDSENT or ADDANY against each model. ADDSENT adversarial examples transfer between models quite effectively; in particular, they are harder than ADDONESENT examples, which implies that examples that fool one model are more likely to fool other models. The ADDANY adversarial examples exhibited more limited transferability between models. For both ADDSENT and ADDANY, examples transferred slightly better between single and ensemble versions of the same model. 
| [1, 2, 2, 1] | ['Table 5 shows the results of evaluating the four main models on adversarial examples generated by running either ADDSENT or ADDANY against each model.', 'ADDSENT adversarial examples transfer between models quite effectively; in particular, they are harder than ADDONESENT examples, which implies that examples that fool one model are more likely to fool other models.', 'The ADDANY adversarial examples exhibited more limited transferability between models.', ' For both ADDSENT and ADDANY, examples transferred slightly better between single and ensemble versions of the same model.'] | [['ADDSENT', 'ADDANY'], ['ADDSENT'], ['ADDANY'], ['ML Single', 'BiDAF Single', 'ML Ens.', 'BiDAF Ens.']] | 1 |
D17-1216table_4 | Comparison of accuracy for our model and three baselines on RocStories Spring 2016 Test Set. The result of DSSM is adapted from (Mostafazadeh et al., 2016a). | 2 | [['System', 'Narrative Event Chain'], ['System', 'DSSM'], ['System', 'RNN Model'], ['Our Model', ' -']] | 1 | [['Accuracy']] | [['57.62%'], ['58.52%'], ['58.93%'], ['67.02%']] | column | ['Accuracy'] | ['Our Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>System || Narrative Event Chain</td> <td>57.62%</td> </tr> <tr> <td>System || DSSM</td> <td>58.52%</td> </tr> <tr> <td>System || RNN Model</td> <td>58.93%</td> </tr> <tr> <td>Our Model || -</td> <td>67.02%</td> </tr> </tbody></table> | Table 4 | table_4 | D17-1216 | 7 | emnlp2017 | Table 4 shows the results. From this table, we can see that :. 1) Our model outperforms all baselines significantly. Compared with baselines, the accuracy improvement on test set is at least 13.7%. This demonstrates the effectiveness of our model by mining and exploiting heterogeneous knowledge. 2) The event narrative knowledge only is insufficient for commonsense machine comprehension. Compared with Narrative Event Chain Model, our model achieves a 16.3% accuracy improvement by considering richer commonsense knowledge, rather than only narrative event knowledge. 3) It is necessary to distinguish different kinds of commonsense relations for machine comprehension and commonsense reasoning. Compared with DSSM and RNN, which model all relations between two elements using a single semantic similarity score, our model achieves significant accuracy improvements by modeling, distinguishing and selecting different types of commonsense relations between different kinds of elements. | [1, 2, 1, 1, 2, 2, 1, 2, 1] | ['Table 4 shows the results.', 'From this table, we can see that :.', '1) Our model outperforms all baselines significantly.', 'Compared with baselines, the accuracy improvement on test set is at least 13.7%.', 'This demonstrates the effectiveness of our model by mining and exploiting heterogeneous knowledge.', '2) The event narrative knowledge only is insufficient for commonsense machine comprehension.', 'Compared with Narrative Event Chain Model, our model achieves a 16.3% accuracy improvement by considering richer commonsense knowledge, rather than only narrative event knowledge.', '3) It is necessary to distinguish different kinds of commonsense relations for machine comprehension and commonsense reasoning.', ' Compared with DSSM and RNN, which model all relations between two elements using a single semantic similarity score, our model achieves significant accuracy improvements by modeling, distinguishing and selecting different types of commonsense relations between different kinds of elements.'] | [None, None, ['Our Model', 'Accuracy'], ['Our Model', 'Accuracy'], ['Our Model'], ['Narrative Event Chain'], ['Narrative Event Chain', 'Accuracy'], ['DSSM', 'RNN Model'], ['DSSM', 'RNN Model', 'Our Model', 'Accuracy']] | 1
D17-1216table_5 | Comparison of the performance using single type of knowledge. | 2 | [['System', 'Event Narrative Knowledge'], ['System', 'Entity Semantic Knowledge'], ['System', 'Sentiment Coherent Knowledge'], ['Our Model (All Knowledge)', '-']] | 1 | [['Accuracy']] | [['60.98%'], ['57.14%'], ['61.30%'], ['67.02%']] | column | ['Accuracy'] | ['Our Model (All Knowledge)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>System || Event Narrative Knowledge</td> <td>60.98%</td> </tr> <tr> <td>System || Entity Semantic Knowledge</td> <td>57.14%</td> </tr> <tr> <td>System || Sentiment Coherent Knowledge</td> <td>61.30%</td> </tr> <tr> <td>Our Model (All Knowledge) || -</td> <td>67.02%</td> </tr> </tbody></table> | Table 5 | table_5 | D17-1216 | 7 | emnlp2017 | The first group of experiments was conducted using only one kind of knowledge at a time in our model. Table 5 shows the results. We can see that using a single kind of knowledge is insufficient for commonsense machine comprehension: all single-knowledge settings cannot achieve competitive performance to the all-knowledge setting. | [2, 1, 1] | ['The first group of experiments was conducted using only one kind of knowledge at a time in our model.', 'Table 5 shows the results.', 'We can see that using a single kind of knowledge is insufficient for commonsense machine comprehension: all single-knowledge settings cannot achieve competitive performance to the all-knowledge setting.'] | [['Our Model (All Knowledge)'], None, ['Our Model (All Knowledge)', 'Accuracy']] | 1 |
D17-1216table_7 | Comparison of the performance using different inference rule selection mechanism. | 2 | [['System', 'Minimum Cost Mechanism'], ['System', 'Average Cost Mechanism'], ['Our Model (Attention Mechanism)', ' -']] | 1 | [['Accuracy']] | [['54.84%'], ['63.01%'], ['67.02%']] | column | ['Accuracy'] | ['Our Model (Attention Mechanism)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>System || Minimum Cost Mechanism</td> <td>54.84%</td> </tr> <tr> <td>System || Average Cost Mechanism</td> <td>63.01%</td> </tr> <tr> <td>Our Model (Attention Mechanism) || -</td> <td>67.02%</td> </tr> </tbody></table> | Table 7 | table_7 | D17-1216 | 10 | emnlp2017 | Table 7 shows the results. We can see that: 1) the minimum cost mechanism cannot achieve competitive performance, we believe this is because the selection of rules should not depend on the cost of them, and considering all valid inferences is critical for reasoning; 2) our attention mechanism can effectively model the inference rule selection possibility. Compared with the average cost mechanism, our method achieved a 6.36% accuracy improvement. This also verified the necessity of an effective inference rule probability model. | [1, 1, 1, 2] | ['Table 7 shows the results.', 'We can see that: 1) the minimum cost mechanism cannot achieve competitive performance, we believe this is because the selection of rules should not depend on the cost of them, and considering all valid inferences is critical for reasoning; 2) our attention mechanism can effectively model the inference rule selection possibility.', 'Compared with the average cost mechanism, our method achieved a 6.36% accuracy improvement.', 'This also verified the necessity of an effective inference rule probability model.'] | [None, ['Minimum Cost Mechanism', 'Accuracy', 'Our Model (Attention Mechanism)'], ['Average Cost Mechanism', 'Accuracy'], None] | 1
D17-1216table_8 | Comparison of the performance by removing negation rules. | 2 | [['System', 'Our Model'], ['System', '-w/o Negation Rules']] | 1 | [['Accuracy']] | [['67.02%'], ['63.12%']] | column | ['Accuracy'] | ['-w/o Negation Rules'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>System || Our Model</td> <td>67.02%</td> </tr> <tr> <td>System || -w/o Negation Rules</td> <td>63.12%</td> </tr> </tbody></table> | Table 8 | table_8 | D17-1216 | 10 | emnlp2017 | Table 8 shows the results. We can see that removing negation rules will significantly drop the system performance, which confirms the effectiveness of our proposed negation rules. | [1, 1] | ['Table 8 shows the results.', 'We can see that removing negation rules will significantly drop the system performance, which confirms the effectiveness of our proposed negation rules.'] | [None, ['-w/o Negation Rules', 'Accuracy', 'Our Model']] | 1
D17-1218table_3 | Cross-domain experiments, best values per column are highlighted, in-domain results (for comparison) in italics; results only for selected systems. For each source/target combination we show two scores: Macro-F1 score (left-hand column) and F1 score for claims (right-hand column). | 4 | [['Source/Sys.', 'CNN-rand', '-', 'MT'], ['Source/Sys.', 'CNN-rand', '-', 'OC'], ['Source/Sys.', 'CNN-rand', '-', 'PE'], ['Source/Sys.', 'CNN-rand', '-', 'VG'], ['Source/Sys.', 'CNN-rand', '-', 'WD'], ['Source/Sys.', 'CNN-rand', '-', 'WTP'], ['Source/Sys.', 'CNN-rand', '-', ' Average'], ['Source/Sys.', 'LR All Features', '-', 'MT'], ['Source/Sys.', 'LR All Features', '-', 'OC'], ['Source/Sys.', 'LR All Features', '-', 'PE'], ['Source/Sys.', 'LR All Features', '-', 'VG'], ['Source/Sys.', 'LR All Features', '-', 'WD'], ['Source/Sys.', 'LR All Features', '-', 'WTP'], ['Source/Sys.', 'LR All Features', '-', 'Average'], ['Source/Sys.', 'Single feature groups (averages across all source domains)', 'LR', '+Discourse'], ['Source/Sys.', 'Single feature groups (averages across all source domains)', 'LR', '+Embeddings'], ['Source/Sys.', 'Single feature groups (averages across all source domains)', 'LR', '+Lexical'], ['Source/Sys.', 'Single feature groups (averages across all source domains)', 'LR', '+Structure'], ['Source/Sys.', 'Single feature groups (averages across all source domains)', 'LR', '+Syntax'], ['Source/Sys.', 'baseline', '-', 'Majority bsl'], ['Source/Sys.', 'baseline', '-', 'Random bsl']] | 3 | [['Target', 'MT', 'Macro-F1'], ['Target', 'MT', 'F1'], ['Target', 'OC', 'Macro-F1'], ['Target', 'OC', 'F1'], ['Target', 'PE', 'Macro-F1'], ['Target', 'PE', 'F1'], ['Target', 'VG', 'Macro-F1'], ['Target', 'VG', 'F1'], ['Target', 'WD', 'Macro-F1'], ['Target', 'WD', 'F1'], ['Target', 'WTP', 'Macro-F1'], ['Target', 'WTP', 'F1'], ['Target', 'Average', 'Macro-F1'], ['Target', 'Average', 'F1']] | [['78.6', '67.3', '51', '7.4', '56.9', '22.1', '57.2', '15.7', '52.4', '9.4', '49.4', '10.9', '53.4', '13.1'], ['57.1', '39.7', '60.5', '25.6', '56.4', '42.8', '58.9', '37.3', '54.6', '13.2', '58.4', '28.9', '57.1', '32.4'], ['59.8', '18', '54.2', '9.5', '73.6', '61.1', '57.5', '18.7', '55.5', '15.9', '54.7', '16', '56.3', '15.6'], ['68.7', '51.5', '55.8', '19.2', '57', '32', '65.9', '45', '51.7', '10.5', '54.7', '22', '57.6', '27'], ['64.4', '3.5', '51.3', '1.3', '41.3', '0', '44.5', '0', '61.1', '25.8', '46.7', '0', '49.6', '1'], ['58.5', '26.6', '56.8', '15.4', '56', '18.5', '55.3', '19.4', '52.9', '11.6', '58.6', '28.9', '55.9', '18.3'], ['61.7', '27.9', '53.8', '10.6', '53.5', '23.1', '54.7', '18.2', '53.4', '12.1', '52.8', '15.6', '55', '17.9'], ['74.4', '62.7', '53.9', '17', '51.9', '29.5', '56.1', '34.2', '55.1', '14.5', '52.5', '21.2', '53.9', '23.3'], ['60', '45.1', '59.9', '22.9', '56.7', '47', '58.6', '38', '54.1', '12.2', '57.7', '27.5', '57.4', '34'], ['58.1', '36.3', '54.6', '17.3', '70.6', '60.6', '54.1', '21.4', '54', '13.5', '54.4', '20.4', '55', '21.8'], ['65.8', '51.4', '57.3', '21.7', '57', '45.1', '62.5', '42.6', '54.5', '13.1', '55.1', '24.8', '57.9', '31.2'], ['62.6', '38.5', '55.4', '19', '56', '30.1', '55.1', '23.3', '63.8', '23.3', '53.6', '20.9', '56.5', '26.3'], ['58', '41.7', '56.1', '20.3', '56.8', '42.6', '59.1', '38', '52.2', '11.2', '59.7', '30.2', '56.5', '30.8'], ['60.9', '42.6', '55.5', '19.1', '55.7', '38.9', '56.6', '31', '54', '12.9', '54.7', '23', '56.2', '27.9'], ['40.2', '15', '31.7', '5.8', '30.3', '27.4', '27.7', '19.9', '40.9', '4.5', '25.3', '13.3', '32.7', 
'14.3'], ['56.6', '35.2', '51.4', '12.8', '53.6', '30.7', '53.3', '24.3', '54.2', '13.2', '52.9', '19', '53.7', '22.5'], ['61', '42.2', '55.2', '18.3', '56.2', '38.6', '54.7', '29.1', '53.1', '11.9', '54.9', '23.4', '55.9', '27.2'], ['44.2', '22.9', '53.6', '18.5', '52.5', '38.4', '53.6', '32.1', '49.1', '9', '53.4', '23.3', '51.1', '24'], ['54.8', '37', '54.2', '17.5', '54.3', '40.6', '55.7', '32', '53', '11.8', '53.8', '22.5', '54.3', '26.9'], ['42.9', '0', '48', '0', '41.3', '0', '44.5', '0', '48.6', '0', '46.7', '0', '45.3', '0'], ['47.5', '30.6', '50.5', '14', '51', '38.4', '51', '29.3', '49.3', '9.3', '50.3', '20.2', '49.9', '23.6']] | column | ['Macro-F1', 'F1', 'Macro-F1', 'F1', 'Macro-F1', 'F1', 'Macro-F1', 'F1', 'Macro-F1', 'F1', 'Macro-F1', 'F1', 'Macro-F1', 'F1'] | ['Target', 'MT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Target || MT || Macro-F1</th> <th>Target || MT || F1</th> <th>Target || OC || Macro-F1</th> <th>Target || OC || F1</th> <th>Target || PE || Macro-F1</th> <th>Target || PE || F1</th> <th>Target || VG || Macro-F1</th> <th>Target || VG || F1</th> <th>Target || WD || Macro-F1</th> <th>Target || WD || F1</th> <th>Target || WTP || Macro-F1</th> <th>Target || WTP || F1</th> <th>Target || Average || Macro-F1</th> <th>Target || Average || F1</th> </tr> </thead> <tbody> <tr> <td>Source/Sys. || CNN-rand || - || MT</td> <td>78.6</td> <td>67.3</td> <td>51</td> <td>7.4</td> <td>56.9</td> <td>22.1</td> <td>57.2</td> <td>15.7</td> <td>52.4</td> <td>9.4</td> <td>49.4</td> <td>10.9</td> <td>53.4</td> <td>13.1</td> </tr> <tr> <td>Source/Sys. || CNN-rand || - || OC</td> <td>57.1</td> <td>39.7</td> <td>60.5</td> <td>25.6</td> <td>56.4</td> <td>42.8</td> <td>58.9</td> <td>37.3</td> <td>54.6</td> <td>13.2</td> <td>58.4</td> <td>28.9</td> <td>57.1</td> <td>32.4</td> </tr> <tr> <td>Source/Sys. || CNN-rand || - || PE</td> <td>59.8</td> <td>18</td> <td>54.2</td> <td>9.5</td> <td>73.6</td> <td>61.1</td> <td>57.5</td> <td>18.7</td> <td>55.5</td> <td>15.9</td> <td>54.7</td> <td>16</td> <td>56.3</td> <td>15.6</td> </tr> <tr> <td>Source/Sys. || CNN-rand || - || VG</td> <td>68.7</td> <td>51.5</td> <td>55.8</td> <td>19.2</td> <td>57</td> <td>32</td> <td>65.9</td> <td>45</td> <td>51.7</td> <td>10.5</td> <td>54.7</td> <td>22</td> <td>57.6</td> <td>27</td> </tr> <tr> <td>Source/Sys. || CNN-rand || - || WD</td> <td>64.4</td> <td>3.5</td> <td>51.3</td> <td>1.3</td> <td>41.3</td> <td>0</td> <td>44.5</td> <td>0</td> <td>61.1</td> <td>25.8</td> <td>46.7</td> <td>0</td> <td>49.6</td> <td>1</td> </tr> <tr> <td>Source/Sys. || CNN-rand || - || WTP</td> <td>58.5</td> <td>26.6</td> <td>56.8</td> <td>15.4</td> <td>56</td> <td>18.5</td> <td>55.3</td> <td>19.4</td> <td>52.9</td> <td>11.6</td> <td>58.6</td> <td>28.9</td> <td>55.9</td> <td>18.3</td> </tr> <tr> <td>Source/Sys. || CNN-rand || - || Average</td> <td>61.7</td> <td>27.9</td> <td>53.8</td> <td>10.6</td> <td>53.5</td> <td>23.1</td> <td>54.7</td> <td>18.2</td> <td>53.4</td> <td>12.1</td> <td>52.8</td> <td>15.6</td> <td>55</td> <td>17.9</td> </tr> <tr> <td>Source/Sys. || LR All Features || - || MT</td> <td>74.4</td> <td>62.7</td> <td>53.9</td> <td>17</td> <td>51.9</td> <td>29.5</td> <td>56.1</td> <td>34.2</td> <td>55.1</td> <td>14.5</td> <td>52.5</td> <td>21.2</td> <td>53.9</td> <td>23.3</td> </tr> <tr> <td>Source/Sys. 
|| LR All Features || - || OC</td> <td>60</td> <td>45.1</td> <td>59.9</td> <td>22.9</td> <td>56.7</td> <td>47</td> <td>58.6</td> <td>38</td> <td>54.1</td> <td>12.2</td> <td>57.7</td> <td>27.5</td> <td>57.4</td> <td>34</td> </tr> <tr> <td>Source/Sys. || LR All Features || - || PE</td> <td>58.1</td> <td>36.3</td> <td>54.6</td> <td>17.3</td> <td>70.6</td> <td>60.6</td> <td>54.1</td> <td>21.4</td> <td>54</td> <td>13.5</td> <td>54.4</td> <td>20.4</td> <td>55</td> <td>21.8</td> </tr> <tr> <td>Source/Sys. || LR All Features || - || VG</td> <td>65.8</td> <td>51.4</td> <td>57.3</td> <td>21.7</td> <td>57</td> <td>45.1</td> <td>62.5</td> <td>42.6</td> <td>54.5</td> <td>13.1</td> <td>55.1</td> <td>24.8</td> <td>57.9</td> <td>31.2</td> </tr> <tr> <td>Source/Sys. || LR All Features || - || WD</td> <td>62.6</td> <td>38.5</td> <td>55.4</td> <td>19</td> <td>56</td> <td>30.1</td> <td>55.1</td> <td>23.3</td> <td>63.8</td> <td>23.3</td> <td>53.6</td> <td>20.9</td> <td>56.5</td> <td>26.3</td> </tr> <tr> <td>Source/Sys. || LR All Features || - || WTP</td> <td>58</td> <td>41.7</td> <td>56.1</td> <td>20.3</td> <td>56.8</td> <td>42.6</td> <td>59.1</td> <td>38</td> <td>52.2</td> <td>11.2</td> <td>59.7</td> <td>30.2</td> <td>56.5</td> <td>30.8</td> </tr> <tr> <td>Source/Sys. || LR All Features || - || Average</td> <td>60.9</td> <td>42.6</td> <td>55.5</td> <td>19.1</td> <td>55.7</td> <td>38.9</td> <td>56.6</td> <td>31</td> <td>54</td> <td>12.9</td> <td>54.7</td> <td>23</td> <td>56.2</td> <td>27.9</td> </tr> <tr> <td>Source/Sys. || Single feature groups (averages across all source domains) || LR || +Discourse</td> <td>40.2</td> <td>15</td> <td>31.7</td> <td>5.8</td> <td>30.3</td> <td>27.4</td> <td>27.7</td> <td>19.9</td> <td>40.9</td> <td>4.5</td> <td>25.3</td> <td>13.3</td> <td>32.7</td> <td>14.3</td> </tr> <tr> <td>Source/Sys. || Single feature groups (averages across all source domains) || LR || +Embeddings</td> <td>56.6</td> <td>35.2</td> <td>51.4</td> <td>12.8</td> <td>53.6</td> <td>30.7</td> <td>53.3</td> <td>24.3</td> <td>54.2</td> <td>13.2</td> <td>52.9</td> <td>19</td> <td>53.7</td> <td>22.5</td> </tr> <tr> <td>Source/Sys. || Single feature groups (averages across all source domains) || LR || +Lexical</td> <td>61</td> <td>42.2</td> <td>55.2</td> <td>18.3</td> <td>56.2</td> <td>38.6</td> <td>54.7</td> <td>29.1</td> <td>53.1</td> <td>11.9</td> <td>54.9</td> <td>23.4</td> <td>55.9</td> <td>27.2</td> </tr> <tr> <td>Source/Sys. || Single feature groups (averages across all source domains) || LR || +Structure</td> <td>44.2</td> <td>22.9</td> <td>53.6</td> <td>18.5</td> <td>52.5</td> <td>38.4</td> <td>53.6</td> <td>32.1</td> <td>49.1</td> <td>9</td> <td>53.4</td> <td>23.3</td> <td>51.1</td> <td>24</td> </tr> <tr> <td>Source/Sys. || Single feature groups (averages across all source domains) || LR || +Syntax</td> <td>54.8</td> <td>37</td> <td>54.2</td> <td>17.5</td> <td>54.3</td> <td>40.6</td> <td>55.7</td> <td>32</td> <td>53</td> <td>11.8</td> <td>53.8</td> <td>22.5</td> <td>54.3</td> <td>26.9</td> </tr> <tr> <td>Source/Sys. || baseline || - || Majority bsl</td> <td>42.9</td> <td>0</td> <td>48</td> <td>0</td> <td>41.3</td> <td>0</td> <td>44.5</td> <td>0</td> <td>48.6</td> <td>0</td> <td>46.7</td> <td>0</td> <td>45.3</td> <td>0</td> </tr> <tr> <td>Source/Sys. 
|| baseline || - || Random bsl</td> <td>47.5</td> <td>30.6</td> <td>50.5</td> <td>14</td> <td>51</td> <td>38.4</td> <td>51</td> <td>29.3</td> <td>49.3</td> <td>9.3</td> <td>50.3</td> <td>20.2</td> <td>49.9</td> <td>23.6</td> </tr> </tbody></table> | Table 3 | table_3 | D17-1218 | 7 | emnlp2017 | For all six datasets, training on different sources resulted in a performance drop. Table 3 lists the results of the best feature-based (LR All features) and deep learning (CNN-rand) systems, as well as single feature groups (averages over all source domains, results for individual source domains can be found in the supplementary material to this article). We note the biggest performance drops on the datasets which performed best in the in-domain setting (MT and PE). For the lowest scoring datasets, OC and WTP, the differences are only marginal when trained on a suitable dataset (VG and OC, respectively). The best feature-based approach outperforms the best deep learning approach in most scenarios. In particular, as opposed to the in-domain experiments, the difference of the Claim-F1 measure between the feature-based approaches and the deep learning approaches is striking. In the feature-based approaches, on average, a combination of all features yields the best results for both Macro-F1 and Claim-F1. When comparing single features, lexical ones do the best job. Looking at the best overall system (LR with all features), the average test results when training on different source datasets are between 54% Macro-F1 resp. 23% Claim-F1 (both MT) and 58% (VG) resp. 34% (OC). Depending on the goal that should be achieved, training on VG (highest average Macro-F1) or OC (highest average Claim-F1) seems to be the best choice when the domain of test data is unknown (we analyze this finding in more depth in §6). MT clearly gives the best results as target domain, followed by PE and VG. | [2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 1] | ['For all six datasets, training on different sources resulted in a performance drop.', 'Table 3 lists the results of the best feature-based (LR All features) and deep learning (CNN-rand) systems, as well as single feature groups (averages over all source domains, results for individual source domains can be found in the supplementary material to this article).', 'We note the biggest performance drops on the datasets which performed best in the in-domain setting (MT and PE).', 'For the lowest scoring datasets, OC and WTP, the differences are only marginal when trained on a suitable dataset (VG and OC, respectively).', 'The best feature-based approach outperforms the best deep learning approach in most scenarios.', 'In particular, as opposed to the in-domain experiments, the difference of the Claim-F1 measure between the feature-based approaches and the deep learning approaches is striking.', 'In the feature-based approaches, on average, a combination of all features yields the best results for both Macro-F1 and Claim-F1.', 'When comparing single features, lexical ones do the best job.', 'Looking at the best overall system (LR with all features), the average test results when training on different source datasets are between 54% Macro-F1 resp. 23% Claim-F1 (both MT) and 58% (VG) resp. 34% (OC).', 'Depending on the goal that should be achieved, training on VG (highest average Macro-F1) or OC (highest average Claim-F1) seems to be the best choice when the domain of test data is unknown (we analyze this finding in more depth in §6).', 'MT clearly gives the best results as target domain, followed by PE and VG.'] | [None, ['LR All Features', 'CNN-rand', 'Single feature groups (averages across all source domains)'], ['MT', 'PE', 'CNN-rand', 'LR All Features'], ['Source/Sys.', 'CNN-rand', 'OC', 'Target', 'WTP', 'LR All Features', 'VG'], None, ['Target', 'MT', 'Source/Sys.'], ['Target', 'Average'], None, ['Single feature groups (averages across all source domains)', 'LR', '+Embeddings', 'Target', 'VG', 'Source/Sys.', 'OC'], ['Target', 'VG', 'OC'], ['Target', 'MT', 'PE', 'VG']] | 1
D17-1219table_2 | Results for the full QG systems using BLEU 1–4, METEOR. The first stage of the two pipeline systems are the feature-rich linear model (LREG) and our best performing selection model respectively. | 3 | [['Model', 'Conservative', 'LREG(C&L)+ NQG'], ['Model', 'Conservative', 'Ours + NQG'], ['Model', 'Liberal', 'LREG(C&L)+ NQG'], ['Model', 'Liberal', 'Ours + NQG']] | 1 | [['BLEU 1'], ['BLEU 2'], ['BLEU 3'], ['BLEU 4'], ['METEOR']] | [['38.3', '23.15', '15.64', '10.97', '15.09'], ['40.08', '24.26', '16.39', '11.5', '15.67'], ['51.55', '40.17', '34.35', '30.59', '24.17'], ['52.89', '41.16', '35.15', '31.25', '24.76']] | column | ['BLEU 1', 'BLEU 2', 'BLEU 3', 'BLEU 4', 'METEOR'] | ['Ours + NQG'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU 1</th> <th>BLEU 2</th> <th>BLEU 3</th> <th>BLEU 4</th> <th>METEOR</th> </tr> </thead> <tbody> <tr> <td>Model || Conservative || LREG(C&L)+ NQG</td> <td>38.3</td> <td>23.15</td> <td>15.64</td> <td>10.97</td> <td>15.09</td> </tr> <tr> <td>Model || Conservative || Ours + NQG</td> <td>40.08</td> <td>24.26</td> <td>16.39</td> <td>11.5</td> <td>15.67</td> </tr> <tr> <td>Model || Liberal || LREG(C&L)+ NQG</td> <td>51.55</td> <td>40.17</td> <td>34.35</td> <td>30.59</td> <td>24.17</td> </tr> <tr> <td>Model || Liberal || Ours + NQG</td> <td>52.89</td> <td>41.16</td> <td>35.15</td> <td>31.25</td> <td>24.76</td> </tr> </tbody></table> | Table 2 | table_2 | D17-1219 | 4 | emnlp2017 | Table 2 shows that the QG system incorporating our best performing sentence extractor outperforms its LREG counterpart across metrics. Note that to calculate the score for the matching case, similar to our earlier work (Du et al., 2017), we adapt the image captioning evaluation scripts of Chen et al.(2015) since there can be several gold standard questions for a single input sentence. | [1, 2] | ['Table 2 shows that the QG system incorporating our best performing sentence extractor outperforms its LREG counterpart across metrics.', 'Note that to calculate the score for the matching case, similar to our earlier work (Du et al., 2017), we adapt the image captioning evaluation scripts of Chen et al.(2015) since there can be several gold standard questions for a single input sentence.'] | [['Ours + NQG'], ['Ours + NQG']] | 1 |
D17-1224table_1 | Comparison results on overall evaluation | 2 | [['Methods', 'Lead'], ['Methods', 'Coverage'], ['Methods', 'TextRank'], ['Methods', 'Centroid'], ['Methods', 'ILP'], ['Methods', 'ClusterCMRW'], ['Methods', 'Submodular'], ['Methods', 'SenDivRank'], ['Methods', 'Our Approach']] | 1 | [['R-1'], ['R-2'], ['R-SU4']] | [['0.48029', '0.16183', '0.21156'], ['0.48085', '0.15849', '0.20615'], ['0.49453', '0.1637', '0.21457'], ['0.48582', '0.16099', '0.20919'], ['0.49302', '0.16651', '0.21493'], ['0.49363', '0.17205', '0.22033'], ['0.50273', '0.16963', '0.21775'], ['0.48701', '0.17491', '0.22382'], ['0.50215', '0.18631', '0.23426']] | column | ['R-1', 'R-2', 'R-SU4'] | ['Our Approach'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-SU4</th> </tr> </thead> <tbody> <tr> <td>Methods || Lead</td> <td>0.48029</td> <td>0.16183</td> <td>0.21156</td> </tr> <tr> <td>Methods || Coverage</td> <td>0.48085</td> <td>0.15849</td> <td>0.20615</td> </tr> <tr> <td>Methods || TextRank</td> <td>0.49453</td> <td>0.1637</td> <td>0.21457</td> </tr> <tr> <td>Methods || Centroid</td> <td>0.48582</td> <td>0.16099</td> <td>0.20919</td> </tr> <tr> <td>Methods || ILP</td> <td>0.49302</td> <td>0.16651</td> <td>0.21493</td> </tr> <tr> <td>Methods || ClusterCMRW</td> <td>0.49363</td> <td>0.17205</td> <td>0.22033</td> </tr> <tr> <td>Methods || Submodular</td> <td>0.50273</td> <td>0.16963</td> <td>0.21775</td> </tr> <tr> <td>Methods || SenDivRank</td> <td>0.48701</td> <td>0.17491</td> <td>0.22382</td> </tr> <tr> <td>Methods || Our Approach</td> <td>0.50215</td> <td>0.18631</td> <td>0.23426</td> </tr> </tbody></table> | Table 1 | table_1 | D17-1224 | 3 | emnlp2017 | Firstly, we perform evaluation on the whole articles and Table 1 shows the comparison results. We can see that our approach outperforms all the baseline methods with respect to ROUGE-2 and ROUGE-SU4. The Submodular method achieves the highest ROUGE-1 score, but our approach also achieves very high ROUGE-1 score, which is very close to that of the Submodular method. | [1, 1, 1] | ['Firstly, we perform evaluation on the whole articles and Table 1 shows the comparison results.', 'We can see that our approach outperforms all the baseline methods with respect to ROUGE-2 and ROUGE-SU4.', 'The Submodular method achieves the highest ROUGE-1 score, but our approach also achieves very high ROUGE-1 score, which is very close to that of the Submodular method.'] | [None, ['Our Approach', 'R-2', 'R-SU4'], ['Submodular', 'R-1', 'Our Approach']] | 1 |
D17-1224table_2 | Comparison results on two-part evaluation I | 2 | [['Method', 'Lead'], ['Method', 'Coverage'], ['Method', 'TextRank'], ['Method', 'Centroid'], ['Method', 'ILP'], ['Method', 'ClusterCMRW'], ['Method', 'Submodular'], ['Method', 'SenDivRank'], ['Method', 'Our Approach']] | 1 | [['R-1'], ['R-2'], ['R-SU4']] | [['0.38757', '0.10631', '0.15138'], ['0.38932', '0.10399', '0.14714'], ['0.40246', '0.10651', '0.15327'], ['0.3891', '0.10297', '0.14774'], ['0.40004', '0.11256', '0.15641'], ['0.40565', '0.11855', '0.16195'], ['0.3999', '0.11044', '0.15442'], ['0.39462', '0.11575', '0.16028'], ['0.41913', '0.13369', '0.17735']] | column | ['R-1', 'R-2', 'R-SU4'] | ['Our Approach'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-SU4</th> </tr> </thead> <tbody> <tr> <td>Method || Lead</td> <td>0.38757</td> <td>0.10631</td> <td>0.15138</td> </tr> <tr> <td>Method || Coverage</td> <td>0.38932</td> <td>0.10399</td> <td>0.14714</td> </tr> <tr> <td>Method || TextRank</td> <td>0.40246</td> <td>0.10651</td> <td>0.15327</td> </tr> <tr> <td>Method || Centroid</td> <td>0.3891</td> <td>0.10297</td> <td>0.14774</td> </tr> <tr> <td>Method || ILP</td> <td>0.40004</td> <td>0.11256</td> <td>0.15641</td> </tr> <tr> <td>Method || ClusterCMRW</td> <td>0.40565</td> <td>0.11855</td> <td>0.16195</td> </tr> <tr> <td>Method || Submodular</td> <td>0.3999</td> <td>0.11044</td> <td>0.15442</td> </tr> <tr> <td>Method || SenDivRank</td> <td>0.39462</td> <td>0.11575</td> <td>0.16028</td> </tr> <tr> <td>Method || Our Approach</td> <td>0.41913</td> <td>0.13369</td> <td>0.17735</td> </tr> </tbody></table> | Table 2 | table_2 | D17-1224 | 4 | emnlp2017 | Table 2 shows the comparison results based on this evaluation protocol (two part evaluation I). Furthermore, we allow the first part in a reference article to match with the second part in a peer article, and vice versa. We allow one-to-one matching and find the optimal matching between the two sets of parts, which refers to the matching with the largest sum of the similarity values of the matched parts. We then compute and average the ROUGE scores of the matched parts. | [1, 2, 2, 1] | ['Table 2 shows the comparison results based on this evaluation protocol (two part evaluation I).', 'Furthermore, we allow the first part in a reference article to match with the second part in a peer article, and vice versa.', 'We allow one-to-one matching and find the optimal matching between the two sets of parts, which refers to the matching with the largest sum of the similarity values of the matched parts.', 'We then compute and average the ROUGE scores of the matched parts.'] | [None, None, None, ['R-1', 'R-2', 'R-SU4']] | 1 |
D17-1224table_4 | Manual evaluation results | 2 | [['Method', 'TextRank'], ['Method', 'Centroid'], ['Method', 'ILP'], ['Method', 'ClusterCMRW'], ['Method', 'Submodular'], ['Method', 'SenDivRank'], ['Method', 'Our Approach']] | 1 | [['Cov.'], ['Read.'], ['Overall']] | [['2.86', '2.34', '2.5'], ['2.83', '2.17', '2.33'], ['2.17', '1.17', '2.27'], ['3.33', '2.34', '2.83'], ['2.51', '2.03', '2.34'], ['3.51', '2.47', '0.86'], ['3.85', '3.32', '3.47']] | column | ['Cov.', 'Read.', 'Overall'] | ['Our Approach'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Cov.</th> <th>Read.</th> <th>Overall</th> </tr> </thead> <tbody> <tr> <td>Method || TextRank</td> <td>2.86</td> <td>2.34</td> <td>2.5</td> </tr> <tr> <td>Method || Centroid</td> <td>2.83</td> <td>2.17</td> <td>2.33</td> </tr> <tr> <td>Method || ILP</td> <td>2.17</td> <td>1.17</td> <td>2.27</td> </tr> <tr> <td>Method || ClusterCMRW</td> <td>3.33</td> <td>2.34</td> <td>2.83</td> </tr> <tr> <td>Method || Submodular</td> <td>2.51</td> <td>2.03</td> <td>2.34</td> </tr> <tr> <td>Method || SenDivRank</td> <td>3.51</td> <td>2.47</td> <td>0.86</td> </tr> <tr> <td>Method || Our Approach</td> <td>3.85</td> <td>3.32</td> <td>3.47</td> </tr> </tbody></table> | Table 4 | table_4 | D17-1224 | 4 | emnlp2017 | Table 4 shows the manual evaluation results. We can see that our proposed approach can produce news overview articles with better content coverage, readability and overall responsiveness than baseline methods. The quality of the news overview articles is generally acceptable by the human judges. | [1, 1, 2] | ['Table 4 shows the manual evaluation results.', 'We can see that our proposed approach can produce news overview articles with better content coverage, readability and overall responsiveness than baseline methods.', 'The quality of the news overview articles is generally acceptable by the human judges.'] | [None, ['Our Approach', 'Cov.', 'Read.', 'Overall', 'Submodular'], None] | 1 |
D17-1226table_1 | Comparisons of Feature Weights Learned Using In-doc or Cross-doc Coreferent Event Pairs, Euc: Euclidean Distance, Cos: Cosine Similarity | 2 | [['Features', 'Event Word Embedding: Euc'], ['Features', 'Event Word Embedding: Cos'], ['Features', 'Context Embedding: Euc'], ['Features', 'Context Embedding: Cos'], ['Features', 'Argument Embedding']] | 1 | [['WD'], ['CD']] | [['1.017', '0.207'], ['1.086', '1.142'], ['0.038', '0.422'], ['0.004', '3.91'], ['0.349', '3.27']] | column | ['F1', 'F1'] | ['WD', 'CD'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WD</th> <th>CD</th> </tr> </thead> <tbody> <tr> <td>Features || Event Word Embedding: Euc</td> <td>1.017</td> <td>0.207</td> </tr> <tr> <td>Features || Event Word Embedding: Cos</td> <td>1.086</td> <td>1.142</td> </tr> <tr> <td>Features || Context Embedding: Euc</td> <td>0.038</td> <td>0.422</td> </tr> <tr> <td>Features || Context Embedding: Cos</td> <td>0.004</td> <td>3.91</td> </tr> <tr> <td>Features || Argument Embedding</td> <td>0.349</td> <td>3.27</td> </tr> </tbody></table> | Table 1 | table_1 | D17-1226 | 6 | emnlp2017 | Table 1 shows the comparisons of feature weights. We can see that within-document event linking mainly relies on the euclidean distance and cosine similarity scores calculated using event word features, with a reasonable amount of weight assigned to overlapped arguments’ embedding as well. However, only very small weights were assigned to the similarity and distance scores calculated using context embeddings. In contrast, in the classifier trained with cross-doc coreferent event mention pairs, the highest weight was assigned to the cosine similarity score calculated using context embeddings of two event mentions. Additionally, both the cosine similarity score calculated using event word embeddings and the overlapped argument features were assigned high weights as well. The comparisons clearly demonstrate the significantly different nature of WD and CD event coreference. | [1, 1, 1, 1, 1, 1] | ['Table 1 shows the comparisons of feature weights.', 'We can see that within-document event linking mainly relies on the euclidean distance and cosine similarity scores calculated using event word features, with a reasonable amount of weight assigned to overlapped arguments’ embedding as well.', 'However, only very small weights were assigned to the similarity and distance scores calculated using context embeddings.', 'In contrast, in the classifier trained with cross-doc coreferent event mention pairs, the highest weight was assigned to the cosine similarity score calculated using context embeddings of two event mentions.', 'Additionally, both the cosine similarity score calculated using event word embeddings and the overlapped argument features were assigned high weights as well.', 'The comparisons clearly demonstrate the significantly different nature of WD and CD event coreference.'] | [None, ['WD', 'Event Word Embedding: Euc', 'Event Word Embedding: Cos'], ['WD', 'Context Embedding: Euc', 'Context Embedding: Cos'], ['CD', 'Context Embedding: Cos'], ['Event Word Embedding: Cos', 'WD', 'Argument Embedding'], ['WD', 'CD']] | 1
D17-1226table_3 | Withinand cross-document event coreference result on ECB+ Corpus. | 2 | [['Cross-Document Coreference Results', 'LEMMA'], ['Cross-Document Coreference Results', 'Common Classifier (WD)'], ['Cross-Document Coreference Results', 'Common Classifier (WD) + 2nd Order Relations'], ['Cross-Document Coreference Results', 'Common Classifier (CD)'], ['Cross-Document Coreference Results', 'Common Classifier (CD) + 2nd Order Relations'], ['Cross-Document Coreference Results', 'WD & CD Classifiers'], ['Cross-Document Coreference Results', 'WD & CD Classifiers + 2nd Order Relations (Full Model)'], ['Cross-Document Coreference Results', 'HDDCRP Yang et al. (2015)'], ['Cross-Document Coreference Results', 'HDP-LEX Bejan and Harabagiu (2010)'], ['Cross-Document Coreference Results', 'Agglomerative Chen et al. (2009)'], ['Within-Document Coreference Result', 'LEMMA'], ['Within-Document Coreference Result', 'Common Classifier (WD)'], ['Within-Document Coreference Result', 'Common Classifier (WD) + 2nd Order Relations'], ['Within-Document Coreference Result', 'Common Classifier (CD)'], ['Within-Document Coreference Result', 'Common Classifier (CD) + 2nd Order Relations'], ['Within-Document Coreference Result', 'WD & CD Classifiers'], ['Within-Document Coreference Result', 'WD & CD Classifiers + 2nd Order Relations (Full Model)'], ['Within-Document Coreference Result', 'HDDCRP Yang et al. (2015)'], ['Within-Document Coreference Result', 'HDP-LEX Bejan and Harabagiu (2010)'], ['Within-Document Coreference Result', 'Agglomerative Chen et al. (2009)']] | 2 | [['B3', 'R'], ['B3', 'P'], ['B3', 'F1'], ['MUC', 'R'], ['MUC', 'P'], ['MUC', 'F1'], ['CEAFEe', 'R'], ['CEAFEe', 'P'], ['CEAFEe', 'F1'], ['CoNLL', 'F1']] | [['39.5', '73.9', '51.4', '58.1', '78.2', '66.7', '58.9', '37.5', '46.2', '54.8'], ['46', '72.8', '56.4', '60.4', '76.8', '68.4', '59.5', '42.1', '49.3', '58'], ['48.8', '72.1', '58.2', '61.8', '78.9', '69.3', '59.3', '44.1', '50.6', '59.4'], ['44.9', '64.7', '53', '66.1', '66.4', '66.2', '51.9', '46.4', '49', '56.1'], ['52.2', '58.4', '55.1', '70.4', '66.2', '68.3', '54.1', '45.2', '49.2', '57.5'], ['49', '71.9', '58.3', '63.8', '78.9', '70.6', '59.3', '48.1', '53.1', '60.7'], ['56.2', '66.6', '61', '67.5', '80.4', '73.4', '59', '54.2', '56.5', '63.6'], ['40.6', '78.5', '53.5', '67.1', '80.3', '73.1', '68.9', '38.6', '49.5', '58.7'], ['43.7', '65.6', '52.5', '63.5', '75.5', '69', '60.2', '34.8', '44.1', '55.2'], ['40.2', '73.2', '51.9', '59.2', '78.3', '67.4', '65.6', '30.2', '41.1', '53.6'], ['56.8', '80.9', '66.7', '35.9', '76.2', '48.8', '67.4', '62.9', '65.1', '60.2'], ['59.7', '80.5', '68.6', '44.6', '75', '55.9', '68.2', '67.7', '67.9', '64.2'], ['62.7', '79.4', '70', '50.3', '75.2', '60.3', '68.6', '70.5', '69.5', '66.6'], ['65.2', '67.1', '66.1', '47.6', '53.9', '50.5', '69.2', '62.1', '65.5', '60.7'], ['66.9', '69.1', '68', '56.7', '55.1', '55.9', '70.4', '63.6', '66.8', '62.8'], ['63.8', '79.9', '70.9', '51.6', '75.3', '61.2', '68.6', '70.5', '69.5', '67.2'], ['69.2', '76', '72.4', '58.5', '67.3', '62.6', '67.9', '76.1', '71.8', '68.9'], ['67.3', '85.6', '75.4', '41.7', '74.3', '53.4', '79', '65.1', '71.7', '66.8'], ['67.6', '74.7', '71', '39.1', '50', '43.9', '71.4', '66.2', '68.7', '61.2'], ['67.6', '80.7', '73.5', '39.2', '61.9', '48', '76', '65.6', '70.4', '63.9']] | column | ['R', 'P', 'F1', 'R', 'P', 'F1', 'R', 'P', 'F1', 'F1'] | ['WD & CD Classifiers', 'WD & CD Classifiers + 2nd Order Relations (Full Model)'] | <table border='1' class='dataframe'> <thead> <tr 
style='text-align: right;'> <th></th> <th>B3 || R</th> <th>B3 || P</th> <th>B3 || F1</th> <th>MUC || R</th> <th>MUC || P</th> <th>MUC || F1</th> <th>CEAFEe || R</th> <th>CEAFEe || P</th> <th>CEAFEe || F1</th> <th>CoNLL || F1</th> </tr> </thead> <tbody> <tr> <td>Cross-Document Coreference Results || LEMMA</td> <td>39.5</td> <td>73.9</td> <td>51.4</td> <td>58.1</td> <td>78.2</td> <td>66.7</td> <td>58.9</td> <td>37.5</td> <td>46.2</td> <td>54.8</td> </tr> <tr> <td>Cross-Document Coreference Results || Common Classifier (WD)</td> <td>46</td> <td>72.8</td> <td>56.4</td> <td>60.4</td> <td>76.8</td> <td>68.4</td> <td>59.5</td> <td>42.1</td> <td>49.3</td> <td>58</td> </tr> <tr> <td>Cross-Document Coreference Results || Common Classifier (WD) + 2nd Order Relations</td> <td>48.8</td> <td>72.1</td> <td>58.2</td> <td>61.8</td> <td>78.9</td> <td>69.3</td> <td>59.3</td> <td>44.1</td> <td>50.6</td> <td>59.4</td> </tr> <tr> <td>Cross-Document Coreference Results || Common Classifier (CD)</td> <td>44.9</td> <td>64.7</td> <td>53</td> <td>66.1</td> <td>66.4</td> <td>66.2</td> <td>51.9</td> <td>46.4</td> <td>49</td> <td>56.1</td> </tr> <tr> <td>Cross-Document Coreference Results || Common Classifier (CD) + 2nd Order Relations</td> <td>52.2</td> <td>58.4</td> <td>55.1</td> <td>70.4</td> <td>66.2</td> <td>68.3</td> <td>54.1</td> <td>45.2</td> <td>49.2</td> <td>57.5</td> </tr> <tr> <td>Cross-Document Coreference Results || WD & CD Classifiers</td> <td>49</td> <td>71.9</td> <td>58.3</td> <td>63.8</td> <td>78.9</td> <td>70.6</td> <td>59.3</td> <td>48.1</td> <td>53.1</td> <td>60.7</td> </tr> <tr> <td>Cross-Document Coreference Results || WD & CD Classifiers + 2nd Order Relations (Full Model)</td> <td>56.2</td> <td>66.6</td> <td>61</td> <td>67.5</td> <td>80.4</td> <td>73.4</td> <td>59</td> <td>54.2</td> <td>56.5</td> <td>63.6</td> </tr> <tr> <td>Cross-Document Coreference Results || HDDCRP Yang et al. (2015)</td> <td>40.6</td> <td>78.5</td> <td>53.5</td> <td>67.1</td> <td>80.3</td> <td>73.1</td> <td>68.9</td> <td>38.6</td> <td>49.5</td> <td>58.7</td> </tr> <tr> <td>Cross-Document Coreference Results || HDP-LEX Bejan and Harabagiu (2010)</td> <td>43.7</td> <td>65.6</td> <td>52.5</td> <td>63.5</td> <td>75.5</td> <td>69</td> <td>60.2</td> <td>34.8</td> <td>44.1</td> <td>55.2</td> </tr> <tr> <td>Cross-Document Coreference Results || Agglomerative Chen et al. 
(2009)</td> <td>40.2</td> <td>73.2</td> <td>51.9</td> <td>59.2</td> <td>78.3</td> <td>67.4</td> <td>65.6</td> <td>30.2</td> <td>41.1</td> <td>53.6</td> </tr> <tr> <td>Within-Document Coreference Result || LEMMA</td> <td>56.8</td> <td>80.9</td> <td>66.7</td> <td>35.9</td> <td>76.2</td> <td>48.8</td> <td>67.4</td> <td>62.9</td> <td>65.1</td> <td>60.2</td> </tr> <tr> <td>Within-Document Coreference Result || Common Classifier (WD)</td> <td>59.7</td> <td>80.5</td> <td>68.6</td> <td>44.6</td> <td>75</td> <td>55.9</td> <td>68.2</td> <td>67.7</td> <td>67.9</td> <td>64.2</td> </tr> <tr> <td>Within-Document Coreference Result || Common Classifier (WD) + 2nd Order Relations</td> <td>62.7</td> <td>79.4</td> <td>70</td> <td>50.3</td> <td>75.2</td> <td>60.3</td> <td>68.6</td> <td>70.5</td> <td>69.5</td> <td>66.6</td> </tr> <tr> <td>Within-Document Coreference Result || Common Classifier (CD)</td> <td>65.2</td> <td>67.1</td> <td>66.1</td> <td>47.6</td> <td>53.9</td> <td>50.5</td> <td>69.2</td> <td>62.1</td> <td>65.5</td> <td>60.7</td> </tr> <tr> <td>Within-Document Coreference Result || Common Classifier (CD) + 2nd Order Relations</td> <td>66.9</td> <td>69.1</td> <td>68</td> <td>56.7</td> <td>55.1</td> <td>55.9</td> <td>70.4</td> <td>63.6</td> <td>66.8</td> <td>62.8</td> </tr> <tr> <td>Within-Document Coreference Result || WD & CD Classifiers</td> <td>63.8</td> <td>79.9</td> <td>70.9</td> <td>51.6</td> <td>75.3</td> <td>61.2</td> <td>68.6</td> <td>70.5</td> <td>69.5</td> <td>67.2</td> </tr> <tr> <td>Within-Document Coreference Result || WD & CD Classifiers + 2nd Order Relations (Full Model)</td> <td>69.2</td> <td>76</td> <td>72.4</td> <td>58.5</td> <td>67.3</td> <td>62.6</td> <td>67.9</td> <td>76.1</td> <td>71.8</td> <td>68.9</td> </tr> <tr> <td>Within-Document Coreference Result || HDDCRP Yang et al. (2015)</td> <td>67.3</td> <td>85.6</td> <td>75.4</td> <td>41.7</td> <td>74.3</td> <td>53.4</td> <td>79</td> <td>65.1</td> <td>71.7</td> <td>66.8</td> </tr> <tr> <td>Within-Document Coreference Result || HDP-LEX Bejan and Harabagiu (2010)</td> <td>67.6</td> <td>74.7</td> <td>71</td> <td>39.1</td> <td>50</td> <td>43.9</td> <td>71.4</td> <td>66.2</td> <td>68.7</td> <td>61.2</td> </tr> <tr> <td>Within-Document Coreference Result || Agglomerative Chen et al. (2009)</td> <td>67.6</td> <td>80.7</td> <td>73.5</td> <td>39.2</td> <td>61.9</td> <td>48</td> <td>76</td> <td>65.6</td> <td>70.4</td> <td>63.9</td> </tr> </tbody></table> | Table 3 | table_3 | D17-1226 | 8 | emnlp2017 | Table 3 shows the comparison results for both within-document and cross-document event coreference resolution. In the first stage of iterative merging, using two distinct WD and CD classifiers for corresponding WD and CD merges yields clear improvements for both WD and CD event coreference resolution tasks, compared with using one common classifier for both types of merges. In addition, the second stage of iterative merging further improves both WD and CD event coreference resolution performance stably by leveraging second order event inter-dependencies. The improvements are consistent when measured using various coreference resolution evaluation metrics. 
| [1, 1, 1, 1] | ['Table 3 shows the comparison results for both within-document and cross-document event coreference resolution.', 'In the first stage of iterative merging, using two distinct WD and CD classifiers for corresponding WD and CD merges yields clear improvements for both WD and CD event coreference resolution tasks, compared with using one common classifier for both types of merges.', 'In addition, the second stage of iterative merging further improves both WD and CD event coreference resolution performance stably by leveraging second order event inter-dependencies.', 'The improvements are consistent when measured using various coreference resolution evaluation metrics.'] | [['Cross-Document Coreference Results', 'Within-Document Coreference Result'], ['WD & CD Classifiers', 'Common Classifier (WD)', 'Common Classifier (CD)'], ['WD & CD Classifiers + 2nd Order Relations (Full Model)'], ['R', 'P', 'F1']] | 1 |
D17-1228table_3 | Results of quality assessments with 5scale mean opinion scores (MOS) and JFK style assessments with binary ratings. Style results are statistically significant compared to the selectivesampling by paired t-tests (p < 0.5%). | 2 | [['Methods', 'vanilla-sampling'], ['Methods', 'selective-sampling'], ['Methods', 'cg-ir'], ['Methods', 'rank'], ['Methods', 'multiply'], ['Methods', 'finetune'], ['Methods', 'finetune-cg-ir'], ['Methods', 'finetune-cg-topic'], ['Methods', 'singer-songwriter'], ['Methods', 'starwars']] | 1 | [['Quality (MOS)'], ['Style']] | [['2.286 ± 0.046', '—'], ['2.681 ± 0.049', '10.42%'], ['2.566 ± 0.048', '10.24%'], ['2.477 ± 0.048', '21.88%'], ['2.627 ± 0.048', '13.54%'], ['2.597 ± 0.046', '20.83%'], ['2.627 ± 0.049', '20.31%'], ['2.667 ± 0.045', '21.09%'], ['2.373 ± 0.045', '—'], ['2.677 ± 0.048', '—']] | column | ['Quality (MOS)', 'Style'] | ['Quality (MOS)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Quality (MOS)</th> <th>Style</th> </tr> </thead> <tbody> <tr> <td>Methods || vanilla-sampling</td> <td>2.286 ± 0.046</td> <td>—</td> </tr> <tr> <td>Methods || selective-sampling</td> <td>2.681 ± 0.049</td> <td>10.42%</td> </tr> <tr> <td>Methods || cg-ir</td> <td>2.566 ± 0.048</td> <td>10.24%</td> </tr> <tr> <td>Methods || rank</td> <td>2.477 ± 0.048</td> <td>21.88%</td> </tr> <tr> <td>Methods || multiply</td> <td>2.627 ± 0.048</td> <td>13.54%</td> </tr> <tr> <td>Methods || finetune</td> <td>2.597 ± 0.046</td> <td>20.83%</td> </tr> <tr> <td>Methods || finetune-cg-ir</td> <td>2.627 ± 0.049</td> <td>20.31%</td> </tr> <tr> <td>Methods || finetune-cg-topic</td> <td>2.667 ± 0.045</td> <td>21.09%</td> </tr> <tr> <td>Methods || singer-songwriter</td> <td>2.373 ± 0.045</td> <td>—</td> </tr> <tr> <td>Methods || starwars</td> <td>2.677 ± 0.048</td> <td>—</td> </tr> </tbody></table> | Table 3 | table_3 | D17-1228 | 8 | emnlp2017 | We conducted mean opinion score (MOS) tests for overall quality assessment of generated responses with questionnaires described above. Table 3 shows the MOS results with standard error. It can be seen that all the systems based on selective sampling are significantly better than vanilla sampling baseline. When restricting output’s style and/or topic, the MOS score results of most systems do not decline significantly except singer-songwriter, which attempts to generate lyrics-like outputs in response to political debate questions, resulting in uninterpretable strings. Table 3 also shows the likelihood of being labeled as JFK for different methods. It is encouraging that finetune based approaches have similar chances as the rank system which retrieves sentences directly from JFK corpus, and are significantly better than the selective sampling baseline. 
| [2, 1, 1, 1, 1, 1] | ['We conducted mean opinion score (MOS) tests for overall quality assessment of generated responses with questionnaires described above.', 'Table 3 shows the MOS results with standard error.', 'It can be seen that all the systems based on selective sampling are significantly better than vanilla sampling baseline.', 'When restricting output’s style and/or topic, the MOS score results of most systems do not decline significantly except singer-songwriter, which attempts to generate lyrics-like outputs in response to political debate questions, resulting in uninterpretable strings.', 'Table 3 also shows the likelihood of being labeled as JFK for different methods.', 'It is encouraging that finetune based approaches have similar chances as the rank system which retrieves sentences directly from JFK corpus, and are significantly better than the selective sampling baseline.'] | [['Quality (MOS)'], None, ['selective-sampling', 'vanilla-sampling', 'Quality (MOS)'], ['Style', 'Quality (MOS)', 'singer-songwriter'], ['multiply', 'finetune', 'finetune-cg-topic'], ['finetune', 'rank', 'Quality (MOS)', 'selective-sampling']] | 1 |
D17-1229table_1 | Results of different strategies to leverage the current label. | 2 | [['Models', 'No current label'], ['Models', 'True current label'], ['Models', 'Predicted current label'], ['Models', 'Scheduled Sampling'], ['Models', 'Average Embedding'], ['Models', 'Uncertainty Propagation']] | 2 | [['Accuracy', 'Switchboard'], ['Accuracy', 'MapTask']] | [['72.93%', '61.27%'], ['73.15%', '63.36%'], ['73.91%', '64.53%'], ['74.43%', '64.50%'], ['75.04%', '65.09%'], ['75.61%', '65.87%']] | column | ['Accuracy', 'Accuracy'] | ['Uncertainty Propagation'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy || Switchboard</th> <th>Accuracy || MapTask</th> </tr> </thead> <tbody> <tr> <td>Models || No current label</td> <td>72.93%</td> <td>61.27%</td> </tr> <tr> <td>Models || True current label</td> <td>73.15%</td> <td>63.36%</td> </tr> <tr> <td>Models || Predicted current label</td> <td>73.91%</td> <td>64.53%</td> </tr> <tr> <td>Models || Scheduled Sampling</td> <td>74.43%</td> <td>64.50%</td> </tr> <tr> <td>Models || Average Embedding</td> <td>75.04%</td> <td>65.09%</td> </tr> <tr> <td>Models || Uncertainty Propagation</td> <td>75.61%</td> <td>65.87%</td> </tr> </tbody></table> | Table 1 | table_1 | D17-1229 | 4 | emnlp2017 | Table 1 compares our results with those obtained by the baselines. Our two models, Uncertainty Propagation and Average Embedding, outperform all the baselines. Among these two models, Uncertainty Propagation, which is more analytically grounded, outperforms the Average Embedding model. Using the true current label during training seems to degrade performance compared to using the predicted label, which is expected, since the true label is not available during testing. The Scheduled Sampling method performs similarly to the predicted-label method for the MapTask corpus, and outperforms this method for the Switchboard corpus. | [1, 1, 1, 1, 1] | ['Table 1 compares our results with those obtained by the baselines.', 'Our two models, Uncertainty Propagation and Average Embedding, outperform all the baselines.', 'Among these two models, Uncertainty Propagation, which is more analytically grounded, outperforms the Average Embedding model.', 'Using the true current label during training seems to degrade performance compared to using the predicted label, which is expected, since the true label is not available during testing.', 'The Scheduled Sampling method performs similarly to the predicted-label method for the MapTask corpus, and outperforms this method for the Switchboard corpus.'] | [None, ['Average Embedding', 'Uncertainty Propagation', 'Accuracy'], ['Uncertainty Propagation', 'Accuracy', 'Average Embedding'], ['True current label', 'Predicted current label'], ['Scheduled Sampling', 'Predicted current label', 'MapTask', 'Switchboard']] | 1 |
D17-1237table_1 | Performance of three agents on different User Types. Tested on 2000 dialogues using the best model during training. Succ.: success rate, Turn: average turns, Reward: average reward. | 2 | [['Agent', 'Rule'], ['Agent', 'Rule+'], ['Agent', 'RL'], ['Agent', 'HRL']] | 2 | [['Type A', 'Succ.'], ['Type A', 'Turn'], ['Type A', 'Reward'], ['Type B', 'Succ.'], ['Type B', 'Turn'], ['Type B', 'Reward'], ['Type C', 'Succ.'], ['Type C', 'Turn'], ['Type C', 'Reward']] | [['0.322', '46.2', '-24', '0.24', '54.2', '-42.9', '0.205', '54.3', '-49.3'], ['0.535', '82', '-3.7', '0.385', '110.5', '-44.95', '0.34', '108.1', '-51.85'], ['0.437', '45.6', '-3.3', '0.34', '52.2', '-23.8', '0.348', '49.5', '-21.1'], ['0.632', '43', '33.2', '0.6', '44.5', '26.7', '0.622', '42.7', '31.7']] | column | ['Succ.', 'Turn', 'Reward', 'Succ.', 'Turn', 'Reward', 'Succ.', 'Turn', 'Reward'] | ['HRL'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Type A || Succ.</th> <th>Type A || Turn</th> <th>Type A || Reward</th> <th>Type B || Succ.</th> <th>Type B || Turn</th> <th>Type B || Reward</th> <th>Type C || Succ.</th> <th>Type C || Turn</th> <th>Type C || Reward</th> </tr> </thead> <tbody> <tr> <td>Agent || Rule</td> <td>0.322</td> <td>46.2</td> <td>-24</td> <td>0.24</td> <td>54.2</td> <td>-42.9</td> <td>0.205</td> <td>54.3</td> <td>-49.3</td> </tr> <tr> <td>Agent || Rule+</td> <td>0.535</td> <td>82</td> <td>-3.7</td> <td>0.385</td> <td>110.5</td> <td>-44.95</td> <td>0.34</td> <td>108.1</td> <td>-51.85</td> </tr> <tr> <td>Agent || RL</td> <td>0.437</td> <td>45.6</td> <td>-3.3</td> <td>0.34</td> <td>52.2</td> <td>-23.8</td> <td>0.348</td> <td>49.5</td> <td>-21.1</td> </tr> <tr> <td>Agent || HRL</td> <td>0.632</td> <td>43</td> <td>33.2</td> <td>0.6</td> <td>44.5</td> <td>26.7</td> <td>0.622</td> <td>42.7</td> <td>31.7</td> </tr> </tbody></table> | Table 1 | table_1 | D17-1237 | 7 | emnlp2017 | Table 1 shows the performance on test data. For all types of users, the HRL-based agent yielded more robust dialogue policies outperforming the hand-crafted rule-based agents and flat RL-based agent measured on success rate. It also needed fewer turns per dialogue session to accomplish a task than the rule-based agents and flat RL agent. The results across all three types of simulated users suggest the following conclusions. | [1, 1, 1, 2] | ['Table 1 shows the performance on test data.', 'For all types of users, the HRL-based agent yielded more robust dialogue policies outperforming the hand-crafted rule-based agents and flat RL-based agent measured on success rate.', 'It also needed fewer turns per dialogue session to accomplish a task than the rule-based agents and flat RL agent.', 'The results across all three types of simulated users suggest the following conclusions.'] | [None, ['Type A', 'Type B', 'Type C', 'HRL', 'RL', 'Rule', 'Succ.'], ['HRL', 'Rule', 'RL', 'Turn'], None] | 1 |
D17-1240table_4 | Overall macro and micro precision/recall. Best results are marked in bold. | 2 | [['Measure', 'Macro prec./rec.'], ['Measure', 'Micro prec./rec.']] | 1 | [['Baseline'], ['UUR']] | [['0.19', '0.33'], ['0.19', '0.33']] | row | ['Macro prec./rec.', 'Macro prec./rec.'] | ['Baseline', 'UUR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Baseline</th> <th>UUR</th> </tr> </thead> <tbody> <tr> <td>Measure || Macro prec./rec.</td> <td>0.19</td> <td>0.33</td> </tr> <tr> <td>Measure || Micro prec./rec.</td> <td>0.19</td> <td>0.33</td> </tr> </tbody></table> | Table 4 | table_4 | D17-1240 | 7 | emnlp2017 | We observe that while in the SB bucket both the baseline and UUR perform equally well, for all the other buckets UUR massively outperforms the baseline. This implies that for the case where the likeliness of borrowing is the strongest, the baseline does as good as UUR. However, as one moves down the rank list, UUR turns out to be a considerably better predictor than the baseline. The overall macro and micro precision/recall as shown in table 4 further strengthens our observation that UUR is a better metric than the baseline. | [1, 2, 2, 1] | ['We observe that while in the SB bucket both the baseline and UUR perform equally well, for all the other buckets UUR massively outperforms the baseline.', 'This implies that for the case where the likeliness of borrowing is the strongest, the baseline does as good as UUR.', 'However, as one moves down the rank list, UUR turns out to be a considerably better predictor than the baseline.', 'The overall macro and micro precision/recall as shown in table 4 further strengthens our observation that UUR is a better metric than the baseline.'] | [['Baseline', 'UUR', 'Macro prec./rec.', 'Micro prec./rec.'], ['Baseline', 'UUR'], ['Baseline', 'UUR'], ['Macro prec./rec.', 'Micro prec./rec.', 'UUR', 'Baseline']] | 1
D17-1240table_6 | Bucket-wise precision (p)/recall (r) for UUR and the baseline metrics for the two new ground truths. Best results are marked in bold. | 2 | [['Bucket type', 'SB'], ['Bucket type', 'LB'], ['Bucket type', 'BL'], ['Bucket type', 'LM'], ['Bucket type', 'SM']] | 2 | [['Young-Baseline', 'p/r'], ['Young-UUR', 'p/r'], ['Elder-UUR', 'p'], ['Elder-baseline', 'r'], ['Elder-UUR', 'r']] | [['0.27', '0.27', '0.36', '0.25', '0.33'], ['0.09', '0.09', '0.18', '0.08', '0.17'], ['0.08', '0.16', '0.08', '0.28', '0.14'], ['0.18', '0.18', '0.45', '0.14', '0.35'], ['0.33', '0.41', '0.25', '0.41', '0.25']] | column | ['p/r', 'p/r', 'p', 'r', 'r'] | ['Young-UUR', 'p/r'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Young-Baseline || p/r</th> <th>Young-UUR || p/r</th> <th>Elder-UUR || p</th> <th>Elder-baseline || r</th> <th>Elder-UUR || r</th> </tr> </thead> <tbody> <tr> <td>Bucket type || SB</td> <td>0.27</td> <td>0.27</td> <td>0.36</td> <td>0.25</td> <td>0.33</td> </tr> <tr> <td>Bucket type || LB</td> <td>0.09</td> <td>0.09</td> <td>0.18</td> <td>0.08</td> <td>0.17</td> </tr> <tr> <td>Bucket type || BL</td> <td>0.08</td> <td>0.16</td> <td>0.08</td> <td>0.28</td> <td>0.14</td> </tr> <tr> <td>Bucket type || LM</td> <td>0.18</td> <td>0.18</td> <td>0.45</td> <td>0.14</td> <td>0.35</td> </tr> <tr> <td>Bucket type || SM</td> <td>0.33</td> <td>0.41</td> <td>0.25</td> <td>0.41</td> <td>0.25</td> </tr> </tbody></table> | Table 6 | table_6 | D17-1240 | 8 | emnlp2017 | Table 6 shows the bucket-wise precision and recall for UUR and the baseline metrics with respect to two new ground truths. For the young population once again the number of words in each bucket for all the three sets is the same thus making the values of the precision and the recall same. In fact, the precision/recall for this ground truth is exactly same as in the case of the original ground truth. | [1, 1, 1] | ['Table 6 shows the bucket-wise precision and recall for UUR and the baseline metrics with respect to two new ground truths.', 'For the young population once again the number of words in each bucket for all the three sets is the same thus making the values of the precision and the recall same.', 'In fact, the precision/recall for this ground truth is exactly same as in the case of the original ground truth.'] | [['p', 'r'], ['Young-UUR', 'Young-Baseline', 'Elder-baseline', 'SB', 'p/r'], ['p/r', 'Young-UUR', 'Young-Baseline', 'SB']] | 1
D17-1244table_7 | Results obtained for each dimension with the best combination of features for all dimensions (Verb + Personx + Persony + Personx Persony, boldfaced in Table 6) | 2 | [['Dimension', 'Cooperative'], ['Dimension', 'Equal'], ['Dimension', 'Intense'], ['Dimension', 'Pleasure'], ['Dimension', 'Active'], ['Dimension', 'Intimate'], ['Dimension', 'Temporary'], ['Dimension', 'Concurrent'], ['Dimension', 'Spat. Near'], ['Dimension', 'Average']] | 2 | [['1 (1st descriptor)', 'P'], ['1 (1st descriptor)', 'R'], ['1 (1st descriptor)', 'F'], ['0 (unknown)', 'P'], ['0 (unknown)', 'R'], ['0 (unknown)', 'F'], ['-1 (2nd descriptor)', 'P'], ['-1 (2nd descriptor)', 'R'], ['-1 (2nd descriptor)', 'F'], ['All', 'P'], ['All', 'R'], ['All', 'F']] | [['0.73', '0.96', '0.83', '0', '0', '0', '0.6', '0.19', '0.29', '0.66', '0.72', '0.65'], ['0.56', '0.1', '0.17', '0', '0', '0', '0.74', '0.97', '0.84', '0.68', '0.74', '0.66'], ['0.39', '0.3', '0.34', '0', '0', '0', '0.78', '0.85', '0.82', '0.67', '0.71', '0.69'], ['0.4', '0.28', '0.33', '0', '0', '0', '0.87', '0.93', '0.9', '0.79', '0.82', '0.8'], ['0.69', '0.85', '0.76', '0', '0', '0', '0.68', '0.51', '0.58', '0.67', '0.69', '0.67'], ['0.44', '0.17', '0.24', '0', '0', '0', '0.88', '0.98', '0.93', '0.81', '0.86', '0.83'], ['0.85', '0.96', '0.91', '0', '0', '0', '0.33', '0.1', '0.16', '0.77', '0.83', '0.79'], ['0.72', '0.8', '0.76', '0', '0', '0', '0.77', '0.75', '0.76', '0.71', '0.74', '0.73'], ['0.66', '0.68', '0.67', '0', '0', '0', '0.73', '0.79', '0.76', '0.66', '0.7', '0.68'], ['0.69', '0.74', '0.7', '0', '0', '0', '0.77', '0.8', '0.76', '0.71', '0.76', '0.72']] | column | ['P', 'R', 'F', 'P', 'R', 'F', 'P', 'R', 'F', 'P', 'R', 'F'] | ['Dimension', '1 (1st descriptor)', '-1 (2nd descriptor)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>1 (1st descriptor) || P</th> <th>1 (1st descriptor) || R</th> <th>1 (1st descriptor) || F</th> <th>0 (unknown) || P</th> <th>0 (unknown) || R</th> <th>0 (unknown) || F</th> <th>-1 (2nd descriptor) || P</th> <th>-1 (2nd descriptor) || R</th> <th>-1 (2nd descriptor) || F</th> <th>All || P</th> <th>All || R</th> <th>All || F</th> </tr> </thead> <tbody> <tr> <td>Dimension || Cooperative</td> <td>0.73</td> <td>0.96</td> <td>0.83</td> <td>0</td> <td>0</td> <td>0</td> <td>0.6</td> <td>0.19</td> <td>0.29</td> <td>0.66</td> <td>0.72</td> <td>0.65</td> </tr> <tr> <td>Dimension || Equal</td> <td>0.56</td> <td>0.1</td> <td>0.17</td> <td>0</td> <td>0</td> <td>0</td> <td>0.74</td> <td>0.97</td> <td>0.84</td> <td>0.68</td> <td>0.74</td> <td>0.66</td> </tr> <tr> <td>Dimension || Intense</td> <td>0.39</td> <td>0.3</td> <td>0.34</td> <td>0</td> <td>0</td> <td>0</td> <td>0.78</td> <td>0.85</td> <td>0.82</td> <td>0.67</td> <td>0.71</td> <td>0.69</td> </tr> <tr> <td>Dimension || Pleasure</td> <td>0.4</td> <td>0.28</td> <td>0.33</td> <td>0</td> <td>0</td> <td>0</td> <td>0.87</td> <td>0.93</td> <td>0.9</td> <td>0.79</td> <td>0.82</td> <td>0.8</td> </tr> <tr> <td>Dimension || Active</td> <td>0.69</td> <td>0.85</td> <td>0.76</td> <td>0</td> <td>0</td> <td>0</td> <td>0.68</td> <td>0.51</td> <td>0.58</td> <td>0.67</td> <td>0.69</td> <td>0.67</td> </tr> <tr> <td>Dimension || Intimate</td> <td>0.44</td> <td>0.17</td> <td>0.24</td> <td>0</td> <td>0</td> <td>0</td> <td>0.88</td> <td>0.98</td> <td>0.93</td> <td>0.81</td> <td>0.86</td> <td>0.83</td> </tr> <tr> <td>Dimension || Temporary</td> <td>0.85</td> <td>0.96</td> <td>0.91</td> <td>0</td> <td>0</td> <td>0</td> <td>0.33</td> <td>0.1</td> 
<td>0.16</td> <td>0.77</td> <td>0.83</td> <td>0.79</td> </tr> <tr> <td>Dimension || Concurrent</td> <td>0.72</td> <td>0.8</td> <td>0.76</td> <td>0</td> <td>0</td> <td>0</td> <td>0.77</td> <td>0.75</td> <td>0.76</td> <td>0.71</td> <td>0.74</td> <td>0.73</td> </tr> <tr> <td>Dimension || Spat. Near</td> <td>0.66</td> <td>0.68</td> <td>0.67</td> <td>0</td> <td>0</td> <td>0</td> <td>0.73</td> <td>0.79</td> <td>0.76</td> <td>0.66</td> <td>0.7</td> <td>0.68</td> </tr> <tr> <td>Dimension || Average</td> <td>0.69</td> <td>0.74</td> <td>0.7</td> <td>0</td> <td>0</td> <td>0</td> <td>0.77</td> <td>0.8</td> <td>0.76</td> <td>0.71</td> <td>0.76</td> <td>0.72</td> </tr> </tbody></table> | Table 7 | table_7 | D17-1244 | 8 | emnlp2017 | Table 7 presents results per dimension with the best overall combination of features (Verb + Personx + Persony + Personx Persony). All dimensions obtain overall F-measures between 0.65 and 0.83 (last column). Results per label are heavily biased towards the most frequent label per dimension (Figure 2), although it is the case that the models we experiment with predict both 1 and -1 for all dimensions. As stated above, none of them predict 0, but this limitation does not substantially penalize overall performance because of the low frequency of this label. The model obtains the same F-measures for 1 and -1 with concurrent dimension (0.76), and the labels of this dimension are virtually distributed uniformly (46.4% vs. 47.1%, Figure 2). Similarly, F-measures for 1 and -1 with spatially near and active dimensions are similar (0.67 vs. 0.76 and 0.76 vs. 0.58), and the labels are distributed relatively evenly in our corpus (40.4% vs 51.4% and 58.4% vs. 35.3%). Finally, F-measures per label with other dimension are biased towards the most frequent label. For example, only 15% of all pairs of people have an enduring relationship (Figure 2), and the F-measure for 1 with temporary dimension is much higher (0.91) than for -1 (0.16). | [1, 1, 1, 1, 1, 1, 1, 1] | ['Table 7 presents results per dimension with the best overall combination of features (Verb + Personx + Persony + Personx Persony).', 'All dimensions obtain overall F-measures between 0.65 and 0.83 (last column).', 'Results per label are heavily biased towards the most frequent label per dimension (Figure 2), although it is the case that the models we experiment with predict both 1 and -1 for all dimensions.', 'As stated above, none of them predict 0, but this limitation does not substantially penalize overall performance because of the low frequency of this label.', 'The model obtains the same F-measures for 1 and -1 with concurrent dimension (0.76), and the labels of this dimension are virtually distributed uniformly (46.4% vs. 47.1%, Figure 2).', 'Similarly, F-measures for 1 and -1 with spatially near and active dimensions are similar (0.67 vs. 0.76 and 0.76 vs. 0.58), and the labels are distributed relatively evenly in our corpus (40.4% vs 51.4% and 58.4% vs. 35.3%).', 'Finally, F-measures per label with other dimension are biased towards the most frequent label.', 'For example, only 15% of all pairs of people have an enduring relationship (Figure 2), and the F-measure for 1 with temporary dimension is much higher (0.91) than for -1 (0.16).'] | [None, ['Dimension', 'All', 'F'], ['1 (1st descriptor)', '0 (unknown)', '-1 (2nd descriptor)', 'F'], ['1 (1st descriptor)', '-1 (2nd descriptor)', 'F'], ['F', '1 (1st descriptor)', '-1 (2nd descriptor)', 'Concurrent'], ['F', '1 (1st descriptor)', '-1 (2nd descriptor)', 'Spat. Near', 'Active'], ['F'], ['F', '1 (1st descriptor)', 'Temporary', '-1 (2nd descriptor)']] | 1
D17-1245table_4 | Results obtained on the test set for the argument detection task (L=lexical features) | 2 | [['Approach', 'RF+L'], ['Approach', 'LR+L'], ['Approach', 'LR+all features']] | 1 | [['Precision'], ['Recall'], ['F1']] | [['0.76', '0.69', '0.71'], ['0.76', '0.71', '0.73'], ['0.8', '0.77', '0.78']] | column | ['Precision', 'Recall', 'F1'] | ['LR+all features'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Approach || RF+L</td> <td>0.76</td> <td>0.69</td> <td>0.71</td> </tr> <tr> <td>Approach || LR+L</td> <td>0.76</td> <td>0.71</td> <td>0.73</td> </tr> <tr> <td>Approach || LR+all features</td> <td>0.8</td> <td>0.77</td> <td>0.78</td> </tr> </tbody></table> | Table 4 | table_4 | D17-1245 | 3 | emnlp2017 | We cast the argument detection task as a binary classification task, and we apply the supervised algorithms described in Section 2.1. Table 4 reports on the obtained results with the different configurations, while Table 5 reports on the results obtained by the best configuration, i.e., LR + All features, per each category. | [2, 1, 0] | ['We cast the argument detection task as a binary classification task, and we apply the supervised algorithms described in Section 2.1.', 'Table 4 reports on the obtained results with the different configurations.', 'Table 5 reports on the results obtained by the best configuration, i.e., LR + All features, per each category.'] | [None, ['LR+all features'], None] | 1
D17-1245table_5 | Results obtained by the best model on each category of the test set for the argument detection task | 2 | [['Category', 'non-arg'], ['Category', 'arg'], ['Category', 'avg/total']] | 1 | [['P'], ['R'], ['F1'], ['#arguments per category']] | [['0.46', '0.6', '0.52', '187'], ['0.89', '0.82', '0.85', '713'], ['0.8', '0.77', '0.78', '900']] | column | ['P', 'R', 'F1', '#arguments per category'] | ['avg/total'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> <th>#arguments per category</th> </tr> </thead> <tbody> <tr> <td>Category || non-arg</td> <td>0.46</td> <td>0.6</td> <td>0.52</td> <td>187</td> </tr> <tr> <td>Category || arg</td> <td>0.89</td> <td>0.82</td> <td>0.85</td> <td>713</td> </tr> <tr> <td>Category || avg/total</td> <td>0.8</td> <td>0.77</td> <td>0.78</td> <td>900</td> </tr> </tbody></table> | Table 5 | table_5 | D17-1245 | 3 | emnlp2017 | Table 4 reports on the obtained results with the different configurations, while Table 5 reports on the results obtained by the best configuration, i.e., LR + All features, per each category. | [0, 1] | ['Table 4 reports on the obtained results with the different configurations.', 'Table 5 reports on the results obtained by the best configuration, i.e., LR + All features, per each category.'] | [None, ['avg/total']] | 1
D17-1245table_6 | Results obtained on the test set for the factual vs opinion argument classification task (L=lexical features) | 2 | [['Approach', 'RF+L'], ['Approach', 'LR+L'], ['Approach', 'LR+all features']] | 1 | [['Precision'], ['Recall'], ['F1']] | [['0.75', '0.68', '0.71'], ['0.75', '0.75', '0.75'], ['0.81', '0.79', '0.8']] | column | ['Precision', 'Recall', 'F1'] | ['LR+all features'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Approach || RF+L</td> <td>0.75</td> <td>0.68</td> <td>0.71</td> </tr> <tr> <td>Approach || LR+L</td> <td>0.75</td> <td>0.75</td> <td>0.75</td> </tr> <tr> <td>Approach || LR+all features</td> <td>0.81</td> <td>0.79</td> <td>0.8</td> </tr> </tbody></table> | Table 6 | table_6 | D17-1245 | 4 | emnlp2017 | To address the task of factual vs opinion arguments classification, we apply the supervised classification algorithms described in Section 2.1. Tweets from Grexit dataset are used as training set, and those from Brexit dataset as test set. Table 6 reports on the obtained results. | [1, 2, 1] | ['To address the task of factual vs opinion arguments classification, we apply the supervised classification algorithms described in Section 2.1.', 'Tweets from Grexit dataset are used as training set, and those from Brexit dataset as test set.', 'Table 6 reports on the obtained results.'] | [['RF+L', 'LR+L'], None, None] | 1 |
D17-1245table_8 | Results obtained on the test set for the source identification task | 2 | [['Approach', 'Baseline'], ['Approach', 'Matching+heurist.']] | 1 | [['Precision'], ['Recall'], ['F1']] | [['0.26', '0.48', '0.33'], ['0.69', '0.64', '0.67']] | column | ['Precision', 'Recall', 'F1'] | ['Matching+heurist.'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Approach || Baseline</td> <td>0.26</td> <td>0.48</td> <td>0.33</td> </tr> <tr> <td>Approach || Matching+heurist.</td> <td>0.69</td> <td>0.64</td> <td>0.67</td> </tr> </tbody></table> | Table 8 | table_8 | D17-1245 | 5 | emnlp2017 | Table 8 reports on the obtained results. As baseline, we use a method that considers all the NEs detected in the tweet as sources. Most of the errors of the algorithm are due to information sources not recognized as NEs (in particular, when the source is a Twitter user), or NEs that are linked to the wrong DBpedia page. However, in order to draw more interesting conclusions on the most suitable methods to address this task, we would need the increase the size of the dataset. | [1, 2, 2, 2] | ['Table 8 reports on the obtained results.', 'As baseline, we use a method that considers all the NEs detected in the tweet as sources.', 'Most of the errors of the algorithm are due to information sources not recognized as NEs (in particular, when the source is a Twitter user), or NEs that are linked to the wrong DBpedia page.', 'However, in order to draw more interesting conclusions on the most suitable methods to address this task, we would need the increase the size of the dataset.'] | [None, ['Baseline'], ['Baseline', 'Matching+heurist.', 'F1'], ['Matching+heurist.', 'F1']] | 1 |
D17-1260table_2 | Evaluation results of different ordered rules. As a reference, the performance of optimised student policy is success rate 0.767, #turn 5.10 and reward 0.5124. | 2 | [['Ordered Rules', 'R1 R2 R3'], ['Ordered Rules', 'R1 R4 R2 R3'], ['Ordered Rules', 'R1* R2 R3'], ['Ordered Rules', 'R1* R4 R2 R3']] | 1 | [['Success Rate'], ['#Turn'], ['Reward']] | [['0.695', '4.58', '0.4657'], ['0.749', '5.16', '0.491'], ['0.705', '4.44', '0.4824'], ['0.753', '4.98', '0.5042']] | column | ['Success Rate', '#Turn', 'Reward'] | ['R1* R4 R2 R3'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Success Rate</th> <th>#Turn</th> <th>Reward</th> </tr> </thead> <tbody> <tr> <td>Ordered Rules || R1 R2 R3</td> <td>0.695</td> <td>4.58</td> <td>0.4657</td> </tr> <tr> <td>Ordered Rules || R1 R4 R2 R3</td> <td>0.749</td> <td>5.16</td> <td>0.491</td> </tr> <tr> <td>Ordered Rules || R1* R2 R3</td> <td>0.705</td> <td>4.44</td> <td>0.4824</td> </tr> <tr> <td>Ordered Rules || R1* R4 R2 R3</td> <td>0.753</td> <td>4.98</td> <td>0.5042</td> </tr> </tbody></table> | Table 2 | table_2 | D17-1260 | 9 | emnlp2017 | Table 2 is the evaluation results of different ordered rules. The rule R4 can significantly boost the success rate (comparing line 2 with line 1),while the rule R1* can both boost the success rate and decrease the dialogue length (comparing line 3 with line 1). The combination of R4 and R1* takes respective advantages (comparing line 4 with line 1, line 2 and line 3). The performance of final order rules is comparable to the performance of optimized student policy. | [1, 1, 1, 2] | ['Table 2 is the evaluation results of different ordered rules.', 'The rule R4 can significantly boost the success rate (comparing line 2 with line 1),while the rule R1* can both boost the success rate and decrease the dialogue length (comparing line 3 with line 1).', 'The combination of R4 and R1* takes respective advantages (comparing line 4 with line 1, line 2 and line 3).', 'The performance of final order rules is comparable to the performance of optimized student policy.'] | [None, ['Success Rate', '#Turn'], ['R1* R4 R2 R3', 'R1 R2 R3', 'R1 R4 R2 R3', 'R1* R2 R3', 'Reward'], ['R1* R4 R2 R3']] | 1 |
D17-1267table_3 | Cognate clustering results on the Algonquian dataset (in %). The absolute percentage of fully found sets is given in parentheses. | 2 | [['System', 'Heuristic Baseline'], ['System', 'LEXSTAT'], ['System', 'SemaPhoR']] | 1 | [['Found Sets'], ['Purity']] | [['18.9 (9.9)', '96.4'], ['19.6 (10.5)', '97.1'], ['63.1(48.2)', '70.3']] | column | ['Found Sets', 'Purity'] | ['SemaPhoR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Found Sets</th> <th>Purity</th> </tr> </thead> <tbody> <tr> <td>System || Heuristic Baseline</td> <td>18.9 (9.9)</td> <td>96.4</td> </tr> <tr> <td>System || LEXSTAT</td> <td>19.6 (10.5)</td> <td>97.1</td> </tr> <tr> <td>System || SemaPhoR</td> <td>63.1(48.2)</td> <td>70.3</td> </tr> </tbody></table> | Table 3 | table_3 | D17-1267 | 8 | emnlp2017 | Table 3 shows the results. LEXSTAT performs slightly better than the heuristic baseline, but both are limited by their inability to relate words that have non-identical definitions. In fact, only 21.4% of all gold cognate sets in the Algonquian dataset contain at least two words with the same definition, which establishes an upper bound on the number of found sets for systems that are designed to operate on word lists, rather than dictionaries. For example, most of the cognates in Figure 1 cannot be captured by such systems. Our system, SemaPhoR, finds approximately three times as many cognate sets as LEXSTAT, and over 75% of those sets are complete with respect to the gold annotation. In practical terms, our system is able to provide concrete evidence for the existence of most of the proto-words that have reflexes in the recorded languages, and identifies the majority of those reflexes in the process. The purity of the produced clusters indicates that there are many more hits than misses in the system output. In addition, the clusters can be sorted according to their confidence scores, in order to facilitate the analysis of the results by an expert linguist. | [1, 1, 2, 0, 1, 2, 1, 2] | ['Table 3 shows the results.', 'LEXSTAT performs slightly better than the heuristic baseline, but both are limited by their inability to relate words that have non-identical definitions.', 'In fact, only 21.4% of all gold cognate sets in the Algonquian dataset contain at least two words with the same definition, which establishes an upper bound on the number of found sets for systems that are designed to operate on word lists, rather than dictionaries.', 'For example, most of the cognates in Figure 1 cannot be captured by such systems.', 'Our system, SemaPhoR, finds approximately three times as many cognate sets as LEXSTAT, and over 75% of those sets are complete with respect to the gold annotation.', 'In practical terms, our system is able to provide concrete evidence for the existence of most of the proto-words that have reflexes in the recorded languages, and identifies the majority of those reflexes in the process.', 'The purity of the produced clusters indicates that there are many more hits than misses in the system output.', 'In addition, the clusters can be sorted according to their confidence scores, in order to facilitate the analysis of the results by an expert linguist.'] | [None, ['LEXSTAT', 'Heuristic Baseline', 'Purity'], None, None, ['SemaPhoR', 'LEXSTAT', 'Found Sets'], ['SemaPhoR'], ['Purity'], None] | 1 |
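The D17-1267 record above scores cognate clusterings by purity. For reference only, here is a minimal sketch of the standard clustering-purity formula (each predicted cluster is credited with its largest overlap with a gold cognate set); the paper's exact definition may differ in detail, and the example data below is made up.

```python
from collections import Counter

def purity(clusters, gold_labels):
    """Standard cluster purity: fraction of items that fall under the majority
    gold label of their predicted cluster. `clusters` maps cluster id -> items,
    `gold_labels` maps item -> gold cognate-set id (hypothetical inputs)."""
    total = sum(len(items) for items in clusters.values())
    correct = sum(
        Counter(gold_labels[item] for item in items).most_common(1)[0][1]
        for items in clusters.values() if items
    )
    return correct / total

# Tiny made-up example: two predicted clusters over five words.
clusters = {0: ["a1", "a2", "b1"], 1: ["b2", "b3"]}
gold = {"a1": "A", "a2": "A", "b1": "B", "b2": "B", "b3": "B"}
print(purity(clusters, gold))  # 0.8
```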
D17-1269table_1 | We show dramatic improvement on 3 European languages in a low-resource setting. More detailed results in Table 2 show that this improvement continues to a wide variety of languages. The baseline is a simple direct transfer model. The previous state-of-the-art (SOA) is Tsai et al. (2016) | 1 | [['Baseline'], ['Previous SOA'], ['Cheap Translation']] | 1 | [['German'], ['Spanish'], ['Dutch'], ['Avg']] | [['22.61', '45.77', '43.1', '37.27'], ['48.12', '60.55', '61.56', '56.74'], ['57.53', '65.18', '66.5', '62.65']] | column | ['F1', 'F1', 'F1', 'F1'] | ['Cheap Translation', 'Avg'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>German</th> <th>Spanish</th> <th>Dutch</th> <th>Avg</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>22.61</td> <td>45.77</td> <td>43.1</td> <td>37.27</td> </tr> <tr> <td>Previous SOA</td> <td>48.12</td> <td>60.55</td> <td>61.56</td> <td>56.74</td> </tr> <tr> <td>Cheap Translation</td> <td>57.53</td> <td>65.18</td> <td>66.5</td> <td>62.65</td> </tr> </tbody></table> | Table 1 | table_1 | D17-1269 | 1 | emnlp2017 | We show that our approach gives non-trivial scores across several languages, and when combined with orthogonal features from Wikipedia, improves on state-of-the-art scores. Table 1 compares a simple direct transfer baseline, the previous state-of-the-art in cross-lingual NER, and our proposed algorithm. For these languages, we beat the baseline by 25.4 points, and the state-of-the-art by 5.9 points. In addition, we found that translating from a language related to the target language gives a further boost. We conclude with a case study of a truly low-resource language, Uyghur, and show a good score, despite having almost no target language resources. | [2, 1, 1, 2, 2] | ['We show that our approach gives non-trivial scores across several languages, and when combined with orthogonal features from Wikipedia, improves on state-of-the-art scores.', 'Table 1 compares a simple direct transfer baseline, the previous state-of-the-art in cross-lingual NER, and our proposed algorithm.', 'For these languages, we beat the baseline by 25.4 points, and the state-of-the-art by 5.9 points.', 'In addition, we found that translating from a language related to the target language gives a further boost.', 'We conclude with a case study of a truly low-resource language, Uyghur, and show a good score, despite having almost no target language resources.'] | [None, ['Baseline', 'Previous SOA', 'Cheap Translation'], ['Cheap Translation', 'Avg', 'Baseline', 'Previous SOA'], None, None] | 1 |
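The description in the D17-1269 record above claims gains of 25.4 points over the direct-transfer baseline and 5.9 points over the previous state of the art. The sketch below simply rechecks that arithmetic from the Avg column of the record; it is an illustrative check, not code from the paper.

```python
# Illustrative check of the "25.4 points" and "5.9 points" claims, using the
# Avg column values from the D17-1269 table_1 record above.
avg_f1 = {"Baseline": 37.27, "Previous SOA": 56.74, "Cheap Translation": 62.65}

gain_over_baseline = avg_f1["Cheap Translation"] - avg_f1["Baseline"]   # 25.38 -> ~25.4
gain_over_soa = avg_f1["Cheap Translation"] - avg_f1["Previous SOA"]    # 5.91  -> ~5.9

print(f"Gain over baseline: {gain_over_baseline:.2f}")
print(f"Gain over previous SOA: {gain_over_soa:.2f}")
```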
D17-1274table_4 | Comparison Analysis for Each Slot Type. | 2 | [['Slot Type', 'state_of_death'], ['Slot Type', 'date_of_birth'], ['Slot Type', 'age'], ['Slot Type', 'per:alternate_names'], ['Slot Type', 'origin'], ['Slot Type', 'country_of_birth'], ['Slot Type', 'city_of_death'], ['Slot Type', 'state_of_headq.'], ['Slot Type', 'cities_of_residence'], ['Slot Type', 'states_of_residence'], ['Slot Type', 'country_of_headq.'], ['Slot Type', 'city_of_headq.'], ['Slot Type', 'employee_of'], ['Slot Type', 'countries_of_residence']] | 2 | [['Impact of Attention (%)', 'Local'], ['Impact of Attention (%)', 'Global-KB'], ['Training Data Distribution (%)', '-'], ['F1 (%)', '-'], ['Wide Context Distribution (%)', '-'], ['Impact of Dependency Graph (%)', '-']] | [['9.8', '-0.4', '0.9', '41.8', '66.7', '44.2'], ['7.3', '121.3', '1.3', '84.1', '20', '-81.9'], ['4.1', '-5.3', '1.3', '98.5', '15.9', '28.5'], ['-2', '21.2', '1.5', '36.6', '41.5', '62'], ['-0.9', '7.8', '1.7', '61.5', '29.3', '137.3'], ['16.7', '12', '1.9', '61.5', '55.6', '162.5'], ['1.1', '3.3', '1.9', '61.3', '70.3', '24.4'], ['9.7', '-5.1', '3.1', '51.7', '54.8', '95.7'], ['4.5', '5.7', '3.5', '57.3', '77', '40.5'], ['-4.3', '2.3', '3.8', '50.5', '45.9', '175.8'], ['5.6', '-0.8', '5.3', '41.5', '54.4', '146.3'], ['1.6', '-6.9', '6.7', '30.3', '54.9', '39.3'], ['14.9', '4.9', '7.3', '65.9', '54.9', '132.5'], ['37.7', '8.6', '7.4', '47.4', '47.2', '134.9']] | column | ['Impact of Attention (%)', 'Impact of Attention (%)', 'Training Data Distribution (%)', 'F1 (%)', 'Wide Context Distribution (%)', 'Impact of Dependency Graph (%)'] | ['Training Data Distribution (%)', 'F1 (%)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Impact of Attention (%) || Local</th> <th>Impact of Attention (%) || Global-KB</th> <th>Training Data Distribution (%) || -</th> <th>F1 (%) || -</th> <th>Wide Context Distribution (%) || -</th> <th>Impact of Dependency Graph (%) || -</th> </tr> </thead> <tbody> <tr> <td>Slot Type || state_of_death</td> <td>9.8</td> <td>-0.4</td> <td>0.9</td> <td>41.8</td> <td>66.7</td> <td>44.2</td> </tr> <tr> <td>Slot Type || date_of_birth</td> <td>7.3</td> <td>121.3</td> <td>1.3</td> <td>84.1</td> <td>20</td> <td>-81.9</td> </tr> <tr> <td>Slot Type || age</td> <td>4.1</td> <td>-5.3</td> <td>1.3</td> <td>98.5</td> <td>15.9</td> <td>28.5</td> </tr> <tr> <td>Slot Type || per:alternate_names</td> <td>-2</td> <td>21.2</td> <td>1.5</td> <td>36.6</td> <td>41.5</td> <td>62</td> </tr> <tr> <td>Slot Type || origin</td> <td>-0.9</td> <td>7.8</td> <td>1.7</td> <td>61.5</td> <td>29.3</td> <td>137.3</td> </tr> <tr> <td>Slot Type || country_of_birth</td> <td>16.7</td> <td>12</td> <td>1.9</td> <td>61.5</td> <td>55.6</td> <td>162.5</td> </tr> <tr> <td>Slot Type || city_of_death</td> <td>1.1</td> <td>3.3</td> <td>1.9</td> <td>61.3</td> <td>70.3</td> <td>24.4</td> </tr> <tr> <td>Slot Type || state_of_headq.</td> <td>9.7</td> <td>-5.1</td> <td>3.1</td> <td>51.7</td> <td>54.8</td> <td>95.7</td> </tr> <tr> <td>Slot Type || cities_of_residence</td> <td>4.5</td> <td>5.7</td> <td>3.5</td> <td>57.3</td> <td>77</td> <td>40.5</td> </tr> <tr> <td>Slot Type || states_of_residence</td> <td>-4.3</td> <td>2.3</td> <td>3.8</td> <td>50.5</td> <td>45.9</td> <td>175.8</td> </tr> <tr> <td>Slot Type || country_of_headq.</td> <td>5.6</td> <td>-0.8</td> <td>5.3</td> <td>41.5</td> <td>54.4</td> <td>146.3</td> </tr> <tr> <td>Slot Type || city_of_headq.</td> <td>1.6</td> <td>-6.9</td> <td>6.7</td> <td>30.3</td> <td>54.9</td> 
<td>39.3</td> </tr> <tr> <td>Slot Type || employee_of</td> <td>14.9</td> <td>4.9</td> <td>7.3</td> <td>65.9</td> <td>54.9</td> <td>132.5</td> </tr> <tr> <td>Slot Type || countries_of_residence</td> <td>37.7</td> <td>8.6</td> <td>7.4</td> <td>47.4</td> <td>47.2</td> <td>134.9</td> </tr> </tbody></table> | Table 4 | table_4 | D17-1274 | 8 | emnlp2017 | Table 4 shows the distribution of training data and the F-score of each single type. We can see that, for some slot types, such as per:dateofbirthandper:age, the entity types of their candidate fillers are easy to learn and differentiate from other slot types, and their indicative words are usually explicit, thus our approach can get high f-score with limited training data (less than 507 instances). In contrast, for some slots,such as org:location of headquarters, their clues are implicit and the entity types of candidate filler sare difficult to be inferred. Although the size of training data is larger (more than 1,433 instances), the f-score remains quite low. One possible solution is to incorporate fine-grained entity types from existing tools into the neural architecture.". | [1, 1, 1, 1, 2] | ['Table 4 shows the distribution of training data and the F-score of each single type.', 'We can see that, for some slot types, such as per:dateofbirthandper:age, the entity types of their candidate fillers are easy to learn and differentiate from other slot types, and their indicative words are usually explicit, thus our approach can get high f-score with limited training data (less than 507 instances).', 'In contrast, for some slots,such as org:location of headquarters, their clues are implicit and the entity types of candidate filler sare difficult to be inferred.', 'Although the size of training data is larger (more than 1,433 instances), the f-score remains quite low.', 'One possible solution is to incorporate fine-grained entity types from existing tools into the neural architecture.".'] | [['Training Data Distribution (%)', 'F1 (%)'], ['date_of_birth', 'age', 'F1 (%)'], ['state_of_headq.', 'country_of_headq.', 'city_of_headq.', 'F1 (%)'], ['state_of_headq.', 'country_of_headq.', 'city_of_headq.', 'Training Data Distribution (%)', 'F1 (%)'], None] | 1 |
D17-1275table_2 | Development set results on Darkode. Bolded F1 values represent statistically-significant improvements over all other system values in the column with p < 0.05 according to a bootstrap resampling test. Our post-level system outperforms our binary classifier at whole-post accuracy and on type-level product extraction, even though it is less good on the token-level metric. All systems consistently identify product NPs better than they identify product tokens. However, there is a substantial gap between our systems and human performance. | 2 | [['Token Prediction', 'Freq'], ['Token Prediction', 'Dict'], ['Token Prediction', 'NER'], ['Token Prediction', 'Binary'], ['Token Prediction', 'Post'], ['Token Prediction', 'Human*'], ['NP Prediction', 'Freq'], ['NP Prediction', 'Dict'], ['NP Prediction', 'First'], ['NP Prediction', 'NER'], ['NP Prediction', 'Binary'], ['NP Prediction', 'Post'], ['NP Prediction', 'Human*']] | 2 | [['Token', 'P'], ['Token', 'R'], ['Token', 'F1'], ['Products', 'P'], ['Products', 'R'], ['Products', 'F1'], ['Post', 'Acc.'], ['NPs', 'P'], ['NPs', 'R'], ['NPs', 'F1'], ['Products', 'P'], ['Products', 'R'], ['Products', 'F1'], ['Post', 'Acc.']] | [['41.9', '42.5', '42.2', '48.4', '33.5', '39.6', '45.3', '-', '-', '-', '-', '-', '-', '-'], ['57.9', '51.1', '54.3', '65.6', '44', '52.7', '60.8', '-', '-', '-', '-', '-', '-', '-'], ['59.7', '62.2', '60.9', '60.8', '62.6', '61.7', '72.2', '-', '-', '-', '-', '-', '-', '-'], ['62.4', '76', '68.5', '58.1', '77.6', '66.4', '75.2', '-', '-', '-', '-', '-', '-', '-'], ['82.4', '36.1', '50.3', '83.5', '56.6', '67.5', '82.4', '-', '-', '-', '-', '-', '-', '-'], ['86.9', '80.4', '83.5', '87.7', '77.6', '82.2', '89.2', '-', '-', '-', '-', '-', '-', '-'], ['-', '-', '-', '-', '-', '-', '-', '61.8', '28.9', '39.4', '61.8', '50', '55.2', '61.8'], ['-', '-', '-', '-', '-', '-', '-', '57.9', '61.8', '59.8', '71.8', '57.5', '63.8', '68'], ['-', '-', '-', '-', '-', '-', '-', '73.1', '34.2', '46.7', '70.3', '59.1', '65.4', '73.1'], ['-', '-', '-', '-', '-', '-', '-', '63.6', '63.3', '63.4', '69.7', '70.3', '70', '76.3'], ['-', '-', '-', '-', '-', '-', '-', '67', '74.8', '70.7', '65.5', '82.5', '73', '82.4'], ['-', '-', '-', '-', '-', '-', '-', '87.6', '41', '55.9', '87.6', '70.8', '78.3', '87.6'], ['-', '-', '-', '-', '-', '-', '-', '87.6', '83.2', '85.3', '91.6', '84.9', '88.1', '93']] | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'Acc.', 'P', 'R', 'F1', 'P', 'R', 'F1', 'Acc.'] | ['Binary', 'Post', 'Acc.'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Token || P</th> <th>Token || R</th> <th>Token || F1</th> <th>Products || P</th> <th>Products || R</th> <th>Products || F1</th> <th>Post || Acc.</th> <th>NPs || P</th> <th>NPs || R</th> <th>NPs || F1</th> <th>Products || P</th> <th>Products || R</th> <th>Products || F1</th> <th>Post || Acc.</th> </tr> </thead> <tbody> <tr> <td>Token Prediction || Freq</td> <td>41.9</td> <td>42.5</td> <td>42.2</td> <td>48.4</td> <td>33.5</td> <td>39.6</td> <td>45.3</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Token Prediction || Dict</td> <td>57.9</td> <td>51.1</td> <td>54.3</td> <td>65.6</td> <td>44</td> <td>52.7</td> <td>60.8</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Token Prediction || NER</td> <td>59.7</td> <td>62.2</td> <td>60.9</td> <td>60.8</td> <td>62.6</td> <td>61.7</td> <td>72.2</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> 
<td>-</td> <td>-</td> </tr> <tr> <td>Token Prediction || Binary</td> <td>62.4</td> <td>76</td> <td>68.5</td> <td>58.1</td> <td>77.6</td> <td>66.4</td> <td>75.2</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Token Prediction || Post</td> <td>82.4</td> <td>36.1</td> <td>50.3</td> <td>83.5</td> <td>56.6</td> <td>67.5</td> <td>82.4</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Token Prediction || Human*</td> <td>86.9</td> <td>80.4</td> <td>83.5</td> <td>87.7</td> <td>77.6</td> <td>82.2</td> <td>89.2</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>NP Prediction || Freq</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>61.8</td> <td>28.9</td> <td>39.4</td> <td>61.8</td> <td>50</td> <td>55.2</td> <td>61.8</td> </tr> <tr> <td>NP Prediction || Dict</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>57.9</td> <td>61.8</td> <td>59.8</td> <td>71.8</td> <td>57.5</td> <td>63.8</td> <td>68</td> </tr> <tr> <td>NP Prediction || First</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>73.1</td> <td>34.2</td> <td>46.7</td> <td>70.3</td> <td>59.1</td> <td>65.4</td> <td>73.1</td> </tr> <tr> <td>NP Prediction || NER</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>63.6</td> <td>63.3</td> <td>63.4</td> <td>69.7</td> <td>70.3</td> <td>70</td> <td>76.3</td> </tr> <tr> <td>NP Prediction || Binary</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>67</td> <td>74.8</td> <td>70.7</td> <td>65.5</td> <td>82.5</td> <td>73</td> <td>82.4</td> </tr> <tr> <td>NP Prediction || Post</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>87.6</td> <td>41</td> <td>55.9</td> <td>87.6</td> <td>70.8</td> <td>78.3</td> <td>87.6</td> </tr> <tr> <td>NP Prediction || Human*</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>87.6</td> <td>83.2</td> <td>85.3</td> <td>91.6</td> <td>84.9</td> <td>88.1</td> <td>93</td> </tr> </tbody></table> | Table 2 | table_2 | D17-1275 | 6 | emnlp2017 | Table 2 shows development set results on Darkode for each of the four systems for each metric described in Section 3. Our learning-based systems substantially outperform the baselines on the metrics they are optimized for. The post-level system underperforms the binary classifier on the token evaluation, but is superior at not only postlevel accuracy but also product type F1. This lends credence to our hypothesis that picking one product suffices to characterize a large fraction of posts. Comparing the automatic systems with human annotator performance we see a substantial gap. Note that our best annotator’s token F1 was 89.8, and NP post accuracy was 100%; a careful, well-trained annotator can achieve very high performance, indicating a high skyline. 
| [1, 2, 1, 2, 1, 1] | ['Table 2 shows development set results on Darkode for each of the four systems for each metric described in Section 3.', 'Our learning-based systems substantially outperform the baselines on the metrics they are optimized for.', 'The post-level system underperforms the binary classifier on the token evaluation, but is superior at not only postlevel accuracy but also product type F1.', 'This lends credence to our hypothesis that picking one product suffices to characterize a large fraction of posts.', 'Comparing the automatic systems with human annotator performance we see a substantial gap.', 'Note that our best annotator’s token F1 was 89.8, and NP post accuracy was 100%; a careful, well-trained annotator can achieve very high performance, indicating a high skyline.'] | [None, None, ['Post', 'Acc.', 'Binary', 'Token', 'F1'], ['Products'], ['Human*', 'P'], ['Binary', 'Post', 'F1', 'Acc.']] | 1 |
D17-1275table_5 | Product token out-of-vocabulary rates on development sets (test set for Blackhat and Nulled) of various forums with respect to training on Darkode and Hack Forums. We also show the recall of an NPlevel system on seen (Rseen) and OOV (Roov) tokens. Darkode seems to be more “general” than Hack Forums: the Darkode system generally has lower OOV rates and provides more consistent performance on OOV tokens than the Hack Forums system. | 2 | [['System', 'Binary (Darkode)'], ['System', 'Binary (HF)']] | 3 | [['Test', 'Darkode', '% OOV'], ['Test', 'Darkode', 'Rseen'], ['Test', 'Darkode', 'Roov'], ['Test', 'Hack Forums', '% OOV'], ['Test', 'Hack Forums', 'Rseen'], ['Test', 'Hack Forums', 'Roov'], ['Test', 'Blackhat', '% OOV'], ['Test', 'Blackhat', 'Rseen'], ['Test', 'Blackhat', 'Roov'], ['Test', 'Nulled', '% OOV'], ['Test', 'Nulled', 'Rseen'], ['Test', 'Nulled', 'Roov']] | [['20', '78', '62', '41', '64', '47', '42', '69', '46', '30', '72', '45'], ['50', '76', '40', '35', '75', '42', '51 70', '70', '38', '33', '83', '32']] | column | ['% OOV', 'Rseen', 'Roov', '% OOV', 'Rseen', 'Roov', '% OOV', 'Rseen', 'Roov', '% OOV', 'Rseen', 'Roov'] | ['Binary (Darkode)', 'Binary (HF)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Test || Darkode || % OOV</th> <th>Test || Darkode || Rseen</th> <th>Test || Darkode || Roov</th> <th>Test || Hack Forums || % OOV</th> <th>Test || Hack Forums || Rseen</th> <th>Test || Hack Forums || Roov</th> <th>Test || Blackhat || % OOV</th> <th>Test || Blackhat || Rseen</th> <th>Test || Blackhat || Roov</th> <th>Test || Nulled || % OOV</th> <th>Test || Nulled || Rseen</th> <th>Test || Nulled || Roov</th> </tr> </thead> <tbody> <tr> <td>System || Binary (Darkode)</td> <td>20</td> <td>78</td> <td>62</td> <td>41</td> <td>64</td> <td>47</td> <td>42</td> <td>69</td> <td>46</td> <td>30</td> <td>72</td> <td>45</td> </tr> <tr> <td>System || Binary (HF)</td> <td>50</td> <td>76</td> <td>40</td> <td>35</td> <td>75</td> <td>42</td> <td>51 70</td> <td>70</td> <td>38</td> <td>33</td> <td>83</td> <td>32</td> </tr> </tbody></table> | Table 5 | table_5 | D17-1275 | 8 | emnlp2017 | Table 5 confirms this intuition: it shows product out-of-vocabulary rates in each of the four forums relative to training on both Darkode and Hack Forums, along with recall of an NP-level system on both previously seen and OOV products. As expected, performance is substantially higher on in vocabulary products. OOV rates of a Darkode-trained system are generally lower on new forums,indicating that that forum has better all-around product coverage. A system trained on Darkode is therefore in some sense more domain-general than one trained on Hack Forums. | [1, 1, 1, 1] | ['Table 5 confirms this intuition: it shows product out-of-vocabulary rates in each of the four forums relative to training on both Darkode and Hack Forums, along with recall of an NP-level system on both previously seen and OOV products.', 'As expected, performance is substantially higher on in vocabulary products.', 'OOV rates of a Darkode-trained system are generally lower on new forums,indicating that that forum has better all-around product coverage.', 'A system trained on Darkode is therefore in some sense more domain-general than one trained on Hack Forums.'] | [['Binary (Darkode)', 'Binary (HF)', 'Rseen', 'Roov'], ['% OOV'], ['% OOV', 'Binary (Darkode)'], ['Binary (Darkode)', 'Binary (HF)']] | 1 |
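The D17-1275 table_5 record above reports product-token OOV rates together with recall split over seen (Rseen) and OOV (Roov) tokens. As a rough illustration of how such a split can be computed, here is a minimal sketch; the definition of "seen" (token occurs in the training data) and the toy inputs are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch: OOV rate and recall split over seen vs. unseen gold product tokens.
def split_recall(train_tokens, gold_tokens, predicted_tokens):
    train_vocab = set(train_tokens)
    predicted = set(predicted_tokens)
    seen = [t for t in gold_tokens if t in train_vocab]
    oov = [t for t in gold_tokens if t not in train_vocab]

    oov_rate = len(oov) / len(gold_tokens) if gold_tokens else 0.0
    r_seen = sum(t in predicted for t in seen) / len(seen) if seen else 0.0
    r_oov = sum(t in predicted for t in oov) / len(oov) if oov else 0.0
    return oov_rate, r_seen, r_oov

# Hypothetical example: "crypter" is a gold product token never seen in training.
print(split_recall(["rat", "bot"], ["rat", "crypter", "bot"], ["rat", "bot"]))
# -> (0.333..., 1.0, 0.0)
```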
D17-1276table_3 | Results on GENIA. | 1 | [['LCRF (single)'], ['LCRF (multiple)'], ['Finkel and Manning (2009)'], ['Lu and Roth (2015)'], ['This work (STATE)'], ['This work (EDGE)']] | 1 | [['P'], ['R'], ['F1'], ['w/s']] | [['77.1', '63.3', '69.5', '81.6'], ['75.9', '66.1', '70.6', '175.8'], ['75.4', '65.9', '70.3', ' -'], ['74.2', '66.7', '70.3', '931.9'], ['74', '67.7', '70.7', '110.8'], ['75.4', '66.8', '70.8', '389.2']] | column | ['P', 'R', 'F1', 'w/s'] | ['This work (STATE)', 'This work (EDGE)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> <th>w/s</th> </tr> </thead> <tbody> <tr> <td>LCRF (single)</td> <td>77.1</td> <td>63.3</td> <td>69.5</td> <td>81.6</td> </tr> <tr> <td>LCRF (multiple)</td> <td>75.9</td> <td>66.1</td> <td>70.6</td> <td>175.8</td> </tr> <tr> <td>Finkel and Manning (2009)</td> <td>75.4</td> <td>65.9</td> <td>70.3</td> <td>-</td> </tr> <tr> <td>Lu and Roth (2015)</td> <td>74.2</td> <td>66.7</td> <td>70.3</td> <td>931.9</td> </tr> <tr> <td>This work (STATE)</td> <td>74</td> <td>67.7</td> <td>70.7</td> <td>110.8</td> </tr> <tr> <td>This work (EDGE)</td> <td>75.4</td> <td>66.8</td> <td>70.8</td> <td>389.2</td> </tr> </tbody></table> | Table 3 | table_3 | D17-1276 | 8 | emnlp2017 | Table 3 shows the results of running the models with F1-score tuning on GENIA dataset. All models include Brown clustering features learned from PubMed abstracts. Besides the mention hypergraph baseline, we also make comparisons with the system of Finkel and Manning (2009) that can also support overlapping mentions. We see that the mention hypergraph model matches the performance of the constituency parser-based model of Finkel and Manning
(2009), while our models based on mention separators yield significantly higher scores (p < 0.05) than all other baselines (except LCRF (multiple), which we will discuss shortly). | [1, 2, 1, 1] | ['Table 3 shows the results of running the models with F1-score tuning on GENIA dataset.', 'All models include Brown clustering features learned from PubMed abstracts.', 'Besides the mention hypergraph baseline, we also make comparisons with the system of Finkel and Manning (2009) that can also support overlapping mentions.', 'We see that the mention hypergraph model matches the performance of the constituency parser-based model of Finkel and Manning\r\n(2009), while our models based on mention separators yield significantly higher scores (p < 0.05) than all other baselines (except LCRF (multiple), which we will discuss shortly).'] | [['F1'], None, ['This work (STATE)', 'This work (EDGE)', 'Finkel and Manning (2009)'], ['This work (STATE)', 'This work (EDGE)', 'LCRF (single)', 'LCRF (multiple)', 'Finkel and Manning (2009)', 'Lu and Roth (2015)', 'F1']] | 1 |
D17-1279table_1 | Overall span-level F1 results for keyphrase identification (SemEval Subtask A) and classification (SemEval Subtask B). ∗ indicates tranductive setting. + indicates not documented as either transductive or inductive. indicates score not reported or not applied. | 2 | [['Span Level', 'Gupta et.al.(unsupervised)'], ['Span Level', 'Tsai et.al. (unsupervised)'], ['Span Level', 'MULTITASK'], ['Span Level', 'Best Non-Neural SemEval+'], ['Span Level', 'Best Neural SemEval+'], ['Span Level', 'NN-CRF (supervised)'], ['Span Level', 'NN-CRF (semi)'], ['Span Level', 'NN-CRF (semi)*']] | 1 | [['Classification (dev)'], ['Classification (test)'], ['Identification']] | [['-', '9.8', '6.4'], ['-', '11.9', '8'], ['45.5', '-', '-'], ['-', '38', '51'], ['-', '44', '56'], ['48.1', '40.2', '52.1'], ['51.9', '45.3', '56.9'], ['52.1', '46.6', '57.6']] | column | ['F1', 'F1', 'F1'] | ['NN-CRF (supervised)', 'NN-CRF (semi)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Classification (dev)</th> <th>Classification (test)</th> <th>Identification</th> </tr> </thead> <tbody> <tr> <td>Span Level || Gupta et.al.(unsupervised)</td> <td>-</td> <td>9.8</td> <td>6.4</td> </tr> <tr> <td>Span Level || Tsai et.al. (unsupervised)</td> <td>-</td> <td>11.9</td> <td>8</td> </tr> <tr> <td>Span Level || MULTITASK</td> <td>45.5</td> <td>-</td> <td>-</td> </tr> <tr> <td>Span Level || Best Non-Neural SemEval+</td> <td>-</td> <td>38</td> <td>51</td> </tr> <tr> <td>Span Level || Best Neural SemEval+</td> <td>-</td> <td>44</td> <td>56</td> </tr> <tr> <td>Span Level || NN-CRF (supervised)</td> <td>48.1</td> <td>40.2</td> <td>52.1</td> </tr> <tr> <td>Span Level || NN-CRF (semi)</td> <td>51.9</td> <td>45.3</td> <td>56.9</td> </tr> <tr> <td>Span Level || NN-CRF (semi)*</td> <td>52.1</td> <td>46.6</td> <td>57.6</td> </tr> </tbody></table> | Table 1 | table_1 | D17-1279 | 6 | emnlp2017 | Table 1 reports the results of our neural sequence tagging model NN-CRF in both supervised and semi-supervised learning (ULM and graph-based), and compares them with the baselines and the state-of-the-art (best SemEval System (Augenstein et al., 2017)). Augenstein and Søgaard (2017) use a multi-task learning strategy to improve the performance of supervised keyphrase classification, but they only report dev set performance on SemEval Task 10, we also include their result here and refer it as
MULTITASK. We report results for both span identification (SemEval SubTask A) and span classification into TASK, PROCESS and MATERIAL (SemEval Subtask B). The results show that our neural sequence tagging models significantly outperforms the state of the art and both baselines. It confirms that our neural tagging model outperforms other nonneural and neural models for the SemEval ScienceIE challenge. It further shows that our system achieves significant boost from semi-supervised learning using unlabeled data. | [1, 1, 2, 1, 1, 1] | ['Table 1 reports the results of our neural sequence tagging model NN-CRF in both supervised and semi-supervised learning (ULM and graph-based), and compares them with the baselines and the state-of-the-art (best SemEval System (Augenstein et al., 2017)).', 'Augenstein and Søgaard (2017) use a multi-task learning strategy to improve the performance of supervised keyphrase classification, but they only report dev set performance on SemEval Task 10, we also include their result here and refer it as\r\nMULTITASK.', 'We report results for both span identification (SemEval SubTask A) and span classification into TASK, PROCESS and MATERIAL (SemEval Subtask B).', 'The results show that our neural sequence tagging models significantly outperforms the state of the art and both baselines.', 'It confirms that our neural tagging model outperforms other nonneural and neural models for the SemEval ScienceIE challenge.', 'It further shows that our system achieves significant boost from semi-supervised learning using unlabeled data.'] | [['NN-CRF (supervised)', 'NN-CRF (semi)'], ['Classification (dev)', 'MULTITASK'], None, ['NN-CRF (semi)', 'Identification'], ['NN-CRF (semi)', 'Best Non-Neural SemEval+', 'Best Neural SemEval+', 'Identification'], ['NN-CRF (semi)']] | 1 |
D17-1284table_2 | Results for ACE-2004: F1 is calculated for predicted mentions, and accuracy on goldmentions. Results for Wikifier and AIDA are from (Ling et al., 2015). All systems use the same mention extraction protocol showing the difference in F1 is due to linking performance. | 1 | [['AIDA'], ['Wikifier'], ['Vinculum'], ['Model C'], ['Model CDT'], ['Model CDTE']] | 1 | [['F1'], ['Accuracy']] | [['77.8', '-'], ['85.1', '-'], ['88.5', '-'], ['88.9', '93.1'], ['89.8', '93.9'], ['90.7', '94.3']] | column | ['F1', 'Accuracy'] | ['Model C', 'Model CDT', 'Model CDTE', 'F1', 'Accuracy'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>AIDA</td> <td>77.8</td> <td>-</td> </tr> <tr> <td>Wikifier</td> <td>85.1</td> <td>-</td> </tr> <tr> <td>Vinculum</td> <td>88.5</td> <td>-</td> </tr> <tr> <td>Model C</td> <td>88.9</td> <td>93.1</td> </tr> <tr> <td>Model CDT</td> <td>89.8</td> <td>93.9</td> </tr> <tr> <td>Model CDTE</td> <td>90.7</td> <td>94.3</td> </tr> </tbody></table> | Table 2 | table_2 | D17-1284 | 7 | emnlp2017 | In Table 2 we present results for our models on ACE-2004. Our model outperforms the Wikifier and Vinculum systems that only use information from Wikipedia, and AIDA, by a significant margin, indicating its possible over-fitting to the CoNLL domain. Hence, it shows our model's ability to perform accurate linking across different datasets without using domain-specific information. | [1, 1, 1] | ['In Table 2 we present results for our models on ACE-2004.', 'Our model outperforms the Wikifier and Vinculum systems that only use information from Wikipedia, and AIDA, by a significant margin, indicating its possible over-fitting to the CoNLL domain.', "Hence, it shows our model's ability to perform accurate linking across different datasets without using domain-specific information."] | [None, ['Model C', 'Model CDT', 'Model CDTE', 'F1', 'Wikifier', 'Vinculum', 'AIDA'], ['Model C', 'Model CDT', 'Model CDTE', 'Accuracy']] | 1 |
D17-1285table_2 | Test results (F1 score) on the CauseEffect subset(?) of SemEval-2010 dataset. Results are grouped as 1) Top 3 participating teams in SemEval-2010 competition; 2) Baseline BiGRU model; 3) Recent state-of-the-art treeLSTM model (Miwa and Bansal, 2016); 4) Our work. | 2 | [['Model', 'Tymoshenko and Giuliano (2010)'], ['Model', 'Tratz and Hovy (2010)'], ['Model', 'Rink and Harabagiu (2010)'], ['Model', 'BiGRU'], ['Model', 'Miwa and Bansal (2016)'], ['Model', 'Contextual similarity modeling'], ['Model', 'Relational similarity modeling']] | 1 | [['F1 Score']] | [['82.30%'], ['87.63%'], ['89.63%'], ['89.89%'], ['91.57%'], ['90.77%'], ['92.28%']] | column | ['F1 Score'] | ['Contextual similarity modeling', 'Relational similarity modeling', 'F1 Score'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 Score</th> </tr> </thead> <tbody> <tr> <td>Model || Tymoshenko and Giuliano (2010)</td> <td>82.30%</td> </tr> <tr> <td>Model || Tratz and Hovy (2010)</td> <td>87.63%</td> </tr> <tr> <td>Model || Rink and Harabagiu (2010)</td> <td>89.63%</td> </tr> <tr> <td>Model || BiGRU</td> <td>89.89%</td> </tr> <tr> <td>Model || Miwa and Bansal (2016)</td> <td>91.57%</td> </tr> <tr> <td>Model || Contextual similarity modeling</td> <td>90.77%</td> </tr> <tr> <td>Model || Relational similarity modeling</td> <td>92.28%</td> </tr> </tbody></table> | Table 2 | table_2 | D17-1285 | 8 | emnlp2017 | We also evaluate the relation extraction component (Sec. 5) on Cause-Effect subset of SemEval2010 dataset. Note our causality/correlation relation extraction component is not supposed to be a general purpose one, since our system only focuses on insight extraction of biomedical/health literature. We compare our relation extraction models against previous work on the Cause Effect subset of the data, Table 2 shows our relational similarity model, without the use of sparse features or external resources such as WordNet, outperforms recent state-of-the-art treeLSTM model (Miwa and Bansal, 2016). It also shows BiGRU model is reasonably competitive on this dataset, which is why we use it in our baseline system for comparison purpose. | [2, 2, 1, 1] | ['We also evaluate the relation extraction component (Sec. 5) on Cause-Effect subset of SemEval2010 dataset.', 'Note our causality/correlation relation extraction component is not supposed to be a general purpose one, since our system only focuses on insight extraction of biomedical/health literature.', 'We compare our relation extraction models against previous work on the Cause Effect subset of the data, Table 2 shows our relational similarity model, without the use of sparse features or external resources such as WordNet, outperforms recent state-of-the-art treeLSTM model (Miwa and Bansal, 2016).', 'It also shows BiGRU model is reasonably competitive on this dataset, which is why we use it in our baseline system for comparison purpose.'] | [None, ['Contextual similarity modeling', 'Relational similarity modeling'], ['Contextual similarity modeling', 'Relational similarity modeling', 'F1 Score', 'Miwa and Bansal (2016)'], ['BiGRU', 'F1 Score']] | 1 |
D17-1291table_2 | Performance comparison of different outlier document detection methods. All results are shown as percents. Data set Method | 4 | [['Dataset', 'NYT', 'Method', 'TFIDF-COS'], ['Dataset', 'NYT', 'Method', 'P2V-COS'], ['Dataset', 'NYT', 'Method', 'UNI-KL'], ['Dataset', 'NYT', 'Method', 'TM-KL'], ['Dataset', 'NYT', 'Method', 'VMF-SF'], ['Dataset', 'NYT', 'Method', 'VMF-E'], ['Dataset', 'NYT', 'Method', 'VMF-Q'], ['Dataset', 'ARNET', 'Method', 'TFIDF-COS'], ['Dataset', 'ARNET', 'Method', 'P2V-COS'], ['Dataset', 'ARNET', 'Method', 'UNI-KL'], ['Dataset', 'ARNET', 'Method', 'TM-KL'], ['Dataset', 'ARNET', 'Method', 'VMF-SF'], ['Dataset', 'ARNET', 'Method', 'VMF-E'], ['Dataset', 'ARNET', 'Method', 'VMF-Q']] | 1 | [['MAP'], ['Rcl@1%'], ['Rcl@2%'], ['Rcl@5%']] | [['5.03', '4.73', '6.72', '14.72'], ['22.07', '23.45', '44.64', '66.18'], ['10.28', '11.92', '16.32', '31.34'], ['14.51', '16.5', '16.5', '24.67'], ['33.7', '31.03', '44.45', '62.6'], ['36.57', '35.91', '49.41', '67.56'], ['41.88', '56.99', '63.29', '79.23'], ['8.99', '15.4', '18.75', '30.23'], ['7.39', '10.51', '14.78', '24.14'], ['7.46', '14.13', '22.26', '39.4'], ['10.09', '12.04', '15.37', '20.24'], ['10.69', '12.05', '22.58', '44.51'], ['10.51', '12.67', '25.92', '45.37'], ['19.74', '22.4', '34.4', '53.87']] | column | ['MAP', 'Rcl@1%', 'Rcl@2%', 'Rcl@5%'] | ['VMF-Q'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MAP</th> <th>Rcl@1%</th> <th>Rcl@2%</th> <th>Rcl@5%</th> </tr> </thead> <tbody> <tr> <td>Dataset || NYT || Method || TFIDF-COS</td> <td>5.03</td> <td>4.73</td> <td>6.72</td> <td>14.72</td> </tr> <tr> <td>Dataset || NYT || Method || P2V-COS</td> <td>22.07</td> <td>23.45</td> <td>44.64</td> <td>66.18</td> </tr> <tr> <td>Dataset || NYT || Method || UNI-KL</td> <td>10.28</td> <td>11.92</td> <td>16.32</td> <td>31.34</td> </tr> <tr> <td>Dataset || NYT || Method || TM-KL</td> <td>14.51</td> <td>16.5</td> <td>16.5</td> <td>24.67</td> </tr> <tr> <td>Dataset || NYT || Method || VMF-SF</td> <td>33.7</td> <td>31.03</td> <td>44.45</td> <td>62.6</td> </tr> <tr> <td>Dataset || NYT || Method || VMF-E</td> <td>36.57</td> <td>35.91</td> <td>49.41</td> <td>67.56</td> </tr> <tr> <td>Dataset || NYT || Method || VMF-Q</td> <td>41.88</td> <td>56.99</td> <td>63.29</td> <td>79.23</td> </tr> <tr> <td>Dataset || ARNET || Method || TFIDF-COS</td> <td>8.99</td> <td>15.4</td> <td>18.75</td> <td>30.23</td> </tr> <tr> <td>Dataset || ARNET || Method || P2V-COS</td> <td>7.39</td> <td>10.51</td> <td>14.78</td> <td>24.14</td> </tr> <tr> <td>Dataset || ARNET || Method || UNI-KL</td> <td>7.46</td> <td>14.13</td> <td>22.26</td> <td>39.4</td> </tr> <tr> <td>Dataset || ARNET || Method || TM-KL</td> <td>10.09</td> <td>12.04</td> <td>15.37</td> <td>20.24</td> </tr> <tr> <td>Dataset || ARNET || Method || VMF-SF</td> <td>10.69</td> <td>12.05</td> <td>22.58</td> <td>44.51</td> </tr> <tr> <td>Dataset || ARNET || Method || VMF-E</td> <td>10.51</td> <td>12.67</td> <td>25.92</td> <td>45.37</td> </tr> <tr> <td>Dataset || ARNET || Method || VMF-Q</td> <td>19.74</td> <td>22.4</td> <td>34.4</td> <td>53.87</td> </tr> </tbody></table> | Table 2 | table_2 | D17-1291 | 7 | emnlp2017 | Performance comparison. Table 2 shows performance of different outlier document detection methods. It can be observed that our method outperforms all the baselines in both data sets. In both data sets, VMF-Q can achieve a 45% to 135% increase from baselines in terms of recall by examining the top 1% outliers. 
Generally, performances of most methods are lower in the ARNET data set comparing to NYT, potentially because the relatively short document lengths and more technical terminologies in ARNET. | [2, 1, 1, 1, 1] | ['Performance comparison.', 'Table 2 shows performance of different outlier document detection methods.', 'It can be observed that our method outperforms all the baselines in both data sets.', 'In both data sets, VMF-Q can achieve a 45% to 135% increase from baselines in terms of recall by examining the top 1% outliers.', 'Generally, performances of most methods are lower in the ARNET data set comparing to NYT, potentially because the relatively short document lengths and more technical terminologies in ARNET.'] | [None, None, ['VMF-SF', 'VMF-E', 'VMF-Q', 'Dataset', 'NYT', 'ARNET'], ['VMF-Q', 'Dataset', 'NYT', 'ARNET'], ['VMF-Q', 'Dataset', 'NYT', 'ARNET']] | 1 |
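The D17-1291 record above evaluates ranked outlier lists with MAP and Recall@k%. As a rough illustration only (the record does not spell out the exact averaging protocol), below is a common way to compute average precision and recall at the top k% of a single ranking, treating true outlier documents as the positive class; the toy ranking is made up.

```python
def average_precision(ranked_is_outlier):
    """Average precision over one ranked list: mean of precision@i at each
    position i where a true outlier appears. Input is a list of booleans,
    ordered from highest to lowest outlier score."""
    hits, precisions = 0, []
    for i, is_outlier in enumerate(ranked_is_outlier, start=1):
        if is_outlier:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / hits if hits else 0.0

def recall_at_percent(ranked_is_outlier, percent):
    """Fraction of all true outliers retrieved within the top `percent`% of the ranking."""
    k = max(1, int(len(ranked_is_outlier) * percent / 100))
    total = sum(ranked_is_outlier)
    return sum(ranked_is_outlier[:k]) / total if total else 0.0

ranking = [True, False, True, False, False, False, False, False, False, False]
print(average_precision(ranking))      # (1/1 + 2/3) / 2 = 0.833...
print(recall_at_percent(ranking, 20))  # top 2 of 10 docs contain 1 of 2 outliers -> 0.5
```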
D17-1292table_9 | Human evaluation on explanation chains generated by symbolic and neural reasoners. | 2 | [['Reasoners', 'Accuracy (%)']] | 1 | [['SYMB'], ['NEUR']] | [['42.5', '57.5']] | row | ['Accuracy (%)'] | ['Accuracy (%)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SYMB</th> <th>NEUR</th> </tr> </thead> <tbody> <tr> <td>Reasoners || Accuracy (%)</td> <td>42.5</td> <td>57.5</td> </tr> </tbody></table> | Table 9 | table_9 | D17-1292 | 9 | emnlp2017 | We also conduct a human assessment on the explanation chains produced by the two reasoners, asking people to choose more convincing explanation chains for each feature-target pair. Table 9 shows their relative preferences. | [1, 1] | ['We also conduct a human assessment on the explanation chains produced by the two reasoners, asking people to choose more convincing explanation chains for each feature-target pair.', 'Table 9 shows their relative preferences.'] | [['Reasoners', 'Accuracy (%)'], None] | 1 |
D17-1293table_2 | Result of thread-subtitle matching. | 2 | [['Model', 'BOW'], ['Model', 'Word2Vec'], ['Model', 'Para2Vec'], ['Model', 'HDV'], ['Model', 'CDR']] | 2 | [['People and Network', 'P@1'], ['People and Network', 'P@3'], ['People and Network', 'P@5'], ['Introduction to MOOC', 'P@1'], ['Introduction to MOOC', 'P@3'], ['Introduction to MOOC', 'P@5']] | [['0.437', '0.718', '0.806', '0.449', '0.811', '0.909'], ['0.485', '0.699', '0.816', '0.453', '0.826', '0.89'], ['0.408', '0.612', '0.728', '0.504', '0.823', '0.894'], ['0.466', '0.621', '0.777', '0.496', '0.819', '0.913'], ['0.505', '0.689', '0.786', '0.52', '0.854', '0.941']] | column | ['P@1', 'P@3', 'P@5', 'P@1', 'P@3', 'P@5'] | ['CDR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>People and Network || P@1</th> <th>People and Network || P@3</th> <th>People and Network || P@5</th> <th>Introduction to MOOC || P@1</th> <th>Introduction to MOOC || P@3</th> <th>Introduction to MOOC || P@5</th> </tr> </thead> <tbody> <tr> <td>Model || BOW</td> <td>0.437</td> <td>0.718</td> <td>0.806</td> <td>0.449</td> <td>0.811</td> <td>0.909</td> </tr> <tr> <td>Model || Word2Vec</td> <td>0.485</td> <td>0.699</td> <td>0.816</td> <td>0.453</td> <td>0.826</td> <td>0.89</td> </tr> <tr> <td>Model || Para2Vec</td> <td>0.408</td> <td>0.612</td> <td>0.728</td> <td>0.504</td> <td>0.823</td> <td>0.894</td> </tr> <tr> <td>Model || HDV</td> <td>0.466</td> <td>0.621</td> <td>0.777</td> <td>0.496</td> <td>0.819</td> <td>0.913</td> </tr> <tr> <td>Model || CDR</td> <td>0.505</td> <td>0.689</td> <td>0.786</td> <td>0.52</td> <td>0.854</td> <td>0.941</td> </tr> </tbody></table> | Table 2 | table_2 | D17-1293 | 5 | emnlp2017 | Result Firstly we use all the data to learn word embeddings by models. Then the learned word vectors are utilized to calculate the similarity between threads and subtitles, and rank the subtitles. Table 2 reports the results of thread-subtitle matching. We can notice that there are some anomalies in P@3 and P@5 results. It may be for the reason of dataset. In the first MOOC (people and network), video subtitles contain relatively less words, and therefore it is hard to get effective representations. Overall, the proposed models can achieve better performance than baselines, and we highlight the Precision@1 results. Compared to HDV which also considers the streaming documents, our model is better at every task. This indicates our model can effectively capture the latent similarity. | [2, 2, 1, 1, 2, 1, 1, 1, 2] | ['Result Firstly we use all the data to learn word embeddings by models.', 'Then the learned word vectors are utilized to calculate the similarity between threads and subtitles, and rank the subtitles.', 'Table 2 reports the results of thread-subtitle matching.', 'We can notice that there are some anomalies in P@3 and P@5 results.', 'It may be for the reason of dataset.', 'In the first MOOC (people and network), video subtitles contain relatively less words, and therefore it is hard to get effective representations.', 'Overall, the proposed models can achieve better performance than baselines, and we highlight the Precision@1 results.', 'Compared to HDV which also considers the streaming documents, our model is better at every task.', 'This indicates our model can effectively capture the latent similarity.'] | [None, ['Word2Vec'], None, ['P@3', 'P@5'], None, ['People and Network', 'P@3', 'P@5'], ['CDR', 'People and Network', 'P@1'], ['HDV', 'CDR'], None] | 1 |
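In the thread–subtitle matching record above (D17-1293 table_2), P@1/P@3/P@5 increase monotonically, which is consistent with each thread having a single gold subtitle and P@k measuring whether that subtitle appears among the top k ranked candidates. The sketch below illustrates that reading; both the interpretation and the toy data are assumptions, not taken from the paper.

```python
def precision_at_k(ranked_subtitles_per_thread, gold_subtitle_per_thread, k):
    """Fraction of threads whose gold subtitle appears in the top-k ranked subtitles,
    assuming one gold subtitle per thread (an interpretation, not from the paper)."""
    hits = sum(
        gold_subtitle_per_thread[thread] in ranking[:k]
        for thread, ranking in ranked_subtitles_per_thread.items()
    )
    return hits / len(ranked_subtitles_per_thread)

# Hypothetical rankings for two threads.
rankings = {"t1": ["s3", "s1", "s7"], "t2": ["s2", "s9", "s4"]}
gold = {"t1": "s1", "t2": "s4"}
print(precision_at_k(rankings, gold, 1))  # 0.0
print(precision_at_k(rankings, gold, 3))  # 1.0
```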
D17-1294table_2 | Results of ablation tests for the coarsegrained classifier. | 2 | [['Features/Models', 'All'], ['Features/Models', 'All - Unigrams'], ['Features/Models', 'All - Bigrams'], ['Features/Models', 'All - Rel. Location'], ['Features/Models', 'All - Topic Models'], ['Features/Models', 'All - Productions'], ['Features/Models', 'All - Nonterminals'], ['Features/Models', 'All - Max. Depth'], ['Features/Models', 'All - Avg. Depth'], ['Features/Models', 'Phrase Inclusion - Baseline'], ['Features/Models', 'Paragraph Vec. - 50 Dimensions'], ['Features/Models', 'Paragraph Vec. - 100 Dimensions']] | 1 | [['Precision'], ['Recall'], ['F1']] | [['0.862', '0.641', '0.735'], ['0.731', '0.487', '0.585'], ['0.885', '0.59', '0.708'], ['0.889', '0.615', '0.727'], ['0.852', '0.59', '0.697'], ['0.957', '0.564', '0.71'], ['0.913', '0.538', '0.677'], ['0.857', '0.615', '0.716'], ['0.857', '0.615', '0.716'], ['0.425', '0.797', '0.554'], ['0.667', '0.211', '0.32'], ['0.667', '0.158', '0.255']] | column | ['Precision', 'Recall', 'F1'] | ['All', 'F1'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Features/Models || All</td> <td>0.862</td> <td>0.641</td> <td>0.735</td> </tr> <tr> <td>Features/Models || All - Unigrams</td> <td>0.731</td> <td>0.487</td> <td>0.585</td> </tr> <tr> <td>Features/Models || All - Bigrams</td> <td>0.885</td> <td>0.59</td> <td>0.708</td> </tr> <tr> <td>Features/Models || All - Rel. Location</td> <td>0.889</td> <td>0.615</td> <td>0.727</td> </tr> <tr> <td>Features/Models || All - Topic Models</td> <td>0.852</td> <td>0.59</td> <td>0.697</td> </tr> <tr> <td>Features/Models || All - Productions</td> <td>0.957</td> <td>0.564</td> <td>0.71</td> </tr> <tr> <td>Features/Models || All - Nonterminals</td> <td>0.913</td> <td>0.538</td> <td>0.677</td> </tr> <tr> <td>Features/Models || All - Max. Depth</td> <td>0.857</td> <td>0.615</td> <td>0.716</td> </tr> <tr> <td>Features/Models || All - Avg. Depth</td> <td>0.857</td> <td>0.615</td> <td>0.716</td> </tr> <tr> <td>Features/Models || Phrase Inclusion - Baseline</td> <td>0.425</td> <td>0.797</td> <td>0.554</td> </tr> <tr> <td>Features/Models || Paragraph Vec. - 50 Dimensions</td> <td>0.667</td> <td>0.211</td> <td>0.32</td> </tr> <tr> <td>Features/Models || Paragraph Vec. - 100 Dimensions</td> <td>0.667</td> <td>0.158</td> <td>0.255</td> </tr> </tbody></table> | Table 2 | table_2 | D17-1294 | 5 | emnlp2017 | We performed ablation tests excluding one feature at a time from the coarse-grained classifier. The results of these tests are presented in Table 2 as precision, recall, and F1 scores for the positive class, i.e., the opt-out class. Using the F1 scores as the primary evaluation metric, it appears that all features help in classification. The unigram, topic distribution, nonterminal, and modal verb and optout phrase features contribute the most to performance. Including all the features results in an F1 score of 0.735. Ablation test without unigram features resulted in the lowest F1 score of 0.585, and by analyzing features with higher logistic regression weights, we found n-grams such as unsubscribe to have intuitively high weights. We also found the production rule “S→SBAR, VP” to have a high weight, indicating that presence of subordinate clauses (SBARs) help in classification. 
| [2, 1, 1, 1, 1, 1, 2] | ['We performed ablation tests excluding one feature at a time from the coarse-grained classifier.', 'The results of these tests are presented in Table 2 as precision, recall, and F1 scores for the positive class, i.e., the opt-out class.', 'Using the F1 scores as the primary evaluation metric, it appears that all features help in classification.', 'The unigram, topic distribution, nonterminal, and modal verb and optout phrase features contribute the most to performance.', 'Including all the features results in an F1 score of 0.735.', 'Ablation test without unigram features resulted in the lowest F1 score of 0.585, and by analyzing features with higher logistic regression weights, we found n-grams such as unsubscribe to have intuitively high weights.', 'We also found the production rule “S→SBAR, VP” to have a high weight, indicating that presence of subordinate clauses (SBARs) help in classification.'] | [None, ['Precision', 'Recall', 'F1', 'All'], ['F1', 'All'], ['All - Unigrams', 'All - Topic Models', 'All - Nonterminals', 'F1'], ['All', 'F1'], ['All - Unigrams', 'F1'], None] | 1 |
D17-1296table_4 | Experiment results on the development and test data of English Switchboard data. | 2 | [['Method', 'CRF'], ['Method', 'Bi-LSTM'], ['Method', 'greedy'], ['Method', 'greedy + beam'], ['Method', 'greedy + scheduled']] | 2 | [['Dev', 'P'], ['Dev', 'R'], ['Dev', 'F1'], ['Test', 'P'], ['Test', 'R'], ['Test', 'F1']] | [['93.9', '78.3', '85.4', '91.7', '75.1', '82.6'], ['94.1', '79.3', '86.1', '91.7', '80.6', '85.8'], ['91.4', '83.7', '87.4', '91.1', '83.3', '87.1'], ['93.6', '83.6', '88.3', '92.8', '82.7', '87.5'], ['92.3', '84.3', '88.1', '91.1', '84.1', '87.5']] | column | ['P', 'R', 'F1', 'P', 'R', 'F1'] | ['greedy + beam', 'greedy + scheduled', 'F1'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || P</th> <th>Dev || R</th> <th>Dev || F1</th> <th>Test || P</th> <th>Test || R</th> <th>Test || F1</th> </tr> </thead> <tbody> <tr> <td>Method || CRF</td> <td>93.9</td> <td>78.3</td> <td>85.4</td> <td>91.7</td> <td>75.1</td> <td>82.6</td> </tr> <tr> <td>Method || Bi-LSTM</td> <td>94.1</td> <td>79.3</td> <td>86.1</td> <td>91.7</td> <td>80.6</td> <td>85.8</td> </tr> <tr> <td>Method || greedy</td> <td>91.4</td> <td>83.7</td> <td>87.4</td> <td>91.1</td> <td>83.3</td> <td>87.1</td> </tr> <tr> <td>Method || greedy + beam</td> <td>93.6</td> <td>83.6</td> <td>88.3</td> <td>92.8</td> <td>82.7</td> <td>87.5</td> </tr> <tr> <td>Method || greedy + scheduled</td> <td>92.3</td> <td>84.3</td> <td>88.1</td> <td>91.1</td> <td>84.1</td> <td>87.5</td> </tr> </tbody></table> | Table 4 | table_4 | D17-1296 | 7 | emnlp2017 | We build two baseline systems using CRF and Bi-LSTM, respectively. The hand-crafted discrete features of CRF refer to those in Ferguson et al.(2015). For the Bi-LSTM model, the token embedding is the same with our transition-based method. Table 4 shows the result of our model on both the development and test sets. Beam search improves the F-score form 87.1% to 87.5%, which is consistent with the finding of Buckman et al.(2016) on the LSTM parser of (Dyer et al., 2015) (improvements by about 0.3 point). Scheduled sampling achieves the same improvements compared to beam-search. Because of high training speed, we conduct subsequent experiments based on scheduled sampling. | [2, 2, 2, 1, 1, 1, 2] | ['We build two baseline systems using CRF and Bi-LSTM, respectively.', 'The hand-crafted discrete features of CRF refer to those in Ferguson et al.(2015).', 'For the Bi-LSTM model, the token embedding is the same with our transition-based method.', 'Table 4 shows the result of our model on both the development and test sets.', 'Beam search improves the F-score form 87.1% to 87.5%, which is consistent with the finding of Buckman et al.(2016) on the LSTM parser of (Dyer et al., 2015) (improvements by about 0.3 point).', 'Scheduled sampling achieves the same improvements compared to beam-search.', 'Because of high training speed, we conduct subsequent experiments based on scheduled sampling.'] | [['CRF', 'Bi-LSTM'], None, ['Bi-LSTM'], None, ['greedy + beam', 'Test', 'F1'], ['greedy + scheduled', 'Test', 'F1'], ['greedy + scheduled', 'F1']] | 1 |
D17-1296table_6 | Test result of our transition-based model using DPS files for training. | 2 | [['Method', 'Our'], ['Method', 'Bi-LSTM'], ['Method', 'M3N * (Qian and Liu, 2013)'], ['Method', 'CRF']] | 1 | [['P'], ['R'], ['F1']] | [['93.1', '83.5', '88.1'], ['92.4', '82', '86.9'], ['90.6', '78.7', '84.2'], ['91.8', '77.2', '83.9']] | column | ['P', 'R', 'F1'] | ['Our'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || Our</td> <td>93.1</td> <td>83.5</td> <td>88.1</td> </tr> <tr> <td>Method || Bi-LSTM</td> <td>92.4</td> <td>82</td> <td>86.9</td> </tr> <tr> <td>Method || M3N * (Qian and Liu, 2013)</td> <td>90.6</td> <td>78.7</td> <td>84.2</td> </tr> <tr> <td>Method || CRF</td> <td>91.8</td> <td>77.2</td> <td>83.9</td> </tr> </tbody></table> | Table 6 | table_6 | D17-1296 | 7 | emnlp2017 | As described in section 3.1, to directly compare with the transition-based parsing methods (Honnibal and Johnson, 2014; Wu et al., 2015), we only use MRG files, which are less than the DPS files. In fact, many methods, such as Qian and Liu (2013), have used all the DPS files as training data. We are curious about the performance of our system using all the DPS files. Following the experimental settings of Johnson and Charniak (2004), the corpus is split as follows: main training consisting of all sw[23]*.dps files, development training consisting of all sw4[5-9]*.dps files and test training consisting of all sw4[0-1]*.mrg files. Table 6 shows the result on the DPS files. The result of M3N∗ comes from our experiments with the toolkit4 released by Qian and Liu (2013), which use the same data set and pre-processing. Our model achieves a 88.1% F-score by using more training data, obtaining 0.6 point improvement compared with the system training on MRG files. The performance is far better than the sequence labeling methods that use DPS files for training. | [2, 2, 1, 2, 1, 2, 1, 1] | ['As described in section 3.1, to directly compare with the transition-based parsing methods (Honnibal and Johnson, 2014; Wu et al., 2015), we only use MRG files, which are less than the DPS files.', 'In fact, many methods, such as Qian and Liu (2013), have used all the DPS files as training data.', 'We are curious about the performance of our system using all the DPS files.', 'Following the experimental settings of Johnson and Charniak (2004), the corpus is split as follows: main training consisting of all sw[23]*.dps files, development training consisting of all sw4[5-9]*.dps files and test training consisting of all sw4[0-1]*.mrg files.', 'Table 6 shows the result on the DPS files.', 'The result of M3N∗ comes from our experiments with the toolkit4 released by Qian and Liu (2013), which use the same data set and pre-processing.', 'Our model achieves a 88.1% F-score by using more training data, obtaining 0.6 point improvement compared with the system training on MRG files.', 'The performance is far better than the sequence labeling methods that use DPS files for training.'] | [None, None, ['Our'], None, None, ['M3N*'], ['Our'], ['Our']] | 1 |
D17-1296table_7 | performance on Chinese annotated data The result of M3N∗ comes from our experiments with the toolkit4 released by Qian and Liu (2013), which use the same data set and pre-processing. Our model achieves a 88.1% F-score by using more training data, obtaining 0.6 point improvement compared with the system training on MRG files. The performance is far better than the sequence labeling methods that use DPS files for training. | 2 | [['Method', 'Our'], ['Method', 'Bi-LSTM'], ['Method', 'CRF']] | 2 | [['Dev', 'P'], ['Dev', 'R'], ['Dev', 'F1'], ['Test', 'P'], ['Test', 'R'], ['Test', 'F1']] | [['68.9', '40.4', '50.9', '77.2', '37.7', '50.6'], ['60.1', '41.3', '48.9', '65.3', '38.2', '48.2'], ['73.7', '33.5', '46.1', '77.7', '32', '45.3']] | column | ['P', 'R', 'F1', 'P', 'R', 'F1'] | ['Our', 'F1'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || P</th> <th>Dev || R</th> <th>Dev || F1</th> <th>Test || P</th> <th>Test || R</th> <th>Test || F1</th> </tr> </thead> <tbody> <tr> <td>Method || Our</td> <td>68.9</td> <td>40.4</td> <td>50.9</td> <td>77.2</td> <td>37.7</td> <td>50.6</td> </tr> <tr> <td>Method || Bi-LSTM</td> <td>60.1</td> <td>41.3</td> <td>48.9</td> <td>65.3</td> <td>38.2</td> <td>48.2</td> </tr> <tr> <td>Method || CRF</td> <td>73.7</td> <td>33.5</td> <td>46.1</td> <td>77.7</td> <td>32</td> <td>45.3</td> </tr> </tbody></table> | Table 7 | table_7 | D17-1296 | 7 | emnlp2017 | Table 7 shows the results of Chinese disfluency detection. Our model obtains a 2.4 point improvement compared with the baseline Bi-LSTM model and a 5.3 point compared with the baseline CRF model. The performance on Chinese is much lower than that on English. Apart from the smaller training set, the main reason is that the proportion of repair type disflueny is much higher. | [1, 1, 2, 2] | ['Table 7 shows the results of Chinese disfluency detection.', 'Our model obtains a 2.4 point improvement compared with the baseline Bi-LSTM model and a 5.3 point compared with the baseline CRF model.', 'The performance on Chinese is much lower than that on English.', 'Apart from the smaller training set, the main reason is that the proportion of repair type disflueny is much higher.'] | [None, ['Our', 'Test', 'F1', 'Bi-LSTM', 'CRF'], None, None] | 1 |
D17-1297table_6 | Error-type performance before and after re-ranking on the FCE test set (largest impact highlighted in bold; bottom part of the table displays negative effects on performance). | 2 | [['Type', 'M:ADV'], ['Type', 'M:VERB'], ['Type', 'R:NOUN:NUM'], ['Type', 'R:NOUN:POSS'], ['Type', 'R:OTHER'], ['Type', 'R:PRON'], ['Type', 'R:VERB:FORM'], ['Type', 'R:VERB:SVA'], ['Type', 'R:VERB:TENSE'], ['Type', 'U:ADV'], ['Type', 'U:DET'], ['Type', 'U:NOUN'], ['Type', 'U:PREP'], ['Type', 'U:PRON'], ['Type', 'U:PUNCT'], ['Type', 'U:VERB:TENSE'], ['Type', 'M:PREP'], ['Type', 'M:VERB:FORM'], ['Type', 'R:ADJ'], ['Type', 'R:CONTR'], ['Type', 'R:WO']] | 2 | [['CAMB16SMT', 'F0.5'], ['CAMB16SMT + LSTMcamb', 'F0.5']] | [['25', '31.25'], ['25.42', '29.85'], ['56.6', '62.5'], ['35.71', '55.56'], ['34.99', '38.75'], ['26.88', '33.33'], ['53.62', '58.06'], ['58.38', '69.4'], ['31.94', '36.29'], ['13.51', '22.73'], ['46.27', '55.3'], ['10.1', '15.72'], ['47.62', '53.4'], ['30.77', '39.33'], ['51.22', '58.38'], ['28.41', '41.67'], ['43.69', '39.43'], ['50', '38.46'], ['45.45', '37.67'], ['50', '27.78'], ['53.63', '48.74']] | column | ['F0.5', 'F0.5'] | ['R:NOUN:POSS', 'R:VERB:SVA', 'U:ADV', 'U:DET', 'U:VERB:TENSE', 'R:CONTR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CAMB16SMT || F0.5</th> <th>CAMB16SMT + LSTMcamb || F0.5</th> </tr> </thead> <tbody> <tr> <td>Type || M:ADV</td> <td>25</td> <td>31.25</td> </tr> <tr> <td>Type || M:VERB</td> <td>25.42</td> <td>29.85</td> </tr> <tr> <td>Type || R:NOUN:NUM</td> <td>56.6</td> <td>62.5</td> </tr> <tr> <td>Type || R:NOUN:POSS</td> <td>35.71</td> <td>55.56</td> </tr> <tr> <td>Type || R:OTHER</td> <td>34.99</td> <td>38.75</td> </tr> <tr> <td>Type || R:PRON</td> <td>26.88</td> <td>33.33</td> </tr> <tr> <td>Type || R:VERB:FORM</td> <td>53.62</td> <td>58.06</td> </tr> <tr> <td>Type || R:VERB:SVA</td> <td>58.38</td> <td>69.4</td> </tr> <tr> <td>Type || R:VERB:TENSE</td> <td>31.94</td> <td>36.29</td> </tr> <tr> <td>Type || U:ADV</td> <td>13.51</td> <td>22.73</td> </tr> <tr> <td>Type || U:DET</td> <td>46.27</td> <td>55.3</td> </tr> <tr> <td>Type || U:NOUN</td> <td>10.1</td> <td>15.72</td> </tr> <tr> <td>Type || U:PREP</td> <td>47.62</td> <td>53.4</td> </tr> <tr> <td>Type || U:PRON</td> <td>30.77</td> <td>39.33</td> </tr> <tr> <td>Type || U:PUNCT</td> <td>51.22</td> <td>58.38</td> </tr> <tr> <td>Type || U:VERB:TENSE</td> <td>28.41</td> <td>41.67</td> </tr> <tr> <td>Type || M:PREP</td> <td>43.69</td> <td>39.43</td> </tr> <tr> <td>Type || M:VERB:FORM</td> <td>50</td> <td>38.46</td> </tr> <tr> <td>Type || R:ADJ</td> <td>45.45</td> <td>37.67</td> </tr> <tr> <td>Type || R:CONTR</td> <td>50</td> <td>27.78</td> </tr> <tr> <td>Type || R:WO</td> <td>53.63</td> <td>48.74</td> </tr> </tbody></table> | Table 6 | table_6 | D17-1297 | 9 | emnlp2017 | Table 6 presents the performance for a subset of error types that are affected the most re-ranking CAMB16SMT on before and after the FCE test set. The error types are interpreted as follows: Missing error; Replace error; Unnecessary error. The largest improvement is observed in replacement errors referring to possessive nouns (R:NOUN:POSS) and verb agreement (R:VERB:SVA); and in unnecessary errors referring to adverbs (U:ADV), determiners (U:DET), pronouns (U:PRON), and verb tense (U:VERB:TENSE). | [1, 2, 1] | ['Table 6 presents the performance for a subset of error types that are affected the most re-ranking CAMB16SMT on before and after the FCE test set.', 'The error types are interpreted as follows: Missing error; Replace error; Unnecessary error.', 'The largest improvement is observed in replacement errors referring to possessive nouns (R:NOUN:POSS) and verb agreement (R:VERB:SVA); and in unnecessary errors referring to adverbs (U:ADV), determiners (U:DET), pronouns (U:PRON), and verb tense (U:VERB:TENSE).'] | [['CAMB16SMT', 'CAMB16SMT + LSTMcamb'], None, ['R:NOUN:POSS', 'R:VERB:SVA', 'U:ADV', 'U:DET', 'U:VERB:TENSE', 'R:CONTR', 'CAMB16SMT + LSTMcamb', 'F0.5']] | 1
D17-1298table_1 | AESW development/test set correction results. GLEU and M 2 differences on test are statistically significant via paired bootstrap resampling (Koehn, 2004; Graham et al., 2014) at the 0.05 level, resampling the full set 50 times. | 2 | [['Model', 'No Change'], ['Model', 'SMT-DIFFS+M2'], ['Model', 'SMT-DIFFS+BLEU'], ['Model', 'WORD+BI-DIFFS'], ['Model', 'CHAR+BI-DIFFS'], ['Model', 'SMT+BLEU'], ['Model', 'WORD+BI'], ['Model', 'CHARCNN'], ['Model', 'CHAR+BI'], ['Model', 'WORD+DOM'], ['Model', 'WORD+BI+DOM'], ['Model', 'CHARCNN+BI+DOM'], ['Model', 'CHARCNN+DOM'], ['Model', 'CHAR+BI+DOM']] | 2 | [['GLUE', 'Dev'], ['GLUE', 'Test'], ['M2', 'Dev'], ['M2', 'Test']] | [['89.68', '89.45', '0', '0'], ['90.44', '-', '38.55', '-'], ['90.9', '-', '37.66', '-'], ['91.18', '-', '38.88', '-'], ['91.28', '-', '40.11', '-'], ['90.95', '90.7', '38.99', '38.31'], ['91.34', '91.05', '43.61', '42.78'], ['91.23', '90.96', '42.02', '41.21'], ['91.46', '91.22', '44.67', '44.62'], ['91.25', '-', '43.12', '-'], ['91.45', '-', '44.33', '-'], ['91.15', '-', '40.79', '-'], ['91.35', '-', '43.94', '-'], ['91.64', '91.39', '47.25', '46.72']] | column | ['correlation', 'correlation', 'correlation', 'correlation'] | ['CHAR+BI+DOM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>GLUE || Dev</th> <th>GLUE || Test</th> <th>M2 || Dev</th> <th>M2 || Test</th> </tr> </thead> <tbody> <tr> <td>Model || No Change</td> <td>89.68</td> <td>89.45</td> <td>0</td> <td>0</td> </tr> <tr> <td>Model || SMT-DIFFS+M2</td> <td>90.44</td> <td>-</td> <td>38.55</td> <td>-</td> </tr> <tr> <td>Model || SMT-DIFFS+BLEU</td> <td>90.9</td> <td>-</td> <td>37.66</td> <td>-</td> </tr> <tr> <td>Model || WORD+BI-DIFFS</td> <td>91.18</td> <td>-</td> <td>38.88</td> <td>-</td> </tr> <tr> <td>Model || CHAR+BI-DIFFS</td> <td>91.28</td> <td>-</td> <td>40.11</td> <td>-</td> </tr> <tr> <td>Model || SMT+BLEU</td> <td>90.95</td> <td>90.7</td> <td>38.99</td> <td>38.31</td> </tr> <tr> <td>Model || WORD+BI</td> <td>91.34</td> <td>91.05</td> <td>43.61</td> <td>42.78</td> </tr> <tr> <td>Model || CHARCNN</td> <td>91.23</td> <td>90.96</td> <td>42.02</td> <td>41.21</td> </tr> <tr> <td>Model || CHAR+BI</td> <td>91.46</td> <td>91.22</td> <td>44.67</td> <td>44.62</td> </tr> <tr> <td>Model || WORD+DOM</td> <td>91.25</td> <td>-</td> <td>43.12</td> <td>-</td> </tr> <tr> <td>Model || WORD+BI+DOM</td> <td>91.45</td> <td>-</td> <td>44.33</td> <td>-</td> </tr> <tr> <td>Model || CHARCNN+BI+DOM</td> <td>91.15</td> <td>-</td> <td>40.79</td> <td>-</td> </tr> <tr> <td>Model || CHARCNN+DOM</td> <td>91.35</td> <td>-</td> <td>43.94</td> <td>-</td> </tr> <tr> <td>Model || CHAR+BI+DOM</td> <td>91.64</td> <td>91.39</td> <td>47.25</td> <td>46.72</td> </tr> </tbody></table> | Table 1 | table_1 | D17-1298 | 3 | emnlp2017 | Table 1 shows the full set of experimental results on the AESW development and test data. The CHAR+BI+DOM model is stronger than the WORD+BI+DOM and CHARCNN+DOM models by 2.9 M2 (0.2 GLEU) and 3.3 M2 (0.3 GLEU), respectively. The sequence-to-sequence models were also more effective than the SMT models, as shown in Table 1. We find that training with target diffs is beneficial across all models, with an increase of about 5 M2 points for the WORD+BI model, for example. Adding +DOM information slightly improves effectiveness across models. | [1, 1, 1, 1, 1] | ['Table 1 shows the full set of experimental results on the AESW development and test data.', 'The CHAR+BI+DOM model is stronger than the WORD+BI+DOM and CHARCNN+DOM models by 2.9 M2 (0.2 GLEU) and 3.3 M2 (0.3 GLEU), respectively.', 'The sequence-to-sequence models were also more effective than the SMT models, as shown in Table 1.', 'We find that training with target diffs is beneficial across all models, with an increase of about 5 M2 points for the WORD+BI model, for example.', 'Adding +DOM information slightly improves effectiveness across models.'] | [None, ['CHAR+BI+DOM', 'WORD+BI+DOM', 'CHARCNN+DOM', 'Dev'], None, ['WORD+BI-DIFFS', 'WORD+BI', 'Dev'], ['CHAR+BI+DOM']] | 1
D17-1299table_1 | Sentence-level formality quantifying evaluation (Spearman’s rho) among different models with different vector spaces. | 1 | [['SimDiff'], ['SVM'], ['PCA'], ['DENSIFIER'], ['baseline']] | 1 | [['LSA'], ['W2V']] | [['0.66', '0.654'], ['0.657', '0.585'], ['0.656', '0.663'], ['0.664', '0.644'], ['0.54', '0.54']] | column | ['rho', 'rho'] | ['W2V'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LSA</th> <th>W2V</th> </tr> </thead> <tbody> <tr> <td>SimDiff</td> <td>0.66</td> <td>0.654</td> </tr> <tr> <td>SVM</td> <td>0.657</td> <td>0.585</td> </tr> <tr> <td>PCA</td> <td>0.656</td> <td>0.663</td> </tr> <tr> <td>DENSIFIER</td> <td>0.664</td> <td>0.644</td> </tr> <tr> <td>baseline</td> <td>0.54</td> <td>0.54</td> </tr> </tbody></table> | Table 1 | table_1 | D17-1299 | 3 | emnlp2017 | Table 1 shows that all models based on the vector space achieve similar performance in terms of Spearman's ρ (except SVM-W2V which yields lower performance). The baseline method based on unigram models was outperformed by 0.1+ point. So we select DENSIFIER-LSA as a representative for our FSMT system. | [1, 1, 1] | ["Table 1 shows that all models based on the vector space achieve similar performance in terms of Spearman's ρ (except SVM-W2V which yields lower performance).", 'The baseline method based on unigram models was outperformed by 0.1+ point.', 'So we select DENSIFIER-LSA as a representative for our FSMT system.'] | [['SVM', 'W2V'], ['baseline'], ['DENSIFIER', 'LSA']] | 1 |
D17-1301table_1 | Evaluation of translation quality. “Init” denotes Initialization of encoder (“enc”), decoder (“dec”), or both (“enc+dec”), and “Auxi” denotes Auxiliary Context. “†” indicates statistically significant difference (P < 0.01) from the baseline NEMATUS. | 2 | [['System', 'MOSES'], ['System', 'NEMATUS'], ['System', '+Initenc'], ['System', '+Initdec'], ['System', '+Initenc+dec'], ['System', '+Auxi'], ['System', '+Gating Auxi'], ['System', '+Initenc+dec+Gating Auxi']] | 1 | [['MT05'], ['MT06'], ['MT08'], ['Ave.'], ['delta']] | [['33.08', '32.69', '23.78', '28.24', '–'], ['34.35', '35.75', '25.39', '30.57', '–'], ['36.05', '36.44†', '26.65†', '31.55', '+0.98'], ['36.27', '36.69†', '27.11†', '31.90', '+1.33'], ['36.34', '36.82†', '27.18†', '32.00', '+1.43'], ['35.26', '36.47†', '26.12†', '31.30', '+0.73'], ['36.64', '37.63†', '26.85†', '32.24', '+1.67'], ['36.89', '37.76†', '27.57†', '32.67', '+2.10']] | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['+Initenc+dec+Gating Auxi'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT05</th> <th>MT06</th> <th>MT08</th> <th>Ave.</th> <th>delta</th> </tr> </thead> <tbody> <tr> <td>System || MOSES</td> <td>33.08</td> <td>32.69</td> <td>23.78</td> <td>28.24</td> <td>–</td> </tr> <tr> <td>System || NEMATUS</td> <td>34.35</td> <td>35.75</td> <td>25.39</td> <td>30.57</td> <td>–</td> </tr> <tr> <td>System || +Initenc</td> <td>36.05</td> <td>36.44†</td> <td>26.65†</td> <td>31.55</td> <td>+0.98</td> </tr> <tr> <td>System || +Initdec</td> <td>36.27</td> <td>36.69†</td> <td>27.11†</td> <td>31.90</td> <td>+1.33</td> </tr> <tr> <td>System || +Initenc+dec</td> <td>36.34</td> <td>36.82†</td> <td>27.18†</td> <td>32.00</td> <td>+1.43</td> </tr> <tr> <td>System || +Auxi</td> <td>35.26</td> <td>36.47†</td> <td>26.12†</td> <td>31.30</td> <td>+0.73</td> </tr> <tr> <td>System || +Gating Auxi</td> <td>36.64</td> <td>37.63†</td> <td>26.85†</td> <td>32.24</td> <td>+1.67</td> </tr> <tr> <td>System || +Initenc+dec+Gating Auxi</td> <td>36.89</td> <td>37.76†</td> <td>27.57†</td> <td>32.67</td> <td>+2.10</td> </tr> </tbody></table> | Table 1 | table_1 | D17-1301 | 4 | emnlp2017 | Table 1 shows the translation performance in terms of BLEU score. Clearly, the proposed approaches significantly outperforms baseline in all cases. | [1, 1] | ['Table 1 shows the translation performance in terms of BLEU score.', 'Clearly, the proposed approaches significantly outperforms baseline in all cases.'] | [None, ['+Initenc+dec+Gating Auxi']] | 1 |
D17-1309table_6 | Preordering results for English → Japanese. FRS (in [0, 100]) is the fuzzy reordering score (Talbot et al., 2011). | 2 | [['Model', 'Nakagawa (2015)'], ['Model', 'Small FF'], ['Model', 'Small FF + POS tags'], ['Model', 'Small FF + Tagger input fts.']] | 1 | [['FRS'], ['Size']] | [['81.6', '-'], ['75.2', '0.5MB'], ['81.3', '1.3MB'], ['76.6', '3.7MB']] | column | ['FRS', 'Size'] | ['Small FF + POS tags'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>FRS</th> <th>Size</th> </tr> </thead> <tbody> <tr> <td>Model || Nakagawa (2015)</td> <td>81.6</td> <td>-</td> </tr> <tr> <td>Model || Small FF</td> <td>75.2</td> <td>0.5MB</td> </tr> <tr> <td>Model || Small FF + POS tags</td> <td>81.3</td> <td>1.3MB</td> </tr> <tr> <td>Model || Small FF + Tagger input fts.</td> <td>76.6</td> <td>3.7MB</td> </tr> </tbody></table> | Table 6 | table_6 | D17-1309 | 5 | emnlp2017 | Table 6 shows results with or without using the predicted POS tags in the preorderer, as well as including the features used by the tagger in the reorderer directly and only training the downstream task. The preorderer that includes a separate network for POS tagging and then extracts features over the predicted tags is more accurate and smaller than the model that includes all the features that contribute to a POS tag in the reorderer directly. This pipeline processes 7k tokens/second when taking pretokenized text as input, with the POS tagger accounting for 23% of the computation time. | [1, 1, 2] | ['Table 6 shows results with or without using the predicted POS tags in the preorderer, as well as including the features used by the tagger in the reorderer directly and only training the downstream task.', 'The preorderer that includes a separate network for POS tagging and then extracts features over the predicted tags is more accurate and smaller than the model that includes all the features that contribute to a POS tag in the reorderer directly.', 'This pipeline processes 7k tokens/second when taking pretokenized text as input, with the POS tagger accounting for 23% of the computation time.'] | [['Small FF + POS tags', 'Small FF + Tagger input fts.'], ['Small FF + POS tags', 'FRS'], ['Small FF + POS tags']] | 1 |
D17-1317table_5 | Model performance on the Politifact validation set. | 1 | [['Majority Baseline'], ['Naive Bayes'], ['MaxEnt'], ['LSTM']] | 2 | [['2-CLASS', 'text'], ['2-CLASS', '+LIWC'], ['6-CLASS', 'text'], ['6-CLASS', '+LIWC']] | [['0.39', '-', '0.6', '-'], ['0.44', '0.58', '0.16', '0.21'], ['0.55', '0.58', '0.2', '0.21'], ['0.58', '0.57', '0.21', '0.22']] | column | ['F1', 'F1', 'F1', 'F1'] | ['LSTM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>2-CLASS || text</th> <th>2-CLASS || +LIWC</th> <th>6-CLASS || text</th> <th>6-CLASS || +LIWC</th> </tr> </thead> <tbody> <tr> <td>Majority Baseline</td> <td>0.39</td> <td>-</td> <td>0.6</td> <td>-</td> </tr> <tr> <td>Naive Bayes</td> <td>0.44</td> <td>0.58</td> <td>0.16</td> <td>0.21</td> </tr> <tr> <td>MaxEnt</td> <td>0.55</td> <td>0.58</td> <td>0.2</td> <td>0.21</td> </tr> <tr> <td>LSTM</td> <td>0.58</td> <td>0.57</td> <td>0.21</td> <td>0.22</td> </tr> </tbody></table> | Table 5 | table_5 | D17-1317 | 5 | emnlp2017 | Table 5 summarizes the performance on the development set. We report macro averaged F1 score in all tables. The LSTM outperforms the other models when only using text as input; however the other two models improve substantially with adding LIWC features, particularly in the case of the multinomial naive Bayes model. In contrast, the LIWC features do not improve the neural model much, indicating that some of this lexical information is perhaps redundant to what the model was already learning from text. | [1, 2, 1, 1] | ['Table 5 summarizes the performance on the development set.', 'We report macro averaged F1 score in all tables.', 'The LSTM outperforms the other models when only using text as input; however the other two models improve substantially with adding LIWC features, particularly in the case of the multinomial naive Bayes model.', 'In contrast, the LIWC features do not improve the neural model much, indicating that some of this lexical information is perhaps redundant to what the model was already learning from text.'] | [None, None, ['LSTM', '2-CLASS', 'text', '+LIWC', 'Naive Bayes'], ['+LIWC', 'text']] | 1 |
D17-1318table_1 | Topics evaluation: accuracy in word intrusion task. The table reports the accuracy values on the first 4 and 8 key concepts in the clusters. | 2 | [['Method', 'Vanilla-LDA'], ['Method', 'Key concept-LDA'], ['Method', 'Graph-based Clusters'], ['Method', 'k-means Clusters'], ['Method', 'Key concept Clusters']] | 1 | [['Acc.@4'], ['Acc.@8']] | [['0.22', '0.35'], ['0.29', '0.36'], ['0.46', '0.44'], ['0.72', '0.67'], ['0.86', '0.67']] | column | ['Acc.@4', 'Acc.@8'] | ['Key concept Clusters'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.@4</th> <th>Acc.@8</th> </tr> </thead> <tbody> <tr> <td>Method || Vanilla-LDA</td> <td>0.22</td> <td>0.35</td> </tr> <tr> <td>Method || Key concept-LDA</td> <td>0.29</td> <td>0.36</td> </tr> <tr> <td>Method || Graph-based Clusters</td> <td>0.46</td> <td>0.44</td> </tr> <tr> <td>Method || k-means Clusters</td> <td>0.72</td> <td>0.67</td> </tr> <tr> <td>Method || Key concept Clusters</td> <td>0.86</td> <td>0.67</td> </tr> </tbody></table> | Table 1 | table_1 | D17-1318 | 4 | emnlp2017 | As shown in Table 1, our system outperforms the other methods with an accuracy of 0.86 in the word-intrusion task with four key concepts in each cluster, while it decreases to 0.67 if we extend the evaluation to include eight key concepts. | [1] | ['As shown in Table 1, our system outperforms the other methods with an accuracy of 0.86 in the word-intrusion task with four key concepts in each cluster, while it decreases to 0.67 if we extend the evaluation to include eight key concepts.'] | [['Key concept Clusters', 'Acc.@4', 'Acc.@8']] | 1 |
D17-1323table_2 | Number of violated constraints, mean amplified bias, and test performance before and after calibration using RBA. The test performances of vSRL and MLC are measured by top-1 semantic role accuracy and top-1 mean average precision, respectively. | 3 | [['Method', 'vSRL: Development Set', 'CRF'], ['Method', 'vSRL: Development Set', 'CRF + RBA']] | 1 | [['Viol.'], ['Amp. bias'], ['Perf.(%)']] | [['154', '0.05', '24.07'], ['107', '0.024', '23.97']] | column | ['Viol.', 'Amp. bias', 'Perf.(%)'] | ['CRF + RBA'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Viol.</th> <th>Amp. bias</th> <th>Perf.(%)</th> </tr> </thead> <tbody> <tr> <td>Method || vSRL: Development Set || CRF</td> <td>154</td> <td>0.05</td> <td>24.07</td> </tr> <tr> <td>Method || vSRL: Development Set || CRF + RBA</td> <td>107</td> <td>0.024</td> <td>23.97</td> </tr> </tbody></table> | Table 2 | table_2 | D17-1323 | 9 | emnlp2017 | Our quantitative results are summarized in the Table 2. On the development set, the number of verbs whose bias exceed the original bias by over 5% decreases 30.5% (Viol.). Overall, we are able to significantly reduce bias amplification in vSRL by 52% on the development set (Amp. bias). | [1, 1, 1] | ['Our quantitative results are summarized in the Table 2.', 'On the development set, the number of verbs whose bias exceed the original bias by over 5% decreases 30.5% (Viol.).', 'Overall, we are able to significantly reduce bias amplification in vSRL by 52% on the development set (Amp. bias).'] | [None, ['CRF + RBA', 'Viol.'], ['CRF + RBA', 'Amp. bias']] | 1 |
D18-1001table_3 | Results on the test sets of the Trustpilot dataset, +DEMO setting. Main is the accuracy on sentiment analysis. Priv. is the privacy measure (i.e. the inverse accuracy of the attacker: higher is better, see Section 4.2). The baselines are most-frequent class classifiers. The values reported for the defense methods indicate absolute differences with the standard training regime (no defense implemented) for both metrics. | 2 | [['Corpus', 'Germany'], ['Corpus', 'baseline'], ['Corpus', 'Denmark'], ['Corpus', 'baseline'], ['Corpus', 'France'], ['Corpus', 'baseline'], ['Corpus', 'UK'], ['Corpus', 'baseline'], ['Corpus', 'US'], ['Corpus', 'baseline']] | 3 | [['base model', 'Standard', 'Main'], ['base model', 'Standard', 'Priv.'], ['defense method', 'M-Detask.', 'Main'], ['defense method', 'M-Detask.', 'Priv.'], ['defense method', 'A-Gener.', 'Main'], ['defense method', 'A-Gener.', 'Priv.'], ['defense method', 'Decl. alpha = 0.1', 'Main'], ['defense method', 'Decl. alpha = 0.1', 'Priv.']] | [['85.1', '32.2', '-0.6', '-0.3', '-1.3', '0.6', '-0.8', '1.9'], ['78.6', '36.9', '', '', '', '', '', ''], ['82.6', '28.1', '-0.2', '4.4', '-0.1', '6', '-0.3', '7.6'], ['70.4', '40', '', '', '', '', '', ''], ['75.1', '41.1', '-0.8', '0.7', '-1.4', '-6.4', '-1.5', '-18.2'], ['69.2', '44.4', '', '', '', '', '', ''], ['87', '39.3', '-0.5', '0.9', '-0.2', '0.2', '-0.1', '0.3'], ['77.1', '42.2', '', '', '', '', '', ''], ['85', '33.9', '-0.1', '2.6', '-0.2', '1.8', '0.7', '2.2'], ['79.4', '36.4', '', '', '', '', '', '']] | column | ['Main', 'Priv.', 'Main', 'Priv.', 'Main', 'Priv.', 'Main', 'Priv.'] | ['defense method'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>base model || Standard || Main</th> <th>base model || Standard || Priv.</th> <th>defense method || M-Detask. || Main</th> <th>defense method || M-Detask. || Priv.</th> <th>defense method || A-Gener. || Main</th> <th>defense method || A-Gener. || Priv.</th> <th>defense method || Decl. alpha = 0.1 || Main</th> <th>defense method || Decl. alpha = 0.1 || Priv.</th> </tr> </thead> <tbody> <tr> <td>Corpus || Germany</td> <td>85.1</td> <td>32.2</td> <td>-0.6</td> <td>-0.3</td> <td>-1.3</td> <td>0.6</td> <td>-0.8</td> <td>1.9</td> </tr> <tr> <td>Corpus || baseline</td> <td>78.6</td> <td>36.9</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Corpus || Denmark</td> <td>82.6</td> <td>28.1</td> <td>-0.2</td> <td>4.4</td> <td>-0.1</td> <td>6</td> <td>-0.3</td> <td>7.6</td> </tr> <tr> <td>Corpus || baseline</td> <td>70.4</td> <td>40</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Corpus || France</td> <td>75.1</td> <td>41.1</td> <td>-0.8</td> <td>0.7</td> <td>-1.4</td> <td>-6.4</td> <td>-1.5</td> <td>-18.2</td> </tr> <tr> <td>Corpus || baseline</td> <td>69.2</td> <td>44.4</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Corpus || UK</td> <td>87</td> <td>39.3</td> <td>-0.5</td> <td>0.9</td> <td>-0.2</td> <td>0.2</td> <td>-0.1</td> <td>0.3</td> </tr> <tr> <td>Corpus || baseline</td> <td>77.1</td> <td>42.2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Corpus || US</td> <td>85</td> <td>33.9</td> <td>-0.1</td> <td>2.6</td> <td>-0.2</td> <td>1.8</td> <td>0.7</td> <td>2.2</td> </tr> <tr> <td>Corpus || baseline</td> <td>79.4</td> <td>36.4</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> </tbody></table> | Table 3 | table_3 | D18-1001 | 7 | emnlp2018 | Effect of defenses. We report results for the main task accuracy and the representation privacy in Table 3 for the +DEMO setting and in Table 4 for the RAW setting. Recall that the privacy measure(Priv.) is computed by 1 - X where X is the average accuracy of the attacker on gender and age predictions. When this privacy metric is higher, it is more difficult to exploit the hidden representation of the network to recover information about x. The ‘Standard’ columns contain the accuracy and privacy of the base model described in Section 2. The next columns present the absolute variation in accuracy and privacy for the three defense methods presented in Section 3: Multidetasking, Adversarial Generation, and Declustering. We also report for each corpus the most frequent class baseline for the main task accuracy, and the privacy of the most frequent class baselines on private variables (i.e. the upper bound for privacy). The three modified training methods designed as defenses have a positive effect on privacy. Despite a model selection based on accuracy, they lead to an improvement in privacy on all datasets, except on the France subcorpus. In most cases,we observe only a small decrease in accuracy,or even an improvement at times (e.g. multidetasking on the Germany dataset, RAW setting), thus improving the tradeoff between the utility and the privacy of the text representations. | [2, 1, 2, 2, 1, 1, 2, 2, 1, 1] | ['Effect of defenses.', 'We report results for the main task accuracy and the representation privacy in Table 3 for the +DEMO setting and in Table 4 for the RAW setting.', 'Recall that the privacy measure(Priv.) is computed by 1 - X where X is the average accuracy of the attacker on gender and age predictions.', 'When this privacy metric is higher, it is more difficult to exploit the hidden representation of the network to recover information about x.', 'The ‘Standard’ columns contain the accuracy and privacy of the base model described in Section 2.', 'The next columns present the absolute variation in accuracy and privacy for the three defense methods presented in Section 3: Multidetasking, Adversarial Generation, and Declustering.', 'We also report for each corpus the most frequent class baseline for the main task accuracy, and the privacy of the most frequent class baselines on private variables (i.e. the upper bound for privacy).', 'The three modified training methods designed as defenses have a positive effect on privacy.', 'Despite a model selection based on accuracy, they lead to an improvement in privacy on all datasets, except on the France subcorpus.', 'In most cases,we observe only a small decrease in accuracy,or even an improvement at times (e.g. multidetasking on the Germany dataset, RAW setting), thus improving the tradeoff between the utility and the privacy of the text representations.'] | [None, None, ['Priv.'], ['Priv.'], ['Standard', 'Main', 'Priv.'], ['M-Detask.', 'A-Gener.', 'Decl. alpha = 0.1', 'Main', 'Priv.'], None, ['M-Detask.', 'A-Gener.', 'Decl. alpha = 0.1', 'Priv.'], ['M-Detask.', 'A-Gener.', 'Decl. alpha = 0.1', 'Priv.', 'Germany', 'Denmark', 'UK', 'US'], ['M-Detask.', 'A-Gener.', 'Decl. alpha = 0.1', 'Main']] | 1
D18-1001table_4 | Results on the test sets of the Trustpilot dataset, RAW setting. See Section 4.2 and caption of Table 3 for details about the metrics. | 2 | [['Corpus', 'Germany'], ['Corpus', 'baseline'], ['Corpus', 'Denmark'], ['Corpus', 'baseline'], ['Corpus', 'France'], ['Corpus', 'baseline'], ['Corpus', 'UK'], ['Corpus', 'baseline'], ['Corpus', 'US'], ['Corpus', 'baseline']] | 3 | [['base model', 'Standard', 'Main'], ['base model', 'Standard', 'Priv.'], ['defense method', 'M-Detask.', 'Main'], ['defense method', 'M-Detask.', 'Priv.'], ['defense method', 'A-Gener.', 'Main'], ['defense method', 'A-Gener.', 'Priv.'], ['defense method', 'Decl. alpha = 0.1', 'Main'], ['defense method', 'Decl. alpha = 0.1', 'Priv.']] | [['85.5', '32.1', '0.3', '0.5', '-0.8', '0.9', '-1.7', '2.2'], ['78.6', '36.9', '', '', '', '', '', ''], ['82.3', '37.3', '-0.6', '0.6', '-0.1', '-0.3', '-0.2', '-0.1'], ['70.4', '40', '', '', '', '', '', ''], ['72.7', '40.6', '1.8', '-0.1', '1.9', '-0.4', '-0.3', '-0.1'], ['69.2', '44.4', '', '', '', '', '', ''], ['86.9', '40.1', '-0.2', '1', '0', '1.2', '0', '0'], ['77.1', '42.2', '', '', '', '', '', ''], ['84.5', '36.1', '-1.1', '0.2', '0.5', '0.1', '0.3', '0.5'], ['79.4', '36.4', '', '', '', '', '', '']] | column | ['Main', 'Priv.', 'Main', 'Priv.', 'Main', 'Priv.', 'Main', 'Priv.'] | ['defense method'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>base model || Standard || Main</th> <th>base model || Standard || Priv.</th> <th>defense method || M-Detask. || Main</th> <th>defense method || M-Detask. || Priv.</th> <th>defense method || A-Gener. || Main</th> <th>defense method || A-Gener. || Priv.</th> <th>defense method || Decl. alpha = 0.1 || Main</th> <th>defense method || Decl. alpha = 0.1 || Priv.</th> </tr> </thead> <tbody> <tr> <td>Corpus || Germany</td> <td>85.5</td> <td>32.1</td> <td>0.3</td> <td>0.5</td> <td>-0.8</td> <td>0.9</td> <td>-1.7</td> <td>2.2</td> </tr> <tr> <td>Corpus || baseline</td> <td>78.6</td> <td>36.9</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Corpus || Denmark</td> <td>82.3</td> <td>37.3</td> <td>-0.6</td> <td>0.6</td> <td>-0.1</td> <td>-0.3</td> <td>-0.2</td> <td>-0.1</td> </tr> <tr> <td>Corpus || baseline</td> <td>70.4</td> <td>40</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Corpus || France</td> <td>72.7</td> <td>40.6</td> <td>1.8</td> <td>-0.1</td> <td>1.9</td> <td>-0.4</td> <td>-0.3</td> <td>-0.1</td> </tr> <tr> <td>Corpus || baseline</td> <td>69.2</td> <td>44.4</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Corpus || UK</td> <td>86.9</td> <td>40.1</td> <td>-0.2</td> <td>1</td> <td>0</td> <td>1.2</td> <td>0</td> <td>0</td> </tr> <tr> <td>Corpus || baseline</td> <td>77.1</td> <td>42.2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Corpus || US</td> <td>84.5</td> <td>36.1</td> <td>-1.1</td> <td>0.2</td> <td>0.5</td> <td>0.1</td> <td>0.3</td> <td>0.5</td> </tr> <tr> <td>Corpus || baseline</td> <td>79.4</td> <td>36.4</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> </tbody></table> | Table 4 | table_4 | D18-1001 | 7 | emnlp2018 | Effect of defenses. We report results for the main task accuracy and the representation privacy in Table 3 for the +DEMO setting and in Table 4 for the RAW setting. Recall that the privacy measure(Priv.) is computed by 1 - X where X is the average accuracy of the attacker on gender and age predictions. When this privacy metric is higher, it is more difficult to exploit the hidden representation of the network to recover information about x. The ‘Standard’ columns contain the accuracy and privacy of the base model described in Section 2. The next columns present the absolute variation in accuracy and privacy for the three defense methods presented in Section 3: Multidetasking, Adversarial Generation, and Declustering. We also report for each corpus the most frequent class baseline for the main task accuracy, and the privacy of the most frequent class baselines on private variables (i.e. the upper bound for privacy). The three modified training methods designed as defenses have a positive effect on privacy. Despite a model selection based on accuracy, they lead to an improvement in privacy on all datasets, except on the France subcorpus. In most cases,we observe only a small decrease in accuracy,or even an improvement at times (e.g. multidetasking on the Germany dataset, RAW setting), thus improving the tradeoff between the utility and the privacy of the text representations. | [2, 1, 2, 2, 1, 1, 2, 2, 1, 1] | ['Effect of defenses.', 'We report results for the main task accuracy and the representation privacy in Table 3 for the +DEMO setting and in Table 4 for the RAW setting.', 'Recall that the privacy measure(Priv.) is computed by 1 - X where X is the average accuracy of the attacker on gender and age predictions.', 'When this privacy metric is higher, it is more difficult to exploit the hidden representation of the network to recover information about x.', 'The ‘Standard’ columns contain the accuracy and privacy of the base model described in Section 2.', 'The next columns present the absolute variation in accuracy and privacy for the three defense methods presented in Section 3: Multidetasking, Adversarial Generation, and Declustering.', 'We also report for each corpus the most frequent class baseline for the main task accuracy, and the privacy of the most frequent class baselines on private variables (i.e. the upper bound for privacy).', 'The three modified training methods designed as defenses have a positive effect on privacy.', 'Despite a model selection based on accuracy, they lead to an improvement in privacy on all datasets, except on the France subcorpus.', 'In most cases,we observe only a small decrease in accuracy,or even an improvement at times (e.g. multidetasking on the Germany dataset, RAW setting), thus improving the tradeoff between the utility and the privacy of the text representations.'] | [None, None, ['Priv.'], ['Priv.'], ['Standard', 'Main', 'Priv.'], ['M-Detask.', 'A-Gener.', 'Decl. alpha = 0.1', 'Main', 'Priv.'], None, ['M-Detask.', 'A-Gener.', 'Decl. alpha = 0.1', 'Priv.'], ['M-Detask.', 'A-Gener.', 'Decl. alpha = 0.1', 'Priv.', 'Germany', 'Denmark', 'UK', 'US'], ['M-Detask.', 'A-Gener.', 'Decl. alpha = 0.1', 'Main']] | 1
D18-1002table_4 | Results of different adversarial configurations. Sentiment/Mention: main task accuracy. Race/Gender/Age: protected attribute recovery difference from 50% rate by the attacker (values below 50% are as informative as those above it). ∆: the difference between the attacker score and the corresponding adversary’s accuracy. The bold numbers are the best oblivious classifiers within each configuration. | 2 | [['Method', 'No Adversary Baseline'], ['Method', 'Standard Adversary'], ['Method', 'Adv-Capacity'], ['Method', 'Adv-Capacity'], ['Method', 'Adv-Capacity'], ['Method', 'Adv-Capacity'], ['Method', 'Adv-Capacity'], ['Method', 'lambda'], ['Method', 'lambda'], ['Method', 'lambda'], ['Method', 'lambda'], ['Method', 'lambda'], ['Method', 'Ensemble'], ['Method', 'Ensemble'], ['Method', 'Ensemble']] | 2 | [['Parameter', '-'], ['DIAL', 'Sentiment'], ['DIAL', 'Race'], ['DIAL', 'delta'], ['PAN16', 'Mention'], ['PAN16', 'Gender'], ['PAN16', 'delta'], ['PAN16', 'Mention'], ['PAN16', 'Age'], ['PAN16', 'delta']] | [['67.4', '14.5', '-', '77.5', '10.1', '-', '74.7', '9.4', '-'], ['64.7', '6.0', '5.0', '75.6', '8.5', '8.0', '72.5', '7.3', '6.9'], ['64.1', '6.7', '5.2', '73.8', '8.1', '6.7', '71.4', '4.3', '4.1'], ['63.4', '7.1', '4.9', '75.2', '8.9', '7.0', '71.6', '6.3', '4.0'], ['65.2', '8.1', '6.9', '76.1', '6.7', '6.4', '71.9', '6.0', '5.7'], ['63.9', '6.2', '3.7', '74.5', '5.6', '1.6', '73.0', '10.2', '9.6'], ['65.0', '7.1', '4.8', '75.7', '5.4', '4.2', '71.9', '9.8', '7.3'], ['63.9', '6.8', '6.2', '75.6', '7.8', '6.8', '73.1', '4.8', '3.4'], ['64.9', '7.4', '5.4', '75.6', '4.9', '2.4', '72.5', '6.8', '5.8'], ['64.2', '7.3', '5.9', '76.0', '-7.2', '6.7', '72.1', '8.5', '7.7'], ['65.8', '10.2', '10.1', '73.7', '6.4', '6.1', '72.5', '-6.3', '5.2'], ['50.0', '-', '-', '73.6', '6.5', '5.7', '69.0', '3.2', '2.9'], ['62.4', '7.4', '5.4', '74.8', '6.4', '5.0', '72.8', '8.8', '8.3'], ['66.5', '6.5', '5.0', '75.3', '4.9', '3.1', '72.1', '6.7', '6.0'], ['63.8', '4.8', '2.6', '74.3', '4.1', '3.0', '70.1', '5.7', '5.4']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Method'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Parameter || -</th> <th>DIAL || Sentiment</th> <th>DIAL || Race</th> <th>DIAL || delta</th> <th>PAN16 || Mention</th> <th>PAN16 || Gender</th> <th>PAN16 || delta</th> <th>PAN16 || Mention</th> <th>PAN16 || Age</th> <th>PAN16 || delta</th> </tr> </thead> <tbody> <tr> <td>Method || No Adversary Baseline</td> <td>-</td> <td>67.4</td> <td>14.5</td> <td>-</td> <td>77.5</td> <td>10.1</td> <td>-</td> <td>74.7</td> <td>9.4</td> <td>-</td> </tr> <tr> <td>Method || Standard Adversary</td> <td>(300/1.0/1)</td> <td>64.7</td> <td>6.0</td> <td>5.0</td> <td>75.6</td> <td>8.5</td> <td>8.0</td> <td>72.5</td> <td>7.3</td> <td>6.9</td> </tr> <tr> <td>Method || Adv-Capacity</td> <td>500</td> <td>64.1</td> <td>6.7</td> <td>5.2</td> <td>73.8</td> <td>8.1</td> <td>6.7</td> <td>71.4</td> <td>4.3</td> <td>4.1</td> </tr> <tr> <td>Method || Adv-Capacity</td> <td>1000</td> <td>63.4</td> <td>7.1</td> <td>4.9</td> <td>75.2</td> <td>8.9</td> <td>7.0</td> <td>71.6</td> <td>6.3</td> <td>4.0</td> </tr> <tr> <td>Method || Adv-Capacity</td> <td>2000</td> <td>65.2</td> <td>8.1</td> <td>6.9</td> <td>76.1</td> <td>6.7</td> <td>6.4</td> <td>71.9</td> <td>6.0</td> <td>5.7</td> </tr> <tr> <td>Method || Adv-Capacity</td> <td>5000</td> <td>63.9</td> <td>6.2</td> <td>3.7</td> <td>74.5</td> <td>5.6</td> <td>1.6</td> <td>73.0</td> <td>10.2</td> <td>9.6</td> </tr> <tr> <td>Method || Adv-Capacity</td> <td>8000</td> <td>65.0</td> <td>7.1</td> <td>4.8</td> <td>75.7</td> <td>5.4</td> <td>4.2</td> <td>71.9</td> <td>9.8</td> <td>7.3</td> </tr> <tr> <td>Method || lambda</td> <td>0.5</td> <td>63.9</td> <td>6.8</td> <td>6.2</td> <td>75.6</td> <td>7.8</td> <td>6.8</td> <td>73.1</td> <td>4.8</td> <td>3.4</td> </tr> <tr> <td>Method || lambda</td> <td>1.5</td> <td>64.9</td> <td>7.4</td> <td>5.4</td> <td>75.6</td> <td>4.9</td> <td>2.4</td> <td>72.5</td> <td>6.8</td> <td>5.8</td> </tr> <tr> <td>Method || lambda</td> <td>2.0</td> <td>64.2</td> <td>7.3</td> <td>5.9</td> <td>76.0</td> <td>-7.2</td> <td>6.7</td> <td>72.1</td> <td>8.5</td> <td>7.7</td> </tr> <tr> <td>Method || lambda</td> <td>3.0</td> <td>65.8</td> <td>10.2</td> <td>10.1</td> <td>73.7</td> <td>6.4</td> <td>6.1</td> <td>72.5</td> <td>-6.3</td> <td>5.2</td> </tr> <tr> <td>Method || lambda</td> <td>5.0</td> <td>50.0</td> <td>-</td> <td>-</td> <td>73.6</td> <td>6.5</td> <td>5.7</td> <td>69.0</td> <td>3.2</td> <td>2.9</td> </tr> <tr> <td>Method || Ensemble</td> <td>2</td> <td>62.4</td> <td>7.4</td> <td>5.4</td> <td>74.8</td> <td>6.4</td> <td>5.0</td> <td>72.8</td> <td>8.8</td> <td>8.3</td> </tr> <tr> <td>Method || Ensemble</td> <td>3</td> <td>66.5</td> <td>6.5</td> <td>5.0</td> <td>75.3</td> <td>4.9</td> <td>3.1</td> <td>72.1</td> <td>6.7</td> <td>6.0</td> </tr> <tr> <td>Method || Ensemble</td> <td>5</td> <td>63.8</td> <td>4.8</td> <td>2.6</td> <td>74.3</td> <td>4.1</td> <td>3.0</td> <td>70.1</td> <td>5.7</td> <td>5.4</td> </tr> </tbody></table> | Table 4 | table_4 | D18-1002 | 7 | emnlp2018 | All methods are effective to some extent, Table 4 summarizes the results. Increasing the capacity of the adversarial network helped reduce the protected attribute’s leakage, though different capacities work best on each setup. On the Sentiment/Race task, none of the higher dimensional adversaries worked better than the 300-dim one, on the PAN16 dataset it did. On PAN16/Gender the 8000-dim adversary performed best, and on PAN16/Age, the 500-dim one. Increasing the weight of the adversary through the lambda parameter also has a positive effect on the result (except on the Sentiment/Race pair). However, too large lambda values make training unstable, and require many more epochs for the main-task to stabilize around a satisfying accuracy. The adversarial ensemble method with 2 adversaries achieves 57.4% on Sentiment/Race, as opposed to 56.0% with a single one, but when using 5 different adversaries, we achieve 54.8%. On the PAN16 dataset larger ensembles are more effective. | [1, 1, 1, 1, 1, 2, 1, 1] | ['All methods are effective to some extent, Table 4 summarizes the results.', 'Increasing the capacity of the adversarial network helped reduce the protected attribute’s leakage, though different capacities work best on each setup.', 'On the Sentiment/Race task, none of the higher dimensional adversaries worked better than the 300-dim one, on the PAN16 dataset it did.', 'On PAN16/Gender the 8000-dim adversary performed best, and on PAN16/Age, the 500-dim one.', 'Increasing the weight of the adversary through the lambda parameter also has a positive effect on the result (except on the Sentiment/Race pair).', 'However, too large lambda values make training unstable, and require many more epochs for the main-task to stabilize around a satisfying accuracy.', 'The adversarial ensemble method with 2 adversaries achieves 57.4% on Sentiment/Race, as opposed to 56.0% with a single one, but when using 5 different adversaries, we achieve 54.8%.', 'On the PAN16 dataset larger ensembles are more effective.'] | [None, ['Adv-Capacity'], ['Adv-Capacity', 'Standard Adversary', 'Sentiment', 'Race', 'PAN16'], ['PAN16', 'Gender', 'Age'], ['lambda', 'Standard Adversary', 'PAN16'], ['lambda'], ['Ensemble', 'Standard Adversary', 'Sentiment', 'Race'], ['Ensemble', 'PAN16']] | 1
D18-1006table_1 | Results on the prediction task (test set). | 1 | [['ProLocal'], ['QRN'], ['EntNet'], ['ProGlobal'], ['ProStruct']] | 1 | [['Precision'], ['Recall'], ['F1']] | [['77.4', '22.9', '35.3'], ['55.5', '31.3', '40.0'], ['50.2', '33.5', '40.2'], ['46.7', '52.4', '49.4'], ['74.2', '42.1', '53.7']] | column | ['Precision', 'Recall', 'F1'] | ['ProStruct'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>ProLocal</td> <td>77.4</td> <td>22.9</td> <td>35.3</td> </tr> <tr> <td>QRN</td> <td>55.5</td> <td>31.3</td> <td>40.0</td> </tr> <tr> <td>EntNet</td> <td>50.2</td> <td>33.5</td> <td>40.2</td> </tr> <tr> <td>ProGlobal</td> <td>46.7</td> <td>52.4</td> <td>49.4</td> </tr> <tr> <td>ProStruct</td> <td>74.2</td> <td>42.1</td> <td>53.7</td> </tr> </tbody></table> | Table 1 | table_1 | D18-1006 | 8 | emnlp2018 | 7.1 Comparison with Baselines. We compare our model (which make use of world knowledge) with the four baseline systems on the ProPara dataset. Table 1 shows the precision, recall, and F1 for all models on the the test partition. PROSTRUCT significantly outperforms the baselines, suggesting that world knowledge helps ProStruct avoid spurious predictions. This hypothesis is supported by the fact that the ProGlobal model has the highest recall and worst precision, indicating that it is over-generating state change predictions. Conversely, the ProLocal model has the highest precision, but its recall is much lower, likely because it makes predictions for individual sentences, and thus has no access to information in surrounding sentences that may suggest a state change is occurring. | [2, 2, 1, 1, 1, 1] | ['7.1 Comparison with Baselines.', 'We compare our model (which make use of world knowledge) with the four baseline systems on the ProPara dataset.', 'Table 1 shows the precision, recall, and F1 for all models on the the test partition.', 'PROSTRUCT significantly outperforms the baselines, suggesting that world knowledge helps ProStruct avoid spurious predictions.', 'This hypothesis is supported by the fact that the ProGlobal model has the highest recall and worst precision, indicating that it is over-generating state change predictions.', 'Conversely, the ProLocal model has the highest precision, but its recall is much lower, likely because it makes predictions for individual sentences, and thus has no access to information in surrounding sentences that may suggest a state change is occurring.'] | [None, ['ProStruct', 'ProLocal', 'QRN', 'EntNet', 'ProGlobal'], ['ProStruct', 'ProLocal', 'QRN', 'EntNet', 'ProGlobal', 'Precision', 'Recall', 'F1'], ['ProStruct', 'ProLocal', 'QRN', 'EntNet', 'ProGlobal'], ['ProGlobal', 'Recall', 'Precision'], ['ProLocal', 'Precision', 'Recall']] | 1 |
D18-1010table_2 | Wikipage retrieval evaluation on dev. “rate”: claim proportion, e.g., x%, if its gold passages are fully retrieved (for “SUPPORT” and “REFUTE” only); “acc ceiling”: x%·(#S+#R)+#N , the upper bound of accuracy for three classes if the coverage x% satisfies. | 2 | [['k', '1'], ['k', '5'], ['k', '10'], ['k', '25'], ['k', '50'], ['k', '100']] | 2 | [['(Thorne et al. 2018)', 'rate'], ['(Thorne et al. 2018)', 'acc ceiling'], ['ours', 'rate'], ['ours', 'acc ceiling']] | [['25.31', '50.21', '76.58', '84.38'], ['55.3', '70.2', '89.63', '93.08'], ['65.86', '77.24', '91.19', '94.12'], ['75.92', '83.95', '92.81', '95.2'], ['82.49', '90.13', '93.36', '95.57'], ['86.59', '91.06', '94.19', '96.12']] | column | ['rate', 'acc ceiling', 'rate', 'acc ceiling'] | ['ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>(Thorne et al. 2018) || rate</th> <th>(Thorne et al. 2018) || acc ceiling</th> <th>ours || rate</th> <th>ours || acc ceiling</th> </tr> </thead> <tbody> <tr> <td>k || 1</td> <td>25.31</td> <td>50.21</td> <td>76.58</td> <td>84.38</td> </tr> <tr> <td>k || 5</td> <td>55.3</td> <td>70.2</td> <td>89.63</td> <td>93.08</td> </tr> <tr> <td>k || 10</td> <td>65.86</td> <td>77.24</td> <td>91.19</td> <td>94.12</td> </tr> <tr> <td>k || 25</td> <td>75.92</td> <td>83.95</td> <td>92.81</td> <td>95.2</td> </tr> <tr> <td>k || 50</td> <td>82.49</td> <td>90.13</td> <td>93.36</td> <td>95.57</td> </tr> <tr> <td>k || 100</td> <td>86.59</td> <td>91.06</td> <td>94.19</td> <td>96.12</td> </tr> </tbody></table> | Table 2 | table_2 | D18-1010 | 7 | emnlp2018 | Performance of passage retrieval. Table 2 compares our wikipage retriever with the one in (Thorne et al., 2018), which used a document retriever from DrQA (Chen et al., 2017). Our document retrieval module surpasses the competitor by a big margin in terms of the coverage of gold passages: 89.63% vs. 55.30% (k = 5 in all experiments). Its powerfulness should be attributed to:. (i) Entity mention detection in the claims. (ii) As wiki titles are entities, we have a bi-channel way to match the claim with the wiki page: one with the title, the other with the page body, as shown in Algorithm 1. | [2, 1, 1, 2, 2, 2] | ['Performance of passage retrieval.', 'Table 2 compares our wikipage retriever with the one in (Thorne et al., 2018), which used a document retriever from DrQA (Chen et al., 2017).', 'Our document retrieval module surpasses the competitor by a big margin in terms of the coverage of gold passages: 89.63% vs. 55.30% (k = 5 in all experiments).', 'Its powerfulness should be attributed to:.', '(i) Entity mention detection in the claims.', '(ii) As wiki titles are entities, we have a bi-channel way to match the claim with the wiki page: one with the title, the other with the page body, as shown in Algorithm 1.'] | [None, ['ours', '(Thorne et al. 2018)'], ['ours', '(Thorne et al. 2018)', 'rate'], None, None, None] | 1 |
D18-1010table_3 | Performance on dev and test of FEVER. TWOWINGOS outperforms prior systems if vanilla CNN parameters are shared by evidence identification and claim verification subsystems. It gains more if fine-grained representations are adopted in both subtasks. | 4 | [['system', 'dev', 'MLP', '-'], ['system', 'dev', 'Decomp-Att', '-'], ['system', 'dev', 'TWOWINGOS', 'coarse & coarse'], ['system', 'dev', 'TWOWINGOS', 'pipeline'], ['system', 'dev', 'TWOWINGOS', 'diff-CNN'], ['system', 'dev', 'TWOWINGOS', 'share-CNN'], ['system', 'dev', 'TWOWINGOS', 'coarse & fine (single)'], ['system', 'dev', 'TWOWINGOS', 'coarse & fine (two)'], ['system', 'dev', 'TWOWINGOS', 'fine & sent-wise'], ['system', 'dev', 'TWOWINGOS', 'fine & coarse'], ['system', 'dev', 'TWOWINGOS', 'fine & fine (two)'], ['system', 'test', '(Thorne et al. 2018)', '-'], ['system', 'test', 'TWOWINGOS', '-']] | 2 | [['claim verification', 'NOSCOREEV'], ['claim verification', 'SCOREEV'], ['evidence identification', 'recall'], ['evidence identification', 'precision'], ['evidence identification', 'F1']] | [['41.86', '19.04', '44.22', '10.44', '16.89'], ['52.09', '32.57', '44.22', '10.44', '16.89'], ['', '', '', '', ''], ['35.72', '22.26', '53.75', '29.42', '33.80'], ['39.22', '21.04', '46.88', '43.01', '44.86'], ['72.32', '50.12', '45.55', '40.77', '43.03'], ['75.65', '52.65', '45.81', '42.53', '44.11'], ['78.77', '53.64', '45.78', '39.23', '42.25'], ['71.02', '53.43', '52.70', '48.31', '50.40'], ['71.48', '53.17', '52.75', '47.30', '49.87'], ['78.90', '56.16', '53.81', '47.73', '50.59'], ['50.91', '31.87', '45.89', '10.79', '17.47'], ['75.99', '54.33', '49.91', '44.68', '47.15']] | column | ['NOSCOREEV', 'SCOREEV', 'recall', 'precision', 'F1'] | ['TWOWINGOS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>claim verification || NOSCOREEV</th> <th>claim verification || SCOREEV</th> <th>evidence identification || recall</th> <th>evidence identification || precision</th> <th>evidence identification || F1</th> </tr> </thead> <tbody> <tr> <td>system || dev || MLP || -</td> <td>41.86</td> <td>19.04</td> <td>44.22</td> <td>10.44</td> <td>16.89</td> </tr> <tr> <td>system || dev || Decomp-Att || -</td> <td>52.09</td> <td>32.57</td> <td>44.22</td> <td>10.44</td> <td>16.89</td> </tr> <tr> <td>system || dev || TWOWINGOS || coarse & coarse</td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>system || dev || TWOWINGOS || pipeline</td> <td>35.72</td> <td>22.26</td> <td>53.75</td> <td>29.42</td> <td>33.80</td> </tr> <tr> <td>system || dev || TWOWINGOS || diff-CNN</td> <td>39.22</td> <td>21.04</td> <td>46.88</td> <td>43.01</td> <td>44.86</td> </tr> <tr> <td>system || dev || TWOWINGOS || share-CNN</td> <td>72.32</td> <td>50.12</td> <td>45.55</td> <td>40.77</td> <td>43.03</td> </tr> <tr> <td>system || dev || TWOWINGOS || coarse & fine (single)</td> <td>75.65</td> <td>52.65</td> <td>45.81</td> <td>42.53</td> <td>44.11</td> </tr> <tr> <td>system || dev || TWOWINGOS || coarse & fine (two)</td> <td>78.77</td> <td>53.64</td> <td>45.78</td> <td>39.23</td> <td>42.25</td> </tr> <tr> <td>system || dev || TWOWINGOS || fine & sent-wise</td> <td>71.02</td> <td>53.43</td> <td>52.70</td> <td>48.31</td> <td>50.40</td> </tr> <tr> <td>system || dev || TWOWINGOS || fine & coarse</td> <td>71.48</td> <td>53.17</td> <td>52.75</td> <td>47.30</td> <td>49.87</td> </tr> <tr> <td>system || dev || TWOWINGOS || fine & fine (two)</td> <td>78.90</td> <td>56.16</td> <td>53.81</td> <td>47.73</td> <td>50.59</td> </tr> 
<tr> <td>system || test || (Thorne et al. 2018) || -</td> <td>50.91</td> <td>31.87</td> <td>45.89</td> <td>10.79</td> <td>17.47</td> </tr> <tr> <td>system || test || TWOWINGOS || -</td> <td>75.99</td> <td>54.33</td> <td>49.91</td> <td>44.68</td> <td>47.15</td> </tr> </tbody></table> | Table 3 | table_3 | D18-1010 | 8 | emnlp2018 | Performance on FEVER. Table 3 lists the performances of baselines and the TWOWINGOS variants on FEVER (dev&test). From the dev block, we observe that:. TWOWINGOS (from "share-CNN") surpasses prior systems in big margins. Overall,fine-grained schemes in each subtask contribute more than the coarse-grained counterparts;. In the three setups - "pipeline", "diff-CNN" and "share-CNN" - of coarse-coarse, "pipeline" gets better scores than (Thorne et al., 2018) in terms of evidence identification. "Share-CNN" has comparable F1 as "diff-CNN" while gaining a lot on NOSCOREEV (72.32 vs. 39.22) and SCOREEV (50.12 vs. 21.04). This clearly shows that the claim verification gains much knowledge transferred from the evidence identification module. Both "diff-CNN" and "share-CNN" perform better than "pipeline" (except for the slight inferiority at SCOREEV: 21.04 vs. 22.26). Two-channel fine-grained representations show more effective than the single-channel counterpart in claim verification (NOSCOREEV: 78.77 vs. 75.65, SCOREEV: 53.64 vs. 52.65). As we expected, evidence sentences should collaborate in inferring the truth value of the claims. Two-channel setup enables an evidence candidate aware of other candidates as well as the claim. In the last three rows of dev, there is no clear difference among their evidence identification scores. Recall that "sent-wise" is essentially an ensemble system over each (sentence, claim) entailment result. "Coarse-grained", instead, first sums up all sentence representation, then performs (Σ(sentence), claim) reasoning. We can also treat this "sum up" as an ensemble. Their comparison shows that these two kinds of tricks do not make much difference. If we adopt "two-channel fine-grained representation" in claim verification, big improvements are observed in both NOSCOREEV (+7.42%) and SCOREEV (+3%). In the test block, our system (fine & fine (two)) beats the prior top system across all measurements by big margins F1: 47.15 vs. 17.47; SCOREEV: 54.33 vs. 31.87; NOSCOREEV: 75.99 vs. 50.91. In both dev and test blocks, we can observe that our evidence identification module consistently obtains balanced recall and precision. In contrast, the pipeline system by Thorne et al. (2018) has much higher recall than precision (45.89 vs. 10.79). It is worth mentioning that the SCOREEV metric is highly influenced by the recall value, since SCOREEV is computed on the claim instances whose evidences are fully retrieved, regardless of the precision. So, ideally, a system can set all sentences as evidence, so that SCOREEV can be promoted to be equal to NOSCOREEV. 
| [2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2, 2, 1, 2, 0, 0, 0, 2, 1, 1, 1, 2, 2] | ['Performance on FEVER.', 'Table 3 lists the performances of baselines and the TWOWINGOS variants on FEVER (dev&test).', 'From the dev block, we observe that:.', 'TWOWINGOS (from "share-CNN") surpasses prior systems in big margins.', 'Overall,fine-grained schemes in each subtask contribute more than the coarse-grained counterparts;.', 'In the three setups - "pipeline", "diff-CNN" and "share-CNN" - of coarse-coarse, "pipeline" gets better scores than (Thorne et al., 2018) in terms of evidence identification.', '"Share-CNN" has comparable F1 as "diff-CNN" while gaining a lot on NOSCOREEV (72.32 vs. 39.22) and SCOREEV (50.12 vs. 21.04).', 'This clearly shows that the claim verification gains much knowledge transferred from the evidence identification module.', 'Both "diff-CNN" and "share-CNN" perform better than "pipeline" (except for the slight inferiority at SCOREEV: 21.04 vs. 22.26).', 'Two-channel fine-grained representations show more effective than the single-channel counterpart in claim verification (NOSCOREEV: 78.77 vs. 75.65, SCOREEV: 53.64 vs. 52.65).', 'As we expected, evidence sentences should collaborate in inferring the truth value of the claims.', 'Two-channel setup enables an evidence candidate aware of other candidates as well as the claim.', 'In the last three rows of dev, there is no clear difference among their evidence identification scores.', 'Recall that "sent-wise" is essentially an ensemble system over each (sentence, claim) entailment result.', '"Coarse-grained", instead, first sums up all sentence representation, then performs (Σ(sentence), claim) reasoning.', 'We can also treat this "sum up" as an ensemble.', 'Their comparison shows that these two kinds of tricks do not make much difference.', 'If we adopt "two-channel fine-grained representation" in claim verification, big improvements are observed in both NOSCOREEV (+7.42%) and SCOREEV (+3%).', 'In the test block, our system (fine & fine (two)) beats the prior top system across all measurements by big margins F1: 47.15 vs. 17.47; SCOREEV: 54.33 vs. 31.87; NOSCOREEV: 75.99 vs. 50.91.', 'In both dev and test blocks, we can observe that our evidence identification module consistently obtains balanced recall and precision.', ' In contrast, the pipeline system by Thorne et al. (2018) has much higher recall than precision (45.89 vs. 10.79).', 'It is worth mentioning that the SCOREEV metric is highly influenced by the recall value, since SCOREEV is computed on the claim instances whose evidences are fully retrieved, regardless of the precision.', 'So, ideally, a system can set all sentences as evidence, so that SCOREEV can be promoted to be equal to NOSCOREEV.'] | [None, ['TWOWINGOS', 'MLP', 'Decomp-Att'], None, ['share-CNN', 'MLP', 'Decomp-Att'], None, ['pipeline', 'diff-CNN', 'share-CNN', '(Thorne et al. 2018)', 'evidence identification'], ['share-CNN', 'diff-CNN', 'NOSCOREEV', 'SCOREEV', 'F1'], ['claim verification', 'evidence identification'], ['pipeline', 'diff-CNN', 'share-CNN'], ['coarse & fine (two)', 'coarse & fine (single)', 'claim verification', 'NOSCOREEV', 'SCOREEV'], None, ['coarse & fine (two)', 'fine & fine (two)'], ['fine & sent-wise', 'fine & coarse', 'fine & fine (two)', 'evidence identification'], ['fine & sent-wise'], None, None, None, ['claim verification', 'SCOREEV', 'NOSCOREEV'], ['test', '(Thorne et al. 
2018)', 'TWOWINGOS', 'fine & fine (two)', 'F1', 'SCOREEV', 'NOSCOREEV'], ['dev', 'test', 'evidence identification', 'TWOWINGOS', 'recall', 'precision'], ['(Thorne et al. 2018)', 'recall', 'precision'], ['SCOREEV', 'recall', 'precision'], ['SCOREEV', 'NOSCOREEV']] | 1 |
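The record above quotes precision, recall and F1 for evidence identification and notes that a SCOREEV-style score only counts claims whose gold evidence is fully retrieved. A minimal sketch of how those quantities are commonly computed; the container names (gold, pred) are illustrative and are not the dataset's or the paper's own tooling:

    # Illustrative only: evidence-identification metrics as discussed in the record above.
    # `gold` and `pred` map a claim id to a set of evidence sentence ids (hypothetical format).
    def evidence_metrics(gold, pred):
        tp = sum(len(gold[c] & pred.get(c, set())) for c in gold)
        n_pred = sum(len(s) for s in pred.values())
        n_gold = sum(len(s) for s in gold.values())
        precision = tp / n_pred if n_pred else 0.0
        recall = tp / n_gold if n_gold else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        # Only claims whose gold evidence is fully retrieved would count toward a
        # SCOREEV-style score, which is why that metric is driven by recall.
        fully_retrieved = [c for c in gold if gold[c] <= pred.get(c, set())]
        return precision, recall, f1, fully_retrieved

Predicting every sentence as evidence maximises the fully-retrieved set at the cost of precision, which is the gaming concern the quoted passage raises.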
D18-1013table_2 | Performance on the COCO Karpathy test split. Symbols, ∗ and †, are defined similarly. Our model outperforms the current state-of-the-art Up-Down substantially in terms of SPICE. | 2 | [['COCO', 'HardAtt (Xu et al. 2015)'], ['COCO', 'ATT-FCN (You et al. 2016)'], ['COCO', 'SCA-CNN (Chen et al. 2017)'], ['COCO', 'LSTM-A (Yao et al. 2017)'], ['COCO', 'SCN-LSTM (Gan et al. 2017)'], ['COCO', 'Skeleton (Wang et al. 2017)'], ['COCO', 'AdaAtt (Lu et al. 2017)'], ['COCO', 'NBT (Lu et al. 2018)'], ['COCO', 'DRL (Ren et al. 2017b)'], ['COCO', 'TD-M-ATT (Chen et al. 2018)'], ['COCO', 'SCST (Rennie et al. 2017)'], ['COCO', 'SR-PL (Liu et al. 2018)'], ['COCO', 'Up-Down (Anderson et al. 2018)'], ['COCO', 'simNet']] | 1 | [['SPICE'], ['CIDEr'], ['METEOR'], ['ROUGE-L'], ['BLEU-4']] | [['-', '-', '0.230', '-', '0.250'], ['-', '-', '0.243', '-', '0.304'], ['-', '0.952', '0.250', '0.531', '0.311'], ['0.186', '1.002', '0.254', '0.540', '0.326'], ['-', '1.012', '0.257', '-', '0.330'], ['-', '1.069', '0.268', '0.552', '0.336'], ['0.195', '1.085', '0.266', '0.549', '0.332'], ['0.201', '1.072', '0.271', '-', '0.347'], ['-', '0.937', '0.251', '0.525', '0.304'], ['-', '1.116', '0.268', '0.555', '0.336'], ['-', '1.140', '0.267', '0.557', '0.342'], ['0.210', '1.171', '0.274', '0.570', '0.358'], ['0.214', '1.201', '0.277', '0.569', '0.363'], ['0.220', '1.135', '0.283', '0.564', '0.332']] | column | ['SPICE', 'CIDEr', 'METEOR', 'ROUGE-L', 'BLEU-4'] | ['simNet'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SPICE</th> <th>CIDEr</th> <th>METEOR</th> <th>ROUGE-L</th> <th>BLEU-4</th> </tr> </thead> <tbody> <tr> <td>COCO || HardAtt (Xu et al. 2015)</td> <td>-</td> <td>-</td> <td>0.230</td> <td>-</td> <td>0.250</td> </tr> <tr> <td>COCO || ATT-FCN (You et al. 2016)</td> <td>-</td> <td>-</td> <td>0.243</td> <td>-</td> <td>0.304</td> </tr> <tr> <td>COCO || SCA-CNN (Chen et al. 2017)</td> <td>-</td> <td>0.952</td> <td>0.250</td> <td>0.531</td> <td>0.311</td> </tr> <tr> <td>COCO || LSTM-A (Yao et al. 2017)</td> <td>0.186</td> <td>1.002</td> <td>0.254</td> <td>0.540</td> <td>0.326</td> </tr> <tr> <td>COCO || SCN-LSTM (Gan et al. 2017)</td> <td>-</td> <td>1.012</td> <td>0.257</td> <td>-</td> <td>0.330</td> </tr> <tr> <td>COCO || Skeleton (Wang et al. 2017)</td> <td>-</td> <td>1.069</td> <td>0.268</td> <td>0.552</td> <td>0.336</td> </tr> <tr> <td>COCO || AdaAtt (Lu et al. 2017)</td> <td>0.195</td> <td>1.085</td> <td>0.266</td> <td>0.549</td> <td>0.332</td> </tr> <tr> <td>COCO || NBT (Lu et al. 2018)</td> <td>0.201</td> <td>1.072</td> <td>0.271</td> <td>-</td> <td>0.347</td> </tr> <tr> <td>COCO || DRL (Ren et al. 2017b)</td> <td>-</td> <td>0.937</td> <td>0.251</td> <td>0.525</td> <td>0.304</td> </tr> <tr> <td>COCO || TD-M-ATT (Chen et al. 2018)</td> <td>-</td> <td>1.116</td> <td>0.268</td> <td>0.555</td> <td>0.336</td> </tr> <tr> <td>COCO || SCST (Rennie et al. 2017)</td> <td>-</td> <td>1.140</td> <td>0.267</td> <td>0.557</td> <td>0.342</td> </tr> <tr> <td>COCO || SR-PL (Liu et al. 2018)</td> <td>0.210</td> <td>1.171</td> <td>0.274</td> <td>0.570</td> <td>0.358</td> </tr> <tr> <td>COCO || Up-Down (Anderson et al. 2018)</td> <td>0.214</td> <td>1.201</td> <td>0.277</td> <td>0.569</td> <td>0.363</td> </tr> <tr> <td>COCO || simNet</td> <td>0.220</td> <td>1.135</td> <td>0.283</td> <td>0.564</td> <td>0.332</td> </tr> </tbody></table> | Table 2 | table_2 | D18-1013 | 6 | emnlp2018 | Table 2 shows the results on COCO. 
Among the directly comparable models, our model is arguably the best and outperforms the existing models except in terms of BLEU-4. Most encouragingly, our model is also competitive with Up-Down (Ander-son et al. 2018), which uses much larger dataset, Visual Genome (Krishna et al. 2017), with dense annotations to train the object detector, and directly optimizes CIDEr. Especially, our model outperforms the state-of-the-art substantially in SPICE and METEOR. | [1, 1, 2, 1] | ['Table 2 shows the results on COCO.', 'Among the directly comparable models, our model is arguably the best and outperforms the existing models except in terms of BLEU-4.', 'Most encouragingly, our model is also competitive with Up-Down (Ander-son et al. 2018), which uses much larger dataset, Visual Genome (Krishna et al. 2017), with dense annotations to train the object detector, and directly optimizes CIDEr.', 'Especially, our model outperforms the state-of-the-art substantially in SPICE and METEOR.'] | [['COCO'], ['simNet', 'HardAtt (Xu et al. 2015)', 'ATT-FCN (You et al. 2016)', 'SCA-CNN (Chen et al. 2017)', 'LSTM-A (Yao et al. 2017)', 'SCN-LSTM (Gan et al. 2017)', 'Skeleton (Wang et al. 2017)', 'AdaAtt (Lu et al. 2017)', 'NBT (Lu et al. 2018)', 'DRL (Ren et al. 2017b)', 'TD-M-ATT (Chen et al. 2018)', 'SCST (Rennie et al. 2017)', 'BLEU-4'], ['simNet', 'Up-Down (Anderson et al. 2018)'], ['simNet', 'Up-Down (Anderson et al. 2018)', 'SPICE', 'METEOR']] | 1 |
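Each record stores the same table twice, as flattened header/content lists and as HTML in the table_html_clean field. A small standard-library sketch of recovering rows of cell text from that HTML; the class and function names are made up for illustration:

    # Illustrative only: pull rows of cell text out of a record's table_html_clean string.
    from html.parser import HTMLParser

    class TableCells(HTMLParser):
        def __init__(self):
            super().__init__()
            self.rows, self._row, self._cell = [], [], None
        def handle_starttag(self, tag, attrs):
            if tag == "tr":
                self._row = []
            elif tag in ("td", "th"):
                self._cell = []
        def handle_endtag(self, tag):
            if tag in ("td", "th") and self._cell is not None:
                self._row.append("".join(self._cell).strip())
                self._cell = None
            elif tag == "tr" and self._row:
                self.rows.append(self._row)
        def handle_data(self, data):
            if self._cell is not None:
                self._cell.append(data)

    def table_rows(table_html_clean):
        parser = TableCells()
        parser.feed(table_html_clean)
        return parser.rows  # e.g. [['', 'SPICE', 'CIDEr', ...], ['COCO || simNet', '0.220', ...]]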
D18-1013table_6 | Performance on the online COCO evaluation server. The SPICE metric is unavailable for our model, thus not reported. c5 means evaluating against 5 references, and c40 means evaluating against 40 references. The symbol ∗ denotes directly optimizing CIDEr. The symbol † denotes model ensemble. The symbol ‡ denotes using extra data for training, thus not directly comparable. Our submission does not use the three aforementioned techniques. Nonetheless, our model is second only to Up-Down and surpasses almost all the other models in published work, especially when 40 references are considered. | 2 | [['COCO', 'HardAtt (Xu et al. 2015)'], ['COCO', 'ATT-FCN (You et al. 2016)'], ['COCO', 'SCA-CNN (Chen et al. 2017)'], ['COCO', 'LSTM-A (Yao et al. 2017)'], ['COCO', 'SCN-LSTM (Gan et al. 2017)'], ['COCO', 'AdaAtt (Lu et al. 2017)'], ['COCO', 'TD-M-ATT (Chen et al. 2018)'], ['COCO', 'SCST (Rennie et al. 2017)'], ['COCO', 'Up-Down (Anderson et al. 2018)'], ['COCO', 'simNet']] | 2 | [['BLEU-1', 'c5'], ['BLEU-1', 'c40'], ['BLEU-2', 'c5'], ['BLEU-2', 'c40'], ['BLEU-3', 'c5'], ['BLEU-3', 'c40'], ['BLEU-4', 'c5'], ['BLEU-4', 'c40'], ['METEOR', 'c5'], ['METEOR', 'c40'], ['ROUGE-L', 'c5'], ['ROUGE-L', 'c40'], ['CIDEr', 'c5'], ['CIDEr', 'c40']] | [['0.705', '0.881', '0.528', '0.779', '0.383', '0.658', '0.277', '0.537', '0.241', '0.322', '0.516', '0.654', '0.865', '0.893'], ['0.731', '0.900', '0.565', '0.815', '0.424', '0.709', '0.316', '0.599', '0.250', '0.335', '0.535', '0.682', '0.943', '0.958'], ['0.712', '0.894', '0.542', '0.802', '0.404', '0.691', '0.302', '0.579', '0.244', '0.331', '0.524', '0.674', '0.912', '0.921'], ['0.739', '0.919', '0.575', '0.842', '0.436', '0.740', '0.330', '0.632', '0.256', '0.350', '0.542', '0.700', '0.984', '1.003'], ['0.740', '0.917', '0.575', '0.839', '0.436', '0.739', '0.331', '0.631', '0.257', '0.348', '0.543', '0.696', '1.003', '1.013'], ['0.748', '0.920', '0.584', '0.845', '0.444', '0.744', '0.336', '0.637', '0.264', '0.359', '0.550', '0.705', '1.042', '1.059'], ['0.757', '0.913', '0.591', '0.836', '0.441', '0.726', '0.324', '0.609', '0.259', '0.342', '0.547', '0.689', '1.059', '1.090'], ['0.781', '0.937', '0.619', '0.860', '0.470', '0.759', '0.352', '0.645', '0.270', '0.355', '0.563', '0.707', '1.147', '1.167'], ['0.802', '0.952', '0.641', '0.888', '0.491', '0.794', '0.369', '0.685', '0.276', '0.367', '0.571', '0.724', '1.179', '1.205'], ['0.766', '0.941', '0.605', '0.874', '0.462', '0.778', '0.350', '0.671', '0.267', '0.362', '0.558', '0.716', '1.087', '1.111']] | column | ['BLEU-1', 'BLEU-1', 'BLEU-2', 'BLEU-2', 'BLEU-3', 'BLEU-3', 'BLEU-4', 'BLEU-4', 'METEOR', 'METEOR', 'ROUGE-L', 'ROUGE-L', 'CIDEr', 'CIDEr'] | ['simNet'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1 || c5</th> <th>BLEU-1 || c40</th> <th>BLEU-2 || c5</th> <th>BLEU-2 || c40</th> <th>BLEU-3 || c5</th> <th>BLEU-3 || c40</th> <th>BLEU-4 || c5</th> <th>BLEU-4 || c40</th> <th>METEOR || c5</th> <th>METEOR || c40</th> <th>ROUGE-L || c5</th> <th>ROUGE-L || c40</th> <th>CIDEr || c5</th> <th>CIDEr || c40</th> </tr> </thead> <tbody> <tr> <td>COCO || HardAtt (Xu et al. 2015)</td> <td>0.705</td> <td>0.881</td> <td>0.528</td> <td>0.779</td> <td>0.383</td> <td>0.658</td> <td>0.277</td> <td>0.537</td> <td>0.241</td> <td>0.322</td> <td>0.516</td> <td>0.654</td> <td>0.865</td> <td>0.893</td> </tr> <tr> <td>COCO || ATT-FCN (You et al. 
2016)</td> <td>0.731</td> <td>0.900</td> <td>0.565</td> <td>0.815</td> <td>0.424</td> <td>0.709</td> <td>0.316</td> <td>0.599</td> <td>0.250</td> <td>0.335</td> <td>0.535</td> <td>0.682</td> <td>0.943</td> <td>0.958</td> </tr> <tr> <td>COCO || SCA-CNN (Chen et al. 2017)</td> <td>0.712</td> <td>0.894</td> <td>0.542</td> <td>0.802</td> <td>0.404</td> <td>0.691</td> <td>0.302</td> <td>0.579</td> <td>0.244</td> <td>0.331</td> <td>0.524</td> <td>0.674</td> <td>0.912</td> <td>0.921</td> </tr> <tr> <td>COCO || LSTM-A (Yao et al. 2017)</td> <td>0.739</td> <td>0.919</td> <td>0.575</td> <td>0.842</td> <td>0.436</td> <td>0.740</td> <td>0.330</td> <td>0.632</td> <td>0.256</td> <td>0.350</td> <td>0.542</td> <td>0.700</td> <td>0.984</td> <td>1.003</td> </tr> <tr> <td>COCO || SCN-LSTM (Gan et al. 2017)</td> <td>0.740</td> <td>0.917</td> <td>0.575</td> <td>0.839</td> <td>0.436</td> <td>0.739</td> <td>0.331</td> <td>0.631</td> <td>0.257</td> <td>0.348</td> <td>0.543</td> <td>0.696</td> <td>1.003</td> <td>1.013</td> </tr> <tr> <td>COCO || AdaAtt (Lu et al. 2017)</td> <td>0.748</td> <td>0.920</td> <td>0.584</td> <td>0.845</td> <td>0.444</td> <td>0.744</td> <td>0.336</td> <td>0.637</td> <td>0.264</td> <td>0.359</td> <td>0.550</td> <td>0.705</td> <td>1.042</td> <td>1.059</td> </tr> <tr> <td>COCO || TD-M-ATT (Chen et al. 2018)</td> <td>0.757</td> <td>0.913</td> <td>0.591</td> <td>0.836</td> <td>0.441</td> <td>0.726</td> <td>0.324</td> <td>0.609</td> <td>0.259</td> <td>0.342</td> <td>0.547</td> <td>0.689</td> <td>1.059</td> <td>1.090</td> </tr> <tr> <td>COCO || SCST (Rennie et al. 2017)</td> <td>0.781</td> <td>0.937</td> <td>0.619</td> <td>0.860</td> <td>0.470</td> <td>0.759</td> <td>0.352</td> <td>0.645</td> <td>0.270</td> <td>0.355</td> <td>0.563</td> <td>0.707</td> <td>1.147</td> <td>1.167</td> </tr> <tr> <td>COCO || Up-Down (Anderson et al. 2018)</td> <td>0.802</td> <td>0.952</td> <td>0.641</td> <td>0.888</td> <td>0.491</td> <td>0.794</td> <td>0.369</td> <td>0.685</td> <td>0.276</td> <td>0.367</td> <td>0.571</td> <td>0.724</td> <td>1.179</td> <td>1.205</td> </tr> <tr> <td>COCO || simNet</td> <td>0.766</td> <td>0.941</td> <td>0.605</td> <td>0.874</td> <td>0.462</td> <td>0.778</td> <td>0.350</td> <td>0.671</td> <td>0.267</td> <td>0.362</td> <td>0.558</td> <td>0.716</td> <td>1.087</td> <td>1.111</td> </tr> </tbody></table> | Table 6 | table_6 | D18-1013 | 13 | emnlp2018 | A Supplementary Material. A.1 Results on COCO Evaluation Server. Table 6 shows the performance on the online COCO evaluation server. We put it in the appendix because the results are incomplete and the SPICE metric is not available for our submission, which correlates the best with human evaluation. The SPICE metrics are only available at the leaderboard on the COCO dataset website, which, unfortunately, has not been updated for more than a year. Our submission does not directly optimize CIDEr, use model ensemble, or use extra training data. The three techniques typically result in orthogonal improvements (Lu et al. 2017; Rennie et al. 2017; Anderson et al. 2018). Moreover, the SPICE results are missing, in which the proposed model has the most advantage. Nonetheless, our model is second only to Up-Down (Anderson et al. 2018) and surpasses almost all the other models in published work, especially when 40 references are considered. 
| [2, 2, 1, 2, 0, 2, 0, 2, 1] | ['A Supplementary Material.', 'A.1 Results on COCO Evaluation Server.', 'Table 6 shows the performance on the online COCO evaluation server.', 'We put it in the appendix because the results are incomplete and the SPICE metric is not available for our submission, which correlates the best with human evaluation.', 'The SPICE metrics are only available at the leaderboard on the COCO dataset website, which, unfortunately, has not been updated for more than a year.', 'Our submission does not directly optimize CIDEr, use model ensemble, or use extra training data.', 'The three techniques typically result in orthogonal improvements (Lu et al. 2017; Rennie et al. 2017; Anderson et al. 2018).', 'Moreover, the SPICE results are missing, in which the proposed model has the most advantage.', 'Nonetheless, our model is second only to Up-Down (Anderson et al. 2018) and surpasses almost all the other models in published work, especially when 40 references are considered.'] | [None, None, ['COCO'], None, None, None, None, ['simNet'], ['simNet', 'Up-Down (Anderson et al. 2018)', 'c40', 'HardAtt (Xu et al. 2015)', 'ATT-FCN (You et al. 2016)', 'SCA-CNN (Chen et al. 2017)', 'LSTM-A (Yao et al. 2017)', 'SCN-LSTM (Gan et al. 2017)', 'AdaAtt (Lu et al. 2017)', 'TD-M-ATT (Chen et al. 2018)', 'SCST (Rennie et al. 2017)']] | 1 |
D18-1015table_1 | Performance comparisons of different methods on DiDeMo. The best performance for each metric entry is highlighted in boldface. R@1 IoU=1 19.40 13.10 18.35 19.88 28.10 24.28 27.52 28.23 | 2 | [['Method', 'MFP'], ['Method', 'MCN-VGG16'], ['Method', 'MCN-Flow'], ['Method', 'MCN-Fusion'], ['Method', 'MCN-Fusion+TEF'], ['Method', 'TGN-VGG16'], ['Method', 'TGN-Flow'], ['Method', 'TGN-Fusion']] | 2 | [['R@1', 'IoU=1'], ['R@5', 'IoU=1'], ['mIoU', '-']] | [['19.40', '66.38', '26.65'], ['13.10', '44.82', '25.13'], ['18.35', '56,25', '31.46'], ['19.88', '62.39', '33.51'], ['28.10', '78.21', '41.08'], ['24.28', '71.43', '38.62'], ['27.52', '76.94', '42.84'], ['28.23', '79.26', '42.97']] | column | ['R@1', 'R@5', 'mIoU'] | ['TGN-VGG16', 'TGN-Flow', 'TGN-Fusion'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R@1 || IoU=1</th> <th>R@5 || IoU=1</th> <th>mIoU || -</th> </tr> </thead> <tbody> <tr> <td>Method || MFP</td> <td>19.40</td> <td>66.38</td> <td>26.65</td> </tr> <tr> <td>Method || MCN-VGG16</td> <td>13.10</td> <td>44.82</td> <td>25.13</td> </tr> <tr> <td>Method || MCN-Flow</td> <td>18.35</td> <td>56,25</td> <td>31.46</td> </tr> <tr> <td>Method || MCN-Fusion</td> <td>19.88</td> <td>62.39</td> <td>33.51</td> </tr> <tr> <td>Method || MCN-Fusion+TEF</td> <td>28.10</td> <td>78.21</td> <td>41.08</td> </tr> <tr> <td>Method || TGN-VGG16</td> <td>24.28</td> <td>71.43</td> <td>38.62</td> </tr> <tr> <td>Method || TGN-Flow</td> <td>27.52</td> <td>76.94</td> <td>42.84</td> </tr> <tr> <td>Method || TGN-Fusion</td> <td>28.23</td> <td>79.26</td> <td>42.97</td> </tr> </tbody></table> | Table 1 | table_1 | D18-1015 | 7 | emnlp2018 | 4.3 Experimental Results and Analysis. 4.3.1 Comparisons with State-of-the-Arts. Experiments on DiDeMo. Table 1 illustrates the performance comparisons on the DiDeMo dataset. In addition to MCN, we also compare with the baseline Moment Frequency Prior (MFP) in (Hendricks et al., 2017), which selects segments corresponding to the positions of videos in the training dataset with most annotations. First, TGN with different features can significantly outperforms the "prior baseline" MFP, which retrieves segments corresponding to the most common start and end points in the dataset. Second, it can be observed that with the same visual features, specifically VGG16 and optical flow, TGN significantly outperforms MCN. And the performance of TGN with optical flow is better than that with VGG16. One possible reason is that the videos in DiDeMo are relatively short, which only contain a single event. In such a case, the action information plays a more critical role. This finding is also consistent with (Hendricks et al., 2017). By fusing the results obtained by VGG16 and optical flow together, the performance can be further boosted, as demonstrated by TGN-Fusion and MCN-Fusion. Third, MCN introduces the temporal endpoint feature (TEF) as prior knowledge, which indicates when a segment occurs in a video. With TEF, the performance of MCN can be significantly improved. However, it is still inferior to our proposed TGN. 
| [2, 2, 2, 1, 1, 1, 1, 1, 2, 2, 0, 1, 2, 1, 1] | ['4.3 Experimental Results and Analysis.', '4.3.1 Comparisons with State-of-the-Arts.', 'Experiments on DiDeMo.', 'Table 1 illustrates the performance comparisons on the DiDeMo dataset.', 'In addition to MCN, we also compare with the baseline Moment Frequency Prior (MFP) in (Hendricks et al., 2017), which selects segments corresponding to the positions of videos in the training dataset with most annotations.', 'First, TGN with different features can significantly outperforms the "prior baseline" MFP, which retrieves segments corresponding to the most common start and end points in the dataset.', 'Second, it can be observed that with the same visual features, specifically VGG16 and optical flow, TGN significantly outperforms MCN.', 'And the performance of TGN with optical flow is better than that with VGG16.', 'One possible reason is that the videos in DiDeMo are relatively short, which only contain a single event.', 'In such a case, the action information plays a more critical role.', 'This finding is also consistent with (Hendricks et al., 2017).', 'By fusing the results obtained by VGG16 and optical flow together, the performance can be further boosted, as demonstrated by TGN-Fusion and MCN-Fusion.', 'Third, MCN introduces the temporal endpoint feature (TEF) as prior knowledge, which indicates when a segment occurs in a video.', 'With TEF, the performance of MCN can be significantly improved.', 'However, it is still inferior to our proposed TGN.'] | [None, None, None, None, ['MFP', 'MCN-VGG16', 'MCN-Flow', 'MCN-Fusion', 'MCN-Fusion+TEF'], ['MFP', 'TGN-VGG16', 'TGN-Flow', 'TGN-Fusion'], ['TGN-VGG16', 'TGN-Flow', 'MCN-VGG16', 'MCN-Flow'], ['TGN-Flow', 'TGN-VGG16'], None, None, None, ['TGN-Fusion', 'MCN-Fusion'], ['MCN-Fusion+TEF'], ['MCN-Fusion+TEF'], ['TGN-Fusion', 'MCN-Fusion+TEF']] | 1 |
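The DiDeMo record above scores localisation with R@1/R@5 at IoU=1 and mean IoU. A minimal sketch under one common definition of temporal IoU between a predicted and a gold segment; the (start, end) segment format and the function names are assumptions for illustration:

    # Illustrative only: temporal IoU plus R@1-style and mean-IoU-style aggregation.
    def temporal_iou(pred, gold):
        (ps, pe), (gs, ge) = pred, gold              # segments as (start, end)
        inter = max(0.0, min(pe, ge) - max(ps, gs))
        union = (pe - ps) + (ge - gs) - inter
        return inter / union if union > 0 else 0.0

    def r_at_1_and_miou(predictions, golds, iou_threshold=1.0):
        ious = [temporal_iou(p, g) for p, g in zip(predictions, golds)]
        r_at_1 = sum(i >= iou_threshold for i in ious) / len(ious)
        return r_at_1, sum(ious) / len(ious)

With IoU=1, the top-ranked prediction only counts when it matches the gold segment exactly, which is the strictest setting reported in the record.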
D18-1017table_2 | NER results for named entities on the original WeiboNER dataset (Peng and Dredze, 2015). There are three blocks. The first two blocks contain the main and simplified models proposed by Peng and Dredze (2015) and Peng and Dredze (2016), respectively. The last block lists the performance of our proposed model. | 2 | [['Models', 'CRF (Peng and Dredze 2015)'], ['Models', 'CRF+word (Peng and Dredze 2015)'], ['Models', 'CRF+character (Peng and Dredze 2015)'], ['Models', 'CRF+character+position (Peng and Dredze 2015)'], ['Models', 'Joint(cp) (main) (Peng and Dredze 2015)'], ['Models', 'Pipeline Seg.Repr.+NER (Peng and Dredze 2016)'], ['Models', 'Jointly Train Char.Emb (Peng and Dredze 2016)'], ['Models', 'Jointly Train LSTM Hidden (Peng and Dredze 2016)'], ['Models', 'Jointly Train LSTM+Emb (main) (Peng and Dredze 2016)'], ['Models', 'BiLSTM+CRF+adversarial+self-attention']] | 1 | [['P(%)'], ['R(%)'], ['F1(%)']] | [['56.98', '25.26', '35.00'], ['64.94', '25.77', '36.90'], ['57.89', '34.02', '42.86'], ['57.26', '34.53', '43.09'], ['57.98', '35.57', '44.09'], ['64.22', '36.08', '46.20'], ['63.16', '37.11', '46.75'], ['63.03', '38.66', '47.92'], ['63.33', '39.18', '48.41'], ['55.72', '50.68', '53.08']] | column | ['P(%)', 'R(%)', 'F1(%)'] | ['BiLSTM+CRF+adversarial+self-attention'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P(%)</th> <th>R(%)</th> <th>F1(%)</th> </tr> </thead> <tbody> <tr> <td>Models || CRF (Peng and Dredze 2015)</td> <td>56.98</td> <td>25.26</td> <td>35.00</td> </tr> <tr> <td>Models || CRF+word (Peng and Dredze 2015)</td> <td>64.94</td> <td>25.77</td> <td>36.90</td> </tr> <tr> <td>Models || CRF+character (Peng and Dredze 2015)</td> <td>57.89</td> <td>34.02</td> <td>42.86</td> </tr> <tr> <td>Models || CRF+character+position (Peng and Dredze 2015)</td> <td>57.26</td> <td>34.53</td> <td>43.09</td> </tr> <tr> <td>Models || Joint(cp) (main) (Peng and Dredze 2015)</td> <td>57.98</td> <td>35.57</td> <td>44.09</td> </tr> <tr> <td>Models || Pipeline Seg.Repr.+NER (Peng and Dredze 2016)</td> <td>64.22</td> <td>36.08</td> <td>46.20</td> </tr> <tr> <td>Models || Jointly Train Char.Emb (Peng and Dredze 2016)</td> <td>63.16</td> <td>37.11</td> <td>46.75</td> </tr> <tr> <td>Models || Jointly Train LSTM Hidden (Peng and Dredze 2016)</td> <td>63.03</td> <td>38.66</td> <td>47.92</td> </tr> <tr> <td>Models || Jointly Train LSTM+Emb (main) (Peng and Dredze 2016)</td> <td>63.33</td> <td>39.18</td> <td>48.41</td> </tr> <tr> <td>Models || BiLSTM+CRF+adversarial+self-attention</td> <td>55.72</td> <td>50.68</td> <td>53.08</td> </tr> </tbody></table> | Table 2 | table_2 | D18-1017 | 6 | emnlp2018 | 4.3.1 Evaluation on WeiboNER. We compare our proposed model with the latest models on WeiboNER dataset. Table 2 shows the experimental results for named entities on the original WeiboNER dataset. In the first block of Table 2, we give the performance of the main model and baselines proposed by Peng and Dredze (2015). They propose a CRF-based model to jointly train the embeddings with NER task, which achieves better results than pipeline models. In addition, they consider the position of each character in a word to train character and position embeddings. In the second block of Table 2, we report the performance of the main model and baselines proposed by Peng and Dredze (2016). 
Aiming to incorporate word boundary information into the NER task, they propose an integrated model that can joint training CWS task, improving the F1 score from 46.20% to 48.41% as compared with pipeline model (Pipeline Seg.Repr.+NER). In the last block of Table 2, we give the experimental result of our proposed model (BiLSTM+CRF+adversarial+self-attention). We can observe that our proposed model significantly outperforms other models. Compared with the model proposed by Peng and Dredze (2016), our method gains 4.67% improvement in F1 score. Interestingly, WeiboNER dataset and MSR dataset are different domains. The WeiboNER dataset is social media domain, while the MSR dataset can be regard as news domain. The improvement of performance indicates that our proposed adversarial transfer learning framework may not only learn task-shared word boundary information from CWS task but also tackle the domain adaptation problem. | [2, 2, 1, 1, 2, 2, 1, 1, 1, 1, 1, 2, 0, 2] | ['4.3.1 Evaluation on WeiboNER.', 'We compare our proposed model with the latest models on WeiboNER dataset.', 'Table 2 shows the experimental results for named entities on the original WeiboNER dataset.', 'In the first block of Table 2, we give the performance of the main model and baselines proposed by Peng and Dredze (2015).', 'They propose a CRF-based model to jointly train the embeddings with NER task, which achieves better results than pipeline models.', 'In addition, they consider the position of each character in a word to train character and position embeddings.', 'In the second block of Table 2, we report the performance of the main model and baselines proposed by Peng and Dredze (2016).', 'Aiming to incorporate word boundary information into the NER task, they propose an integrated model that can joint training CWS task, improving the F1 score from 46.20% to 48.41% as compared with pipeline model (Pipeline Seg.Repr.+NER).', 'In the last block of Table 2, we give the experimental result of our proposed model (BiLSTM+CRF+adversarial+self-attention).', 'We can observe that our proposed model significantly outperforms other models.', 'Compared with the model proposed by Peng and Dredze (2016), our method gains 4.67% improvement in F1 score.', 'Interestingly, WeiboNER dataset and MSR dataset are different domains.', 'The WeiboNER dataset is social media domain, while the MSR dataset can be regard as news domain.', 'The improvement of performance indicates that our proposed adversarial transfer learning framework may not only learn task-shared word boundary information from CWS task but also tackle the domain adaptation problem.'] | [None, None, None, ['CRF (Peng and Dredze 2015)', 'CRF+word (Peng and Dredze 2015)', 'CRF+character (Peng and Dredze 2015)', 'CRF+character+position (Peng and Dredze 2015)', 'Joint(cp) (main) (Peng and Dredze 2015)'], ['CRF (Peng and Dredze 2015)'], ['CRF+word (Peng and Dredze 2015)', 'CRF+character (Peng and Dredze 2015)', 'CRF+character+position (Peng and Dredze 2015)', 'Joint(cp) (main) (Peng and Dredze 2015)'], ['Pipeline Seg.Repr.+NER (Peng and Dredze 2016)', 'Jointly Train Char.Emb (Peng and Dredze 2016)', 'Jointly Train LSTM Hidden (Peng and Dredze 2016)', 'Jointly Train LSTM+Emb (main) (Peng and Dredze 2016)'], ['F1(%)', 'Pipeline Seg.Repr.+NER (Peng and Dredze 2016)', 'Jointly Train LSTM+Emb (main) (Peng and Dredze 2016)'], ['BiLSTM+CRF+adversarial+self-attention'], ['BiLSTM+CRF+adversarial+self-attention', 'CRF (Peng and Dredze 2015)', 'CRF+word (Peng and Dredze 2015)', 'CRF+character 
(Peng and Dredze 2015)', 'CRF+character+position (Peng and Dredze 2015)', 'Joint(cp) (main) (Peng and Dredze 2015)', 'Pipeline Seg.Repr.+NER (Peng and Dredze 2016)', 'Jointly Train Char.Emb (Peng and Dredze 2016)', 'Jointly Train LSTM Hidden (Peng and Dredze 2016)', 'Jointly Train LSTM+Emb (main) (Peng and Dredze 2016)'], ['BiLSTM+CRF+adversarial+self-attention', 'Jointly Train LSTM+Emb (main) (Peng and Dredze 2016)', 'F1(%)'], None, None, None] | 1 |
D18-1017table_3 | Experimental results on the updated WeiboNER dataset (He and Sun, 2017a). There are two blocks. The first block is the performance of latest models. The second block reports the performance of our proposed model. With the limited length of the page, we use “adv” to denote “adversarial”. | 2 | [['Models', 'Peng and Dredze (2015)'], ['Models', 'Peng and Dredze (2016)'], ['Models', 'He and Sun (2017a)'], ['Models', 'He and Sun (2017b)'], ['Models', 'BiLSTM+CRF+adv+self-attention']] | 2 | [['Named Entity', 'P(%)'], ['Named Entity', 'R(%)'], ['Named Entity', 'F1(%)'], ['Nominal Mention', 'P(%)'], ['Nominal Mention', 'R(%)'], ['Nominal Mention', 'F1(%)'], ['Overall', 'F1(%)']] | [['74.78', '39.81', '51.96', '71.92', '53.03', '61.05', '56.05'], ['66.67', '47.22', '55.28', '74.48', '54.55', '62.97', '58.99'], ['66.93', '40.67', '50.6', '66.46', '53.57', '59.32', '54.82'], ['61.68', '48.82', '54.5', '74.13', '53.54', '62.17', '58.23'], ['59.51', '50', '54.34', '71.43', '47.9', '57.35', '58.7']] | column | ['P(%)', 'R(%)', 'F1(%)', 'P(%)', 'R(%)', 'F1(%)', 'F1(%)'] | ['BiLSTM+CRF+adv+self-attention'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Named Entity || P(%)</th> <th>Named Entity || R(%)</th> <th>Named Entity || F1(%)</th> <th>Nominal Mention || P(%)</th> <th>Nominal Mention || R(%)</th> <th>Nominal Mention || F1(%)</th> <th>Overall || F1(%)</th> </tr> </thead> <tbody> <tr> <td>Models || Peng and Dredze (2015)</td> <td>74.78</td> <td>39.81</td> <td>51.96</td> <td>71.92</td> <td>53.03</td> <td>61.05</td> <td>56.05</td> </tr> <tr> <td>Models || Peng and Dredze (2016)</td> <td>66.67</td> <td>47.22</td> <td>55.28</td> <td>74.48</td> <td>54.55</td> <td>62.97</td> <td>58.99</td> </tr> <tr> <td>Models || He and Sun (2017a)</td> <td>66.93</td> <td>40.67</td> <td>50.6</td> <td>66.46</td> <td>53.57</td> <td>59.32</td> <td>54.82</td> </tr> <tr> <td>Models || He and Sun (2017b)</td> <td>61.68</td> <td>48.82</td> <td>54.5</td> <td>74.13</td> <td>53.54</td> <td>62.17</td> <td>58.23</td> </tr> <tr> <td>Models || BiLSTM+CRF+adv+self-attention</td> <td>59.51</td> <td>50</td> <td>54.34</td> <td>71.43</td> <td>47.9</td> <td>57.35</td> <td>58.7</td> </tr> </tbody></table> | Table 3 | table_3 | D18-1017 | 7 | emnlp2018 | We also conduct an experiment on the updated WeiboNER dataset. Table 3 lists the performance of the latest models and our proposed model on the updated dataset. In the first block of Table 3, we report the performance of the latest models. The model proposed by Peng and Dredze (2015) achieves F1 score of 56.05% on overall performance. He and Sun (2017b) propose an unified model for Chinese NER task to exploit the data from out-of-domain corpus and in-domain unlabelled texts. The unified model improves the F1 score from 54.82% to 58.23% compared with the model proposed by He and Sun (2017a). In the second block of Table 3, we give the result of our proposed model. It can be observed that our proposed model achieves a very competitive performance. Compared with the latest model proposed by He and Sun (2017b), our model improves the F1 score from 58.23% to 58.70% on overall performance. The improvement demonstrates the effectiveness of our proposed model. 
| [2, 1, 1, 1, 2, 1, 1, 1, 1, 2] | ['We also conduct an experiment on the updated WeiboNER dataset.', 'Table 3 lists the performance of the latest models and our proposed model on the updated dataset.', 'In the first block of Table 3, we report the performance of the latest models.', 'The model proposed by Peng and Dredze (2015) achieves F1 score of 56.05% on overall performance.', 'He and Sun (2017b) propose an unified model for Chinese NER task to exploit the data from out-of-domain corpus and in-domain unlabelled texts.', 'The unified model improves the F1 score from 54.82% to 58.23% compared with the model proposed by He and Sun (2017a).', 'In the second block of Table 3, we give the result of our proposed model.', 'It can be observed that our proposed model achieves a very competitive performance.', 'Compared with the latest model proposed by He and Sun (2017b), our model improves the F1 score from 58.23% to 58.70% on overall performance.', 'The improvement demonstrates the effectiveness of our proposed model.'] | [None, ['Peng and Dredze (2015)', 'Peng and Dredze (2016)', 'He and Sun (2017a)', 'He and Sun (2017b)', 'BiLSTM+CRF+adv+self-attention'], ['Peng and Dredze (2015)', 'Peng and Dredze (2016)', 'He and Sun (2017a)', 'He and Sun (2017b)'], ['Peng and Dredze (2015)', 'Overall', 'F1(%)'], ['He and Sun (2017b)'], ['He and Sun (2017b)', 'He and Sun (2017a)', 'Overall', 'F1(%)'], ['BiLSTM+CRF+adv+self-attention'], ['BiLSTM+CRF+adv+self-attention'], ['BiLSTM+CRF+adv+self-attention', 'He and Sun (2017b)', 'Overall', 'F1(%)'], ['BiLSTM+CRF+adv+self-attention']] | 1 |
D18-1017table_4 | Results on SighanNER dataset. There are two blocks. The first block reports the result of previous methods. The second block gives the performance of our proposed model. | 2 | [['Models', 'Chen et al. (2006)'], ['Models', 'Zhou et al. (2006)'], ['Models', 'Luo and Yang (2016)'], ['Models', 'BiLSTM+CRF+adversarial+self-attention']] | 1 | [['P(%)'], ['R(%)'], ['F1(%)']] | [['91.22', '81.71', '86.2'], ['88.94', '84.2', '86.51'], ['91.3', '87.22', '89.21'], ['91.73', '89.58', '90.64']] | column | ['P(%)', 'R(%)', 'F1(%)'] | ['BiLSTM+CRF+adversarial+self-attention'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P(%)</th> <th>R(%)</th> <th>F1(%)</th> </tr> </thead> <tbody> <tr> <td>Models || Chen et al. (2006)</td> <td>91.22</td> <td>81.71</td> <td>86.2</td> </tr> <tr> <td>Models || Zhou et al. (2006)</td> <td>88.94</td> <td>84.2</td> <td>86.51</td> </tr> <tr> <td>Models || Luo and Yang (2016)</td> <td>91.3</td> <td>87.22</td> <td>89.21</td> </tr> <tr> <td>Models || BiLSTM+CRF+adversarial+self-attention</td> <td>91.73</td> <td>89.58</td> <td>90.64</td> </tr> </tbody></table> | Table 4 | table_4 | D18-1017 | 7 | emnlp2018 | 4.3.2 Evaluation on SighanNER. Table 4 lists the comparisons on SighanNER dataset. We observe that our proposed model achieves new state-of-the-art performance. In the first block, we give the performance of previous methods for Chinese NER task on SighanNER dataset. Chen et al. (2006) propose a character-based CRF model for Chinese NER task. Zhou et al. (2006) introduce a pipeline model, which first segments the text with character-level CRF model and then applies word-level CRF to tag. Luo and Yang (2016) first train a word segmenter and then use word segmentation as additional features for sequence tagging. Although the model achieves competitive performance, giving the F1 score of 89.21%, it suffers from the error propagation problem. In the second block, we report the result of our proposed model. Compared with the state-of-the-art model proposed by Luo and Yang (2016), our method improves the F1 score from 89.21% to 90.64% without any additional features, which demonstrates the effectiveness of our proposed model. | [2, 1, 1, 1, 2, 2, 2, 1, 1, 1] | ['4.3.2 Evaluation on SighanNER.', 'Table 4 lists the comparisons on SighanNER dataset.', 'We observe that our proposed model achieves new state-of-the-art performance.', 'In the first block, we give the performance of previous methods for Chinese NER task on SighanNER dataset.', 'Chen et al. (2006) propose a character-based CRF model for Chinese NER task.', 'Zhou et al. (2006) introduce a pipeline model, which first segments the text with character-level CRF model and then applies word-level CRF to tag.', 'Luo and Yang (2016) first train a word segmenter and then use word segmentation as additional features for sequence tagging.', 'Although the model achieves competitive performance, giving the F1 score of 89.21%, it suffers from the error propagation problem.', 'In the second block, we report the result of our proposed model.', 'Compared with the state-of-the-art model proposed by Luo and Yang (2016), our method improves the F1 score from 89.21% to 90.64% without any additional features, which demonstrates the effectiveness of our proposed model.'] | [None, None, ['BiLSTM+CRF+adversarial+self-attention'], ['Chen et al. (2006)', 'Zhou et al. (2006)', 'Luo and Yang (2016)'], ['Chen et al. (2006)'], ['Zhou et al. 
(2006)'], None, ['Luo and Yang (2016)', 'F1(%)'], ['BiLSTM+CRF+adversarial+self-attention'], ['BiLSTM+CRF+adversarial+self-attention', 'Luo and Yang (2016)', 'F1(%)']] | 1 |
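The NER records above report precision, recall and F1 in percent; F1 is the harmonic mean of the two, which can be sanity-checked against the quoted SighanNER row for the proposed model (P = 91.73, R = 89.58, F1 = 90.64):

    # Illustrative only: F1 as the harmonic mean of precision and recall (values in %).
    def f1(p, r):
        return 2 * p * r / (p + r) if p + r else 0.0

    print(round(f1(91.73, 89.58), 2))  # 90.64, matching the row quoted above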
D18-1017table_5 | Comparison between our proposed model and simplified models on SighanNER dataset and original WeiboNER dataset. | 2 | [['Models', 'BiLSTM+CRF'], ['Models', 'BiLSTM+CRF+transfer'], ['Models', 'BiLSTM+CRF+adversarial'], ['Models', 'BiLSTM+CRF+self-attention'], ['Models', 'BiLSTM+CRF+adversarial+self-attention']] | 2 | [['SighanNER', 'P(%)'], ['SighanNER', 'R(%)'], ['SighanNER', 'F1(%)'], ['WeiboNER', 'P(%)'], ['WeiboNER', 'R(%)'], ['WeiboNER', 'F1(%)']] | [['89.84', '88.42', '89.13', '58.99', '44.93', '51.01'], ['90.60', '89.19', '89.89', '60.00', '46.03', '52.09'], ['90.52', '89.56', '90.04', '61.94', '45.48', '52.45'], ['90.62', '88.81', '89.71', '57.81', '47.67', '52.25'], ['91.73', '89.58', '90.64', '55.72', '50.68', '53.08']] | column | ['P(%)', 'R(%)', 'F1(%)', 'P(%)', 'R(%)', 'F1(%)'] | ['BiLSTM+CRF+adversarial+self-attention'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SighanNER || P(%)</th> <th>SighanNER || R(%)</th> <th>SighanNER || F1(%)</th> <th>WeiboNER || P(%)</th> <th>WeiboNER || R(%)</th> <th>WeiboNER || F1(%)</th> </tr> </thead> <tbody> <tr> <td>Models || BiLSTM+CRF</td> <td>89.84</td> <td>88.42</td> <td>89.13</td> <td>58.99</td> <td>44.93</td> <td>51.01</td> </tr> <tr> <td>Models || BiLSTM+CRF+transfer</td> <td>90.60</td> <td>89.19</td> <td>89.89</td> <td>60.00</td> <td>46.03</td> <td>52.09</td> </tr> <tr> <td>Models || BiLSTM+CRF+adversarial</td> <td>90.52</td> <td>89.56</td> <td>90.04</td> <td>61.94</td> <td>45.48</td> <td>52.45</td> </tr> <tr> <td>Models || BiLSTM+CRF+self-attention</td> <td>90.62</td> <td>88.81</td> <td>89.71</td> <td>57.81</td> <td>47.67</td> <td>52.25</td> </tr> <tr> <td>Models || BiLSTM+CRF+adversarial+self-attention</td> <td>91.73</td> <td>89.58</td> <td>90.64</td> <td>55.72</td> <td>50.68</td> <td>53.08</td> </tr> </tbody></table> | Table 5 | table_5 | D18-1017 | 8 | emnlp2018 | Table 5 provides the experimental results of our proposed model and baseline as well as its simplified models on SighanNER dataset and WeiboNER dataset. The simplified models are described as follows:. BiLSTM+CRF: The model is used as strong baseline in our work, which is trained using Chinese NER training data. BiLSTM+CRF+transfer: We apply transfer learning to BiLSTM+CRF model without adversarial loss and self-attention mechanism. BiLSTM+CRF+adversarial: Compared with BiLSTM+CRF+transfer model, the BiLSTM+CRF+adversarial model incorporates adversarial training. BiLSTM+CRF+self-attention: The model integrates the self-attention mechanism based on BiLSTM+CRF model. From the experimental results of Table 5, we have following observations:. Effectiveness of transfer learning. BiLSTM+CRF+transfer improves F1 score from 89.13% to 89.89% as compared with BiLSTM+CRF on SighanNER dataset and achieves 1.08% improvement on WeiboNER dataset, which indicates the word boundary information from CWS is very effective for Chinese NER task. Effectiveness of adversarial training. By introducing adversarial training, BiLSTM+CRF+adversarial boosts the performance as compared with BiLSTM+CRF+transfer model, showing 0.15% and 0.36% improvement on SighanNER dataset and WeiboNER dataset, respectively. It proves that adversarial training can prevent specific features of CWS task from creeping into shared space. Effectiveness of self-attention mechanism. 
When compared with BiLSTM+CRF, the BiLSTM+CRF+self-attention significantly improves the performance on the two different datasets with the help of information learned from self-attention, which verifies that the self-attention mechanism is effective for Chinese NER task. We observe that our proposed adversarial transfer learning framework and self-attention lead to noticeable improvements over the baseline, improving F1 score from 51.01% to 53.08% on WeiboNER dataset and giving 1.51% improvement on SighanNER dataset. | [1, 2, 2, 2, 2, 2, 1, 2, 1, 2, 1, 2, 2, 1, 1] | ['Table 5 provides the experimental results of our proposed model and baseline as well as its simplified models on SighanNER dataset and WeiboNER dataset.', 'The simplified models are described as follows:.', 'BiLSTM+CRF: The model is used as strong baseline in our work, which is trained using Chinese NER training data.', 'BiLSTM+CRF+transfer: We apply transfer learning to BiLSTM+CRF model without adversarial loss and self-attention mechanism.', 'BiLSTM+CRF+adversarial: Compared with BiLSTM+CRF+transfer model, the BiLSTM+CRF+adversarial model incorporates adversarial training.', 'BiLSTM+CRF+self-attention: The model integrates the self-attention mechanism based on BiLSTM+CRF model.', 'From the experimental results of Table 5, we have following observations:.', 'Effectiveness of transfer learning.', 'BiLSTM+CRF+transfer improves F1 score from 89.13% to 89.89% as compared with BiLSTM+CRF on SighanNER dataset and achieves 1.08% improvement on WeiboNER dataset, which indicates the word boundary information from CWS is very effective for Chinese NER task.', 'Effectiveness of adversarial training.', 'By introducing adversarial training, BiLSTM+CRF+adversarial boosts the performance as compared with BiLSTM+CRF+transfer model, showing 0.15% and 0.36% improvement on SighanNER dataset and WeiboNER dataset, respectively.', 'It proves that adversarial training can prevent specific features of CWS task from creeping into shared space.', 'Effectiveness of self-attention mechanism.', 'When compared with BiLSTM+CRF, the BiLSTM+CRF+self-attention significantly improves the performance on the two different datasets with the help of information learned from self-attention, which verifies that the self-attention mechanism is effective for Chinese NER task.', 'We observe that our proposed adversarial transfer learning framework and self-attention lead to noticeable improvements over the baseline, improving F1 score from 51.01% to 53.08% on WeiboNER dataset and giving 1.51% improvement on SighanNER dataset.'] | [['BiLSTM+CRF', 'BiLSTM+CRF+transfer', 'BiLSTM+CRF+adversarial', 'BiLSTM+CRF+self-attention', 'BiLSTM+CRF+adversarial+self-attention', 'SighanNER', 'WeiboNER'], None, ['BiLSTM+CRF'], ['BiLSTM+CRF+transfer'], ['BiLSTM+CRF+adversarial', 'BiLSTM+CRF+transfer'], ['BiLSTM+CRF+self-attention', 'BiLSTM+CRF'], None, None, ['BiLSTM+CRF+transfer', 'BiLSTM+CRF', 'SighanNER', 'F1(%)', 'WeiboNER'], None, ['BiLSTM+CRF+adversarial', 'BiLSTM+CRF+transfer', 'SighanNER', 'WeiboNER', 'F1(%)'], None, None, ['BiLSTM+CRF+self-attention', 'BiLSTM+CRF', 'SighanNER', 'WeiboNER'], ['BiLSTM+CRF+adversarial+self-attention', 'BiLSTM+CRF', 'WeiboNER', 'SighanNER', 'F1(%)']] | 1 |
D18-1020table_2 | Tagging accuracies (%) on UD test sets. For each language, we show test accuracy (“acc.”) when only using labeled data and the change in test accuracy (“UL∆”) when adding unlabeled data. Results for NCRF and NCRF-AE are from Zhang et al. (2017), though results are not strictly comparable because we used pretrained word embeddings for all languages on Wikipedia. Bold is highest in each column, excluding the NCRF variants. Italic is the best accuracy including the unlabeled data. | 1 | [['NCRF'], ['NCRF-AE'], ['BiGRU baseline'], ['VSL-G'], ['VSL-GG-Flat'], ['VSL-GG-Hier']] | 2 | [['French', 'acc.'], ['French', 'ULΔ'], ['German', 'acc.'], ['German', 'ULΔ'], ['Indonesian', 'acc.'], ['Indonesian', 'ULΔ'], ['Spanish', 'acc.'], ['Spanish', 'ULΔ'], ['Russian', 'acc.'], ['Russian', 'ULΔ'], ['Croatian', 'acc.'], ['Croatian', 'ULΔ']] | [['93.4', '-', '90.4', '-', '88.4', '-', '91.2', '-', '86.6', '-', '86.1', '-'], ['93.7', '+0.2', '90.8', '+0.2', '89.1', '+0.3', '91.7', '+0.5', '87.8', '+1.1', '87.9', '+1.2'], ['95.9', '-', '92.6', '-', '92.2', '-', '94.7', '-', '95.2', '-', '95.6', '-'], ['96.1', '+0.0', '92.8', '+0.0', '92.3', '+0.0', '94.8', '+0.1', '95.3', '+0.0', '95.6', '+0.1'], ['96.1', '+0.0', '93.0', '+0.1', '92.4', '+0.1', '95.0', '+0.1', '95.5', '+0.1', '95.8', '+0.1'], ['96.4', '+0.1', '93.3', '+0.1', '92.8', '+0.1', '95.3', '+0.2', '95.9', '+0.1', '96.3', '+0.2']] | column | ['acc.', 'ULΔ', 'acc.', 'ULΔ', 'acc.', 'ULΔ', 'acc.', 'ULΔ', 'acc.', 'ULΔ', 'acc.', 'ULΔ'] | ['VSL-G'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>French || acc.</th> <th>French || ULΔ</th> <th>German || acc.</th> <th>German || ULΔ</th> <th>Indonesian || acc.</th> <th>Indonesian || ULΔ</th> <th>Spanish || acc.</th> <th>Spanish || ULΔ</th> <th>Russian || acc.</th> <th>Russian || ULΔ</th> <th>Croatian || acc.</th> <th>Croatian || ULΔ</th> </tr> </thead> <tbody> <tr> <td>NCRF</td> <td>93.4</td> <td>-</td> <td>90.4</td> <td>-</td> <td>88.4</td> <td>-</td> <td>91.2</td> <td>-</td> <td>86.6</td> <td>-</td> <td>86.1</td> <td>-</td> </tr> <tr> <td>NCRF-AE</td> <td>93.7</td> <td>+0.2</td> <td>90.8</td> <td>+0.2</td> <td>89.1</td> <td>+0.3</td> <td>91.7</td> <td>+0.5</td> <td>87.8</td> <td>+1.1</td> <td>87.9</td> <td>+1.2</td> </tr> <tr> <td>BiGRU baseline</td> <td>95.9</td> <td>-</td> <td>92.6</td> <td>-</td> <td>92.2</td> <td>-</td> <td>94.7</td> <td>-</td> <td>95.2</td> <td>-</td> <td>95.6</td> <td>-</td> </tr> <tr> <td>VSL-G</td> <td>96.1</td> <td>+0.0</td> <td>92.8</td> <td>+0.0</td> <td>92.3</td> <td>+0.0</td> <td>94.8</td> <td>+0.1</td> <td>95.3</td> <td>+0.0</td> <td>95.6</td> <td>+0.1</td> </tr> <tr> <td>VSL-GG-Flat</td> <td>96.1</td> <td>+0.0</td> <td>93.0</td> <td>+0.1</td> <td>92.4</td> <td>+0.1</td> <td>95.0</td> <td>+0.1</td> <td>95.5</td> <td>+0.1</td> <td>95.8</td> <td>+0.1</td> </tr> <tr> <td>VSL-GG-Hier</td> <td>96.4</td> <td>+0.1</td> <td>93.3</td> <td>+0.1</td> <td>92.8</td> <td>+0.1</td> <td>95.3</td> <td>+0.2</td> <td>95.9</td> <td>+0.1</td> <td>96.3</td> <td>+0.2</td> </tr> </tbody></table> | Table 2 | table_2 | D18-1020 | 7 | emnlp2018 | Table 2 shows our results on the UD datasets. The trends are broadly consistent with those of Table 1a and 1b. The best performing models use hierarchical structure in the latent variables. There are some differences across languages. For French, German, Indonesian and Russian, VSLG does not show improvement when using unlabeled data. 
This may be resolved with better tuning, since the model actually shows improvement on the dev set. Note that results reported by Zhang et al. (2017) and ours are not strictly comparable as their word embeddings were only pretrained on the UD training sets while ours were pretrained on Wikipedia. Nonetheless, they also mentioned that using embeddings pretrained on larger unlabeled data did not help. We include these results to show that our baselines are indeed strong compared to prior results reported in the literature. | [1, 2, 2, 1, 1, 1, 1, 2, 1] | ['Table 2 shows our results on the UD datasets.', 'The trends are broadly consistent with those of Table 1a and 1b.', 'The best performing models use hierarchical structure in the latent variables.', 'There are some differences across languages.', 'For French, German, Indonesian and Russian, VSLG does not show improvement when using unlabeled data.', 'This may be resolved with better tuning, since the model actually shows improvement on the dev set.', 'Note that results reported by Zhang et al. (2017) and ours are not strictly comparable as their word embeddings were only pretrained on the UD training sets while ours were pretrained on Wikipedia.', 'Nonetheless, they also mentioned that using embeddings pretrained on larger unlabeled data did not help.', 'We include these results to show that our baselines are indeed strong compared to prior results reported in the literature.'] | [None, None, None, ['French', 'German', 'Indonesian', 'Spanish', 'Russian', 'Croatian'], ['French', 'German', 'Indonesian', 'Russian', 'VSL-G', 'ULΔ'], ['VSL-GG-Flat', 'VSL-GG-Hier'], ['NCRF', 'NCRF-AE', 'VSL-G'], None, ['BiGRU baseline']] | 1 |
D18-1020table_3 | Twitter and NER dev results (%), UD averaged test accuracies (%) for two choices of attaching the classification loss to latent variables in the VSLGG-Hier model. All previous results for VSL-GG-Hier used the classification loss on y. | 1 | [['classifier on y'], ['classifier on z']] | 2 | [['Twitter', 'acc.'], ['Twitter', 'ULΔ'], ['NER', 'acc.'], ['NER', 'ULΔ'], ['UD average', 'acc.'], ['UD average', 'ULΔ']] | [['91.6', '+0.3', '88.4', '+0.2', '95.0', '+0.1'], ['91.1', '+0.2', '87.8', '+0.1', '94.4', '+0.0']] | column | ['acc.', 'ULΔ', 'acc.', 'ULΔ', 'acc.', 'ULΔ'] | ['classifier on y', 'classifier on z'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Twitter || acc.</th> <th>Twitter || ULΔ</th> <th>NER || acc.</th> <th>NER || ULΔ</th> <th>UD average || acc.</th> <th>UD average || ULΔ</th> </tr> </thead> <tbody> <tr> <td>classifier on y</td> <td>91.6</td> <td>+0.3</td> <td>88.4</td> <td>+0.2</td> <td>95.0</td> <td>+0.1</td> </tr> <tr> <td>classifier on z</td> <td>91.1</td> <td>+0.2</td> <td>87.8</td> <td>+0.1</td> <td>94.4</td> <td>+0.0</td> </tr> </tbody></table> | Table 3 | table_3 | D18-1020 | 7 | emnlp2018 | 6.1 Effect of Position of Classification Loss. We investigate the effect of attaching the classifier to different latent variables. In particular, for the VSL-GG-Hier model, we compare the attachment of the classifier between z and y. See Figure 2. The results in Table 3 suggest that attaching the reconstruction and classification losses to the same latent variable (z) harms accuracy although attaching the classifier to z effectively gives the classifier an extra layer. | [2, 2, 2, 2, 1] | ['6.1 Effect of Position of Classification Loss.', 'We investigate the effect of attaching the classifier to different latent variables.', 'In particular, for the VSL-GG-Hier model, we compare the attachment of the classifier between z and y.', 'See Figure 2.', 'The results in Table 3 suggest that attaching the reconstruction and classification losses to the same latent variable (z) harms accuracy although attaching the classifier to z effectively gives the classifier an extra layer.'] | [None, None, None, None, ['classifier on z', 'acc.']] | 1 |
D18-1023table_4 | Results using bilingual lexicons with varying sizes (40,000, 10,000, 2,000, 1,000, 500, 250) and three languages. CorrNet W+N+C+L is the proposed approach with all the cluster types. | 2 | [['40000', 'multiCCA'], ['40000', 'multiCluster'], ['40000', 'CorrNet W'], ['40000', 'CorrNet W+N+C+L'], ['10000', 'multiCCA'], ['10000', 'multiCluster'], ['10000', 'CorrNet W'], ['10000', 'CorrNet W+N+C+L'], ['2000', 'multiCCA'], ['2000', 'multiCluster'], ['2000', 'CorrNet W'], ['2000', 'CorrNet W+N+C+L'], ['1000', 'multiCCA'], ['1000', 'multiCluster'], ['1000', 'CorrNet W'], ['1000', 'CorrNet W+N+C+L'], ['500', 'multiCCA'], ['500', 'multiCluster'], ['500', 'CorrNet W'], ['500', 'CorrNet W+N+C+L'], ['250', 'multiCCA'], ['250', 'multiCluster'], ['250', 'CorrNet W'], ['250', 'CorrNet W+N+C+L']] | 2 | [['QVEC', 'Monolingual'], ['QVEC', 'Multilingual'], ['QVEC-CCA', 'Monolingual'], ['QVEC-CCA', 'Multilingual']] | [['10.8', '8.5', '63.8', '43.9'], ['10.8', '9.1', '63.6', '45.8'], ['14.8', '11.3', '63.6', '43.4'], ['16.2', '12.4', '67.3', '45.4'], ['9.8', '6.5', '63.6', '42.3'], ['10.6', '9.5', '62.4', '44.7'], ['14.8', '11.3', '63.4', '43.0'], ['15.7', '12.4', '68.0', '45.1'], ['9.9', '6.2', '63.6', '40.9'], ['10.5', '9.3', '62.5', '44.8'], ['14.5', '7.1', '62.0', '39.2'], ['14.5', '11.4', '68.0', '44.8'], ['12.3', '6.9', '63.5', '38.2'], ['10.5', '9.3', '62.5', '44.8'], ['13.7', '9.4', '63.0', '40.0'], ['13.6', '10.5', '66.4', '43.0'], ['12.3', '5.5', '63.5', '36.0'], ['10.5', '9.3', '62.6', '44.7'], ['13.3', '9.1', '62.8', '39.4'], ['13.4', '9.5', '66.2', '42.7'], ['12.3', '5.3', '63.5', '35.0'], ['10.5', '9.2', '62.7', '44.9'], ['13.8', '9.3', '62.5', '39.3'], ['13.9', '9.8', '65.9', '42.2']] | column | ['QVEC', 'QVEC', 'QVEC-CCA', 'QVEC-CCA'] | ['CorrNet W+N+C+L'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>QVEC || Monolingual</th> <th>QVEC || Multilingual</th> <th>QVEC-CCA || Monolingual</th> <th>QVEC-CCA || Multilingual</th> </tr> </thead> <tbody> <tr> <td>40000 || multiCCA</td> <td>10.8</td> <td>8.5</td> <td>63.8</td> <td>43.9</td> </tr> <tr> <td>40000 || multiCluster</td> <td>10.8</td> <td>9.1</td> <td>63.6</td> <td>45.8</td> </tr> <tr> <td>40000 || CorrNet W</td> <td>14.8</td> <td>11.3</td> <td>63.6</td> <td>43.4</td> </tr> <tr> <td>40000 || CorrNet W+N+C+L</td> <td>16.2</td> <td>12.4</td> <td>67.3</td> <td>45.4</td> </tr> <tr> <td>10000 || multiCCA</td> <td>9.8</td> <td>6.5</td> <td>63.6</td> <td>42.3</td> </tr> <tr> <td>10000 || multiCluster</td> <td>10.6</td> <td>9.5</td> <td>62.4</td> <td>44.7</td> </tr> <tr> <td>10000 || CorrNet W</td> <td>14.8</td> <td>11.3</td> <td>63.4</td> <td>43.0</td> </tr> <tr> <td>10000 || CorrNet W+N+C+L</td> <td>15.7</td> <td>12.4</td> <td>68.0</td> <td>45.1</td> </tr> <tr> <td>2000 || multiCCA</td> <td>9.9</td> <td>6.2</td> <td>63.6</td> <td>40.9</td> </tr> <tr> <td>2000 || multiCluster</td> <td>10.5</td> <td>9.3</td> <td>62.5</td> <td>44.8</td> </tr> <tr> <td>2000 || CorrNet W</td> <td>14.5</td> <td>7.1</td> <td>62.0</td> <td>39.2</td> </tr> <tr> <td>2000 || CorrNet W+N+C+L</td> <td>14.5</td> <td>11.4</td> <td>68.0</td> <td>44.8</td> </tr> <tr> <td>1000 || multiCCA</td> <td>12.3</td> <td>6.9</td> <td>63.5</td> <td>38.2</td> </tr> <tr> <td>1000 || multiCluster</td> <td>10.5</td> <td>9.3</td> <td>62.5</td> <td>44.8</td> </tr> <tr> <td>1000 || CorrNet W</td> <td>13.7</td> <td>9.4</td> <td>63.0</td> <td>40.0</td> </tr> <tr> <td>1000 || CorrNet W+N+C+L</td> <td>13.6</td> <td>10.5</td> <td>66.4</td> 
<td>43.0</td> </tr> <tr> <td>500 || multiCCA</td> <td>12.3</td> <td>5.5</td> <td>63.5</td> <td>36.0</td> </tr> <tr> <td>500 || multiCluster</td> <td>10.5</td> <td>9.3</td> <td>62.6</td> <td>44.7</td> </tr> <tr> <td>500 || CorrNet W</td> <td>13.3</td> <td>9.1</td> <td>62.8</td> <td>39.4</td> </tr> <tr> <td>500 || CorrNet W+N+C+L</td> <td>13.4</td> <td>9.5</td> <td>66.2</td> <td>42.7</td> </tr> <tr> <td>250 || multiCCA</td> <td>12.3</td> <td>5.3</td> <td>63.5</td> <td>35.0</td> </tr> <tr> <td>250 || multiCluster</td> <td>10.5</td> <td>9.2</td> <td>62.7</td> <td>44.9</td> </tr> <tr> <td>250 || CorrNet W</td> <td>13.8</td> <td>9.3</td> <td>62.5</td> <td>39.3</td> </tr> <tr> <td>250 || CorrNet W+N+C+L</td> <td>13.9</td> <td>9.8</td> <td>65.9</td> <td>42.2</td> </tr> </tbody></table> | Table 4 | table_4 | D18-1023 | 8 | emnlp2018 | For following experiments, we use MultiCluster and MultiCCA as baselines 14. Table 4 shows the results. We observe that both MultiCCA and CorrNet approaches are sensitive to the size of the bilingual lexicons. Our approach on the other hand can maintain high performance, even when the size of bilingual lexicons is reduced to 250. The performances of MultiCluster based on various sizes of bilingual dictionary are close because it jointly trains the embedding of multiple languages from scratch and by default takes advantage of identical strings among all the languages. | [1, 1, 1, 1, 1] | ['For following experiments, we use MultiCluster and MultiCCA as baselines 14.', 'Table 4 shows the results.', 'We observe that both MultiCCA and CorrNet approaches are sensitive to the size of the bilingual lexicons.', 'Our approach on the other hand can maintain high performance, even when the size of bilingual lexicons is reduced to 250.', 'The performances of MultiCluster based on various sizes of bilingual dictionary are close because it jointly trains the embedding of multiple languages from scratch and by default takes advantage of identical strings among all the languages.'] | [['multiCCA', 'multiCluster'], None, ['multiCCA', 'CorrNet W'], ['CorrNet W+N+C+L', '250'], ['multiCluster', '40000', '10000', '2000', '1000', '500', '250']] | 1 |
D18-1023table_7 | Comparison on Cross-lingual Direct Transfer: name tagging performance (F-score, %) when the tagger was trained on 1-2 source languages and tested on a target language. | 4 | [['Train', 'Amh', 'Test', 'Tig'], ['Train', 'Tig', 'Test', 'Amh'], ['Train', 'Eng', 'Test', 'Uig'], ['Train', 'Tur', 'Test', 'Uig'], ['Train', 'Eng+Tur', 'Test', 'Uig'], ['Train', 'Eng', 'Test', 'Tur'], ['Train', 'Uig', 'Test', 'Tur'], ['Train', 'Eng+Uig', 'Test', 'Tur']] | 2 | [['MultiCCA', '-'], ['MultiCluster', '-'], ['CorrNet', 'W'], ['CorrNet', 'W+N+C+L']] | [['15.5', '29.7', '28.3', '33.7'], ['11.1', '24.7', '12.8', '23.3'], ['4.8', '9.1', '13.3', '15.5'], ['0.4', '11.4', '19.8', '25.0'], ['8.3', '10.5', '17.3', '23.3'], ['17.6', '21.4', '18.3', '22.4'], ['6.9', '12.8', '13.2', '10.7'], ['20.4', '23.3', '14.5', '27.0']] | column | ['F-score', 'F-score', 'F-score', 'F-score'] | ['W+N+C+L'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MultiCCA || -</th> <th>MultiCluster || -</th> <th>CorrNet || W</th> <th>CorrNet || W+N+C+L</th> </tr> </thead> <tbody> <tr> <td>Train || Amh || Test || Tig</td> <td>15.5</td> <td>29.7</td> <td>28.3</td> <td>33.7</td> </tr> <tr> <td>Train || Tig || Test || Amh</td> <td>11.1</td> <td>24.7</td> <td>12.8</td> <td>23.3</td> </tr> <tr> <td>Train || Eng || Test || Uig</td> <td>4.8</td> <td>9.1</td> <td>13.3</td> <td>15.5</td> </tr> <tr> <td>Train || Tur || Test || Uig</td> <td>0.4</td> <td>11.4</td> <td>19.8</td> <td>25.0</td> </tr> <tr> <td>Train || Eng+Tur || Test || Uig</td> <td>8.3</td> <td>10.5</td> <td>17.3</td> <td>23.3</td> </tr> <tr> <td>Train || Eng || Test || Tur</td> <td>17.6</td> <td>21.4</td> <td>18.3</td> <td>22.4</td> </tr> <tr> <td>Train || Uig || Test || Tur</td> <td>6.9</td> <td>12.8</td> <td>13.2</td> <td>10.7</td> </tr> <tr> <td>Train || Eng+Uig || Test || Tur</td> <td>20.4</td> <td>23.3</td> <td>14.5</td> <td>27.0</td> </tr> </tbody></table> | Table 7 | table_7 | D18-1023 | 9 | emnlp2018 | Cross-lingual direct transfer. In this setting, we train a name tagger on one or two languages using multilingual embeddings and test it on a new language without any annotated data. Table 7 shows the performance. For most testing languages, our approach achieves better performance than MultiCCA and MultiCluster. The closer that the languages are, such as Amharic and Tigrinya, the better performance is achieved. | [2, 2, 1, 1, 1] | ['Cross-lingual direct transfer.', 'In this setting, we train a name tagger on one or two languages using multilingual embeddings and test it on a new language without any annotated data.', 'Table 7 shows the performance.', 'For most testing languages, our approach achieves better performance than MultiCCA and MultiCluster.', 'The closer that the languages are, such as Amharic and Tigrinya, the better performance is achieved.'] | [None, ['Train', 'Test'], None, ['CorrNet', 'W+N+C+L', 'MultiCCA', 'MultiCluster'], ['CorrNet', 'W+N+C+L', 'Amh', 'Tig']] | 1 |
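The record that follows scores hypernym discovery with MAP, MRR and P@5 over ranked candidate lists. A minimal sketch of MRR and P@k under their usual definitions; the argument names are illustrative and gold hypernyms are assumed to be given as sets:

    # Illustrative only: MRR and P@k over ranked candidate lists with gold hypernym sets.
    def mrr(ranked_lists, gold_sets):
        total = 0.0
        for ranked, gold in zip(ranked_lists, gold_sets):
            rank = next((i + 1 for i, c in enumerate(ranked) if c in gold), None)
            total += 1.0 / rank if rank else 0.0
        return total / len(gold_sets)

    def p_at_k(ranked_lists, gold_sets, k=5):
        hits = sum(len(set(r[:k]) & g) / k for r, g in zip(ranked_lists, gold_sets))
        return hits / len(gold_sets)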
D18-1027table_4 | Results on the hypernym discovery task. | 4 | [['Train data', '-', 'Model', 'BestUns'], ['Train data', 'EN', 'Model', 'VecMap'], ['Train data', 'EN', 'Model', 'VecMapμ'], ['Train data', 'EN', 'Model', 'MUSE'], ['Train data', 'EN', 'Model', 'MUSEμ'], ['Train data', 'EN+500', 'Model', 'VecMap'], ['Train data', 'EN+500', 'Model', 'VecMapμ'], ['Train data', 'EN+500', 'Model', 'MUSE'], ['Train data', 'EN+500', 'Model', 'MUSEμ'], ['Train data', 'EN+1k', 'Model', 'VecMap'], ['Train data', 'EN+1k', 'Model', 'VecMapμ'], ['Train data', 'EN+1k', 'Model', 'MUSE'], ['Train data', 'EN+1k', 'Model', 'MUSEμ'], ['Train data', 'EN+2k', 'Model', 'VecMap'], ['Train data', 'EN+2k', 'Model', 'VecMapμ'], ['Train data', 'EN+2k', 'Model', 'MUSE'], ['Train data', 'EN+2k', 'Model', 'MUSEμ']] | 2 | [['Spanish', 'MAP'], ['Spanish', 'MRR'], ['Spanish', 'P@5'], ['Italian', 'MAP'], ['Italian', 'MRR'], ['Italian', 'P@5']] | [['2.4', '5.5', '2.5', '3.9', '8.7', '3.9'], ['6.4', '16.5', '6.0', '4.5', '10.6', '4.3'], ['6.1', '15.4', '5.7', '5.6', '13.3', '5.4'], ['5.9', '14.1', '5.5', '4.9', '11.1', '4.7'], ['6.2', '14.8', '5.8', '5.1', '11.7', '4.9'], ['7.3', '18.2', '7.0', '6.1', '14.0', '5.8'], ['7.0', '17.6', '6.6', '6.8', '16.2', '6.4'], ['6.4', '15.9', '6.1', '5.3', '12.0', '5.0'], ['6.9', '16.9', '6.6', '6.0', '13.4', '5.7'], ['7.9', '19.2', '7.6', '7.0', '16.4', '6.6'], ['7.8', '19.2', '7.4', '7.5', '18.1', '7.0'], ['7.2', '17.3', '6.9', '6.2', '13.8', '5.8'], ['7.8', '18.8', '7.5', '6.5', '14.2', '6.3'], ['8.0', '19.1', '7.7', '8.2', '19.1', '7.5'], ['8.2', '19.9', '7.9', '8.7', '20.7', '8.1'], ['7.2', '17.2', '6.8', '7.2', '15.8', '7.0'], ['8.3', '19.5', '8.0', '7.6', '17.0', '7.2']] | column | ['MAP', 'MRR', 'P@5', 'MAP', 'MRR', 'P@5'] | ['VecMapμ', 'MUSEμ'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Spanish || MAP</th> <th>Spanish || MRR</th> <th>Spanish || P@5</th> <th>Italian || MAP</th> <th>Italian || MRR</th> <th>Italian || P@5</th> </tr> </thead> <tbody> <tr> <td>Train data || - || Model || BestUns</td> <td>2.4</td> <td>5.5</td> <td>2.5</td> <td>3.9</td> <td>8.7</td> <td>3.9</td> </tr> <tr> <td>Train data || EN || Model || VecMap</td> <td>6.4</td> <td>16.5</td> <td>6.0</td> <td>4.5</td> <td>10.6</td> <td>4.3</td> </tr> <tr> <td>Train data || EN || Model || VecMapμ</td> <td>6.1</td> <td>15.4</td> <td>5.7</td> <td>5.6</td> <td>13.3</td> <td>5.4</td> </tr> <tr> <td>Train data || EN || Model || MUSE</td> <td>5.9</td> <td>14.1</td> <td>5.5</td> <td>4.9</td> <td>11.1</td> <td>4.7</td> </tr> <tr> <td>Train data || EN || Model || MUSEμ</td> <td>6.2</td> <td>14.8</td> <td>5.8</td> <td>5.1</td> <td>11.7</td> <td>4.9</td> </tr> <tr> <td>Train data || EN+500 || Model || VecMap</td> <td>7.3</td> <td>18.2</td> <td>7.0</td> <td>6.1</td> <td>14.0</td> <td>5.8</td> </tr> <tr> <td>Train data || EN+500 || Model || VecMapμ</td> <td>7.0</td> <td>17.6</td> <td>6.6</td> <td>6.8</td> <td>16.2</td> <td>6.4</td> </tr> <tr> <td>Train data || EN+500 || Model || MUSE</td> <td>6.4</td> <td>15.9</td> <td>6.1</td> <td>5.3</td> <td>12.0</td> <td>5.0</td> </tr> <tr> <td>Train data || EN+500 || Model || MUSEμ</td> <td>6.9</td> <td>16.9</td> <td>6.6</td> <td>6.0</td> <td>13.4</td> <td>5.7</td> </tr> <tr> <td>Train data || EN+1k || Model || VecMap</td> <td>7.9</td> <td>19.2</td> <td>7.6</td> <td>7.0</td> <td>16.4</td> <td>6.6</td> </tr> <tr> <td>Train data || EN+1k || Model || VecMapμ</td> <td>7.8</td> <td>19.2</td> <td>7.4</td> <td>7.5</td> <td>18.1</td> <td>7.0</td> </tr> <tr> 
<td>Train data || EN+1k || Model || MUSE</td> <td>7.2</td> <td>17.3</td> <td>6.9</td> <td>6.2</td> <td>13.8</td> <td>5.8</td> </tr> <tr> <td>Train data || EN+1k || Model || MUSEμ</td> <td>7.8</td> <td>18.8</td> <td>7.5</td> <td>6.5</td> <td>14.2</td> <td>6.3</td> </tr> <tr> <td>Train data || EN+2k || Model || VecMap</td> <td>8.0</td> <td>19.1</td> <td>7.7</td> <td>8.2</td> <td>19.1</td> <td>7.5</td> </tr> <tr> <td>Train data || EN+2k || Model || VecMapμ</td> <td>8.2</td> <td>19.9</td> <td>7.9</td> <td>8.7</td> <td>20.7</td> <td>8.1</td> </tr> <tr> <td>Train data || EN+2k || Model || MUSE</td> <td>7.2</td> <td>17.2</td> <td>6.8</td> <td>7.2</td> <td>15.8</td> <td>7.0</td> </tr> <tr> <td>Train data || EN+2k || Model || MUSEμ</td> <td>8.3</td> <td>19.5</td> <td>8.0</td> <td>7.6</td> <td>17.0</td> <td>7.2</td> </tr> </tbody></table> | Table 4 | table_4 | D18-1027 | 8 | emnlp2018 | The results listed in Table 4 indicate several trends. First and foremost, in terms of model-wise comparisons, we observe that our proposed alterations of both VecMap and MUSE improve their quality in a consistent manner, across most metrics and data configurations. In Italian our proposed model shows an improvement across all configurations. However, in Spanish VecMap emerges as a highly competitive baseline, with our model only showing an improved performance when training data in this language abounds (in this specific case there is an increase from 17.2 to 19.5 points in the MRR metric). This suggests that the fact that the monolingual spaces are closer in our model is clearly beneficial when hybrid training data is given as input, opening up avenues for future work on weakly-supervised learning. Concerning the other baseline, MUSE, the contribution of our proposed model is consistent for both languages, again becoming more apparent in the Italian split and in a fully cross-lingual setting, where the improvement in MRR is almost 3 points (from 10.6 to 13.3). Finally, it is noteworthy that even in the setting where no training data from the target language is leveraged, all the systems based on cross-lingual embeddings outperform the best unsupervised baseline, which is a very encouraging result with regards to solving tasks for languages on which training data is not easily accessible or not directly available. Analysis. A manual exploration of the results obtained in cross-lingual hypernym discovery reveals a systematic pattern when comparing, for example, VecMap and our model. It was shown in Table 4 that the performance of our model gradually increased alongside the size of the training data in the target language until surpassing VecMap in the most informed configuration (i.e., EN+2k). Specifically, our model seems to show a higher presence of generic words in the output hypernyms, which may be explained by these being closer in the space. In fact, out of 1000 candidate hyponyms, our model correctly finds person 143 times, as compared to the 111 of VecMap, and this systematically occurs with generic types such as citizen or transport. Let us mention, however, that the considered baselines perform remarkably well in some cases. For example, the English-only VecMap configuration (EN), unlike ours, correctly discovered the following hypernyms for Francesc Maci`a (a Spanish politician and soldier): politician, ruler, leader and person. These were missing from the prediction of our model in all configurations until the most informed one (EN+2k). 
| [1, 1, 1, 1, 2, 1, 1, 2, 2, 1, 2, 2, 2, 1, 1] | ['The results listed in Table 4 indicate several trends.', 'First and foremost, in terms of model-wise comparisons, we observe that our proposed alterations of both VecMap and MUSE improve their quality in a consistent manner, across most metrics and data configurations.', 'In Italian our proposed model shows an improvement across all configurations.', 'However, in Spanish VecMap emerges as a highly competitive baseline, with our model only showing an improved performance when training data in this language abounds (in this specific case there is an increase from 17.2 to 19.5 points in the MRR metric).', 'This suggests that the fact that the monolingual spaces are closer in our model is clearly beneficial when hybrid training data is given as input, opening up avenues for future work on weakly-supervised learning.', 'Concerning the other baseline, MUSE, the contribution of our proposed model is consistent for both languages, again becoming more apparent in the Italian split and in a fully cross-lingual setting, where the improvement in MRR is almost 3 points (from 10.6 to 13.3).', 'Finally, it is noteworthy that even in the setting where no training data from the target language is leveraged, all the systems based on cross-lingual embeddings outperform the best unsupervised baseline, which is a very encouraging result with regards to solving tasks for languages on which training data is not easily accessible or not directly available.', 'Analysis.', 'A manual exploration of the results obtained in cross-lingual hypernym discovery reveals a systematic pattern when comparing, for example, VecMap and our model.', 'It was shown in Table 4 that the performance of our model gradually increased alongside the size of the training data in the target language until surpassing VecMap in the most informed configuration (i.e., EN+2k).', 'Specifically, our model seems to show a higher presence of generic words in the output hypernyms, which may be explained by these being closer in the space.', 'In fact, out of 1000 candidate hyponyms, our model correctly finds person 143 times, as compared to the 111 of VecMap, and this systematically occurs with generic types such as citizen or transport.', 'Let us mention, however, that the considered baselines perform remarkably well in some cases.', 'For example, the English-only VecMap configuration (EN), unlike ours, correctly discovered the following hypernyms for Francesc Maci`a (a Spanish politician and soldier): politician, ruler, leader and person.', 'These were missing from the prediction of our model in all configurations until the most informed one (EN+2k).'] | [None, ['VecMap', 'VecMapμ', 'MUSE', 'MUSEμ'], ['VecMapμ', 'MUSEμ', 'EN', 'EN+500', 'EN+1k', 'EN+2k', 'Spanish'], ['Spanish', 'VecMap', 'VecMapμ', 'MUSEμ', 'EN+2k', 'MUSE', 'MRR'], None, ['MUSE', 'MUSEμ', 'Spanish', 'Italian', 'MRR'], ['BestUns', 'VecMap', 'VecMapμ', 'MUSE', 'MUSEμ'], None, ['VecMap', 'VecMapμ'], ['VecMapμ', 'VecMap', 'EN', 'EN+500', 'EN+1k', 'EN+2k'], ['VecMapμ'], ['VecMapμ', 'VecMap'], ['VecMap'], ['VecMap', 'VecMapμ', 'EN', 'Spanish'], ['VecMap', 'VecMapμ', 'EN', 'EN+500', 'EN+1k', 'EN+2k', 'Spanish']] | 1 |
D18-1033table_1 | Evaluation results of cross-lingual lexical sememe prediction with different seed lexicon sizes. | 4 | [['Method', 'BiLex', 'Seed Lexicon', '1000'], ['Method', 'BiLex', 'Seed Lexicon', '2000'], ['Method', 'BiLex', 'Seed Lexicon', '4000'], ['Method', 'BiLex', 'Seed Lexicon', '6000'], ['Method', 'CLSP-WR', 'Seed Lexicon', '1000'], ['Method', 'CLSP-WR', 'Seed Lexicon', '2000'], ['Method', 'CLSP-WR', 'Seed Lexicon', '4000'], ['Method', 'CLSP-WR', 'Seed Lexicon', '6000'], ['Method', 'CLSP-SE', 'Seed Lexicon', '1000'], ['Method', 'CLSP-SE', 'Seed Lexicon', '2000'], ['Method', 'CLSP-SE', 'Seed Lexicon', '4000'], ['Method', 'CLSP-SE', 'Seed Lexicon', '6000']] | 2 | [['Sememe Prediction', 'MAP'], ['Sememe Prediction', 'F1 Score']] | [['27.57', '16.08'], ['33.79', '22.33'], ['35.78', '25.74'], ['38.29', '28.71'], ['38.12', '18.55'], ['33.78', '23.64'], ['38.30', '27.74'], ['41.23', '30.64'], ['31.78', '18.22'], ['37.70', '24.31'], ['40.77', '29.33'], ['43.16', '32.49']] | column | ['MAP', 'F1 Score'] | ['CLSP-WR', 'CLSP-SE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sememe Prediction || MAP</th> <th>Sememe Prediction || F1 Score</th> </tr> </thead> <tbody> <tr> <td>Method || BiLex || Seed Lexicon || 1000</td> <td>27.57</td> <td>16.08</td> </tr> <tr> <td>Method || BiLex || Seed Lexicon || 2000</td> <td>33.79</td> <td>22.33</td> </tr> <tr> <td>Method || BiLex || Seed Lexicon || 4000</td> <td>35.78</td> <td>25.74</td> </tr> <tr> <td>Method || BiLex || Seed Lexicon || 6000</td> <td>38.29</td> <td>28.71</td> </tr> <tr> <td>Method || CLSP-WR || Seed Lexicon || 1000</td> <td>38.12</td> <td>18.55</td> </tr> <tr> <td>Method || CLSP-WR || Seed Lexicon || 2000</td> <td>33.78</td> <td>23.64</td> </tr> <tr> <td>Method || CLSP-WR || Seed Lexicon || 4000</td> <td>38.30</td> <td>27.74</td> </tr> <tr> <td>Method || CLSP-WR || Seed Lexicon || 6000</td> <td>41.23</td> <td>30.64</td> </tr> <tr> <td>Method || CLSP-SE || Seed Lexicon || 1000</td> <td>31.78</td> <td>18.22</td> </tr> <tr> <td>Method || CLSP-SE || Seed Lexicon || 2000</td> <td>37.70</td> <td>24.31</td> </tr> <tr> <td>Method || CLSP-SE || Seed Lexicon || 4000</td> <td>40.77</td> <td>29.33</td> </tr> <tr> <td>Method || CLSP-SE || Seed Lexicon || 6000</td> <td>43.16</td> <td>32.49</td> </tr> </tbody></table> | Table 1 | table_1 | D18-1033 | 7 | emnlp2018 | Table 1 exhibits the evaluation results of crosslingual lexical sememe prediction with different seed lexicon sizes in {1000, 2000, 4000, 6000}. From the table, we can clearly see that:. (1) Our two models perform much better compared with BiLex in all the seed lexicon size settings. It indicates that incorporating sememe information into word embeddings can effectively improve the performance of predicting sememes for target words. The reason is that both of our models make words with similar sememe annotations have similar embeddings, and as a result, we can recommend better sememes for target words according to its related source words. (2) CLSP-SE model achieves better results than CLSP-WR model. The reason is that by representing sememes in a latent semantic space, CLSP-SE model can further capture the relatedness between sememes as well as the relatedness between words and sememes, which is helpful for modeling the representations of those words with similar sememes. 
| [1, 1, 1, 2, 2, 1, 2] | ['Table 1 exhibits the evaluation results of crosslingual lexical sememe prediction with different seed lexicon sizes in {1000, 2000, 4000, 6000}.', 'From the table, we can clearly see that:.', '(1) Our two models perform much better compared with BiLex in all the seed lexicon size settings.', 'It indicates that incorporating sememe information into word embeddings can effectively improve the performance of predicting sememes for target words.', 'The reason is that both of our models make words with similar sememe annotations have similar embeddings, and as a result, we can recommend better sememes for target words according to its related source words.', '(2) CLSP-SE model achieves better results than CLSP-WR model.', 'The reason is that by representing sememes in a latent semantic space, CLSP-SE model can further capture the relatedness between sememes as well as the relatedness between words and sememes, which is helpful for modeling the representations of those words with similar sememes.'] | [['Seed Lexicon'], None, ['CLSP-WR', 'CLSP-SE', 'BiLex', '1000', '2000', '4000', '6000'], None, ['CLSP-WR', 'CLSP-SE'], ['CLSP-SE', 'CLSP-WR'], ['CLSP-SE']] | 1 |
D18-1036table_1 | The main results of CH-EN translation. (cid:52) shows the BLEU points improvement of system “X+MEM” than system X. “*” indicates that system “X+MEM” is statistically significant better (p < 0.05) than system X and “†” indicates p < 0.01. | 1 | [['Baseline'], ['Arthur(test)'], ['Arthur(train+test)'], ['Baseline(sub-word)'], ['Baseline+MEM'], ['Arthur(train+test)+MEM'], ['Baseline(sub-word)+MEM']] | 2 | [['Model', '03'], ['Model', '04'], ['Model', '05'], ['Model', '06'], ['Model', '08'], ['Model', 'Avg.'], ['Model', '△']] | [['41.01', '42.94', '40.31', '40.57', '30.96', '39.16', '-'], ['41.34', '43.31', '40.79', '40.84', '31.11', '39.48', '-'], ['41.88', '43.75', '41.16', '41.63', '31.47', '39.98', '-'], ['43.93', '44.74', '42.46', '43.01', '32.53', '41.33', '-'], ['42.74', '43.94', '42.15', '41.94', '31.86', '40.53', '+1.37'], ['43.04', '44.65', '42.19', '42.59', '32.05', '40.90', '+0.92'], ['44.98', '45.51', '43.93', '43.95', '33.33', '42.34', '+1.01']] | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['Baseline+MEM', 'Arthur(train+test)+MEM', 'Baseline(sub-word)+MEM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Model || 03</th> <th>Model || 04</th> <th>Model || 05</th> <th>Model || 06</th> <th>Model || 08</th> <th>Model || Avg.</th> <th>Model || △</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>41.01</td> <td>42.94</td> <td>40.31</td> <td>40.57</td> <td>30.96</td> <td>39.16</td> <td>-</td> </tr> <tr> <td>Arthur(test)</td> <td>41.34</td> <td>43.31</td> <td>40.79</td> <td>40.84</td> <td>31.11</td> <td>39.48</td> <td>-</td> </tr> <tr> <td>Arthur(train+test)</td> <td>41.88</td> <td>43.75</td> <td>41.16</td> <td>41.63</td> <td>31.47</td> <td>39.98</td> <td>-</td> </tr> <tr> <td>Baseline(sub-word)</td> <td>43.93</td> <td>44.74</td> <td>42.46</td> <td>43.01</td> <td>32.53</td> <td>41.33</td> <td>-</td> </tr> <tr> <td>Baseline+MEM</td> <td>42.74</td> <td>43.94</td> <td>42.15</td> <td>41.94</td> <td>31.86</td> <td>40.53</td> <td>+1.37</td> </tr> <tr> <td>Arthur(train+test)+MEM</td> <td>43.04</td> <td>44.65</td> <td>42.19</td> <td>42.59</td> <td>32.05</td> <td>40.90</td> <td>+0.92</td> </tr> <tr> <td>Baseline(sub-word)+MEM</td> <td>44.98</td> <td>45.51</td> <td>43.93</td> <td>43.95</td> <td>33.33</td> <td>42.34</td> <td>+1.01</td> </tr> </tbody></table> | Table 1 | table_1 | D18-1036 | 6 | emnlp2018 | 5 Results on CH-EN Translation. 5.1 Our methods vs. Baseline. Table 1 reports the main translation results of CHEN translation. We first compare Baseline+MEM with Baseline. As shown in row 1 and row 5 in Table 1, Baseline+MEM can improve over Baseline on all test datasets, and the average improvement is 1.37 BLEU points. The results show that our method could significantly outperform the baseline model. 5.2 Results on Sub-words,2, We also test the proposed method when the translation unit is sub-word. The baseline and our method using sub-word as translation unit are respectively denoted by Baseline(subword) and Baseline(sub-word)+MEM. The results are shown in row 4 and row 7. From the results, Baseline(sub-word)+MEM outperforms Baseline(sub-word) by 1.01 BLEU points, indicating that adopting sub-words as translation units still faces the problem of troublesome tokens, and our method could alleviate this problem. 5.3 Our Method vs. Method Using Translation Lexicon. We also compare our method with Arthur et al. (2016)’s method which incorporates a translation lexicon into NMT. 
Here, the comparison is conducted in two ways based on whether the baseline neural model is fixed or retrained. Fixed Baseline. Comparing Arthur(test) (row 2 in Table 1) and Baseline+MEM (row 5 in Table 1), we can see that our proposed method can surpass Arthur(test) with 1.05 BLEU points. As there are three differences between our methods and Arthur(test), we take the following experiments to evaluate the effect of each difference. Retrained Baseline. In the second comparison, we allow the baseline model to be retrained by Arthur's method (Arthur(train+test)). We then implement our method using Arthur(train+test) as baseline (denoted by Arthur(train+test)+MEM). Comparing the results of these two methods in Table 1 (line 3 and 6), our method is still effective on the retrained model. The average gains are 0.92 BLEU points. | [2, 2, 1, 1, 1, 1, 2, 2, 1, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1, 1] | ['5 Results on CH-EN Translation.', '5.1 Our methods vs. Baseline.', 'Table 1 reports the main translation results of CHEN translation.', 'We first compare Baseline+MEM with Baseline.', 'As shown in row 1 and row 5 in Table 1, Baseline+MEM can improve over Baseline on all test datasets, and the average improvement is 1.37 BLEU points.', 'The results show that our method could significantly outperform the baseline model.', '5.2 Results on Sub-words,2, We also test the proposed method when the translation unit is sub-word.', 'The baseline and our method using sub-word as translation unit are respectively denoted by Baseline(subword) and Baseline(sub-word)+MEM.', 'The results are shown in row 4 and row 7.', 'From the results, Baseline(sub-word)+MEM outperforms Baseline(sub-word) by 1.01 BLEU points, indicating that adopting sub-words as translation units still faces the problem of troublesome tokens, and our method could alleviate this problem.', '5.3 Our Method vs. Method Using Translation Lexicon.', 'We also compare our method with Arthur et al. (2016)’s method which incorporates a translation lexicon into NMT.', 'Here, the comparison is conducted in two ways based on whether the baseline neural model is fixed or retrained.', 'Fixed Baseline.', 'Comparing Arthur(test) (row 2 in Table 1) and Baseline+MEM (row 5 in Table 1), we can see that our proposed method can surpass Arthur(test) with 1.05 BLEU points.', 'As there are three differences between our methods and Arthur(test), we take the following experiments to evaluate the effect of each difference.', 'Retrained Baseline.', "In the second comparison, we allow the baseline model to be retrained by Arthur's method (Arthur(train+test)).", 'We then implement our method using Arthur(train+test) as baseline (denoted by Arthur(train+test)+MEM).', 'Comparing the results of these two methods in Table 1 (line 3 and 6), our method is still effective on the retrained model.', 'The average gains are 0.92 BLEU points.'] | [None, None, None, ['Baseline', 'Baseline+MEM'], ['Baseline', 'Baseline+MEM', '03', '04', '05', '06', '08', '△'], ['Baseline', 'Baseline+MEM'], None, ['Baseline(sub-word)', 'Baseline(sub-word)+MEM'], ['Baseline(sub-word)', 'Baseline(sub-word)+MEM'], ['Baseline(sub-word)', 'Baseline(sub-word)+MEM', '△'], None, None, None, None, ['Arthur(test)', 'Baseline+MEM', 'Avg.'], ['Arthur(test)', 'Baseline+MEM'], None, ['Arthur(train+test)'], ['Arthur(train+test)+MEM'], ['Arthur(train+test)', 'Arthur(train+test)+MEM'], ['Arthur(train+test)+MEM', 'Arthur(train+test)', '△']] | 1 |
D18-1037table_4 | News Commentary v8 Experiment results. Seq2Seq and NMT+RNNG results are taken from Eriguchi et al. (2017), Str2Tree (string-to-linearised-tree) results (no RIBES scores) come from Aharoni and Goldberg (2017) All numbers reported here are of non-ensemble models. | 2 | [['Dataset', 'Seq2Seq'], ['Dataset', 'Str2Tree'], ['Dataset', 'NMT+RNNG'], ['Dataset', 'Seq2DRNN'], ['Dataset', 'Seq2DRNN+SynC']] | 2 | [['DE-EN', 'BLEU'], ['DE-EN', 'RIBES'], ['CS-EN', 'BLEU'], ['CS-EN', 'RIBES'], ['RU-EN', 'BLEU'], ['RU-EN', 'RIBES']] | [['16.61', '73.8', '11.22', '69.6', '12.03', '69.6'], ['16.13', '-', '11.65', '-', '11.94', '-'], ['16.41', '75.0', '12.06', '70.4', '12.46', '71.0'], ['16.90', '75.1', '11.84', '67.3', '12.04', '69.7'], ['17.21', '75.8', '12.11', '70.3', '12.96', '71.1']] | column | ['BLEU', 'RIBES', 'BLEU', 'RIBES', 'BLEU', 'RIBES'] | ['Seq2DRNN+SynC'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DE-EN || BLEU</th> <th>DE-EN || RIBES</th> <th>CS-EN || BLEU</th> <th>CS-EN || RIBES</th> <th>RU-EN || BLEU</th> <th>RU-EN || RIBES</th> </tr> </thead> <tbody> <tr> <td>Dataset || Seq2Seq</td> <td>16.61</td> <td>73.8</td> <td>11.22</td> <td>69.6</td> <td>12.03</td> <td>69.6</td> </tr> <tr> <td>Dataset || Str2Tree</td> <td>16.13</td> <td>-</td> <td>11.65</td> <td>-</td> <td>11.94</td> <td>-</td> </tr> <tr> <td>Dataset || NMT+RNNG</td> <td>16.41</td> <td>75.0</td> <td>12.06</td> <td>70.4</td> <td>12.46</td> <td>71.0</td> </tr> <tr> <td>Dataset || Seq2DRNN</td> <td>16.90</td> <td>75.1</td> <td>11.84</td> <td>67.3</td> <td>12.04</td> <td>69.7</td> </tr> <tr> <td>Dataset || Seq2DRNN+SynC</td> <td>17.21</td> <td>75.8</td> <td>12.11</td> <td>70.3</td> <td>12.96</td> <td>71.1</td> </tr> </tbody></table> | Table 4 | table_4 | D18-1037 | 7 | emnlp2018 | 3.3 Results. Table 3 and Table 4 has the BLEU (Papineni et al., 2002) and RIBES (Isozaki et al., 2010) scores. In our IWSLT2017 tests, both Seq2DRNN and Seq2DRNN+SynC produce better results than the Seq2Seq baseline model in terms of BLEU scores, while Seq2DRNN+SynC also produces better RIBES scores indicating better reordering of phrases in the output. The Seq2DRNN+SynC model performs better than the Seq2DRNN model. Both Seq2Seq and Seq2DRNN+SynC are able to produce results with lower perplexities than the baseline Seq2Seq model on the test data. In our News Commentary v8 tests, the same relative performance from Seq2DRNN(SynC) can be observed. The Seq2DRNN+SynC model is also able to out-perform the Str2Tree model proposed by Aharoni and Goldberg (2017) and NMT+RNNG by Eriguchi et al. (2017) in most cases. Note that Eriguchi et al. (2017) used dependency information instead of constituency information as presented in our work and Aharoni and Goldberg (2017)fs work. 
| [2, 1, 0, 0, 0, 1, 1, 2] | ['3.3 Results.', 'Table 3 and Table 4 has the BLEU (Papineni et al., 2002) and RIBES (Isozaki et al., 2010) scores.', 'In our IWSLT2017 tests, both Seq2DRNN and Seq2DRNN+SynC produce better results than the Seq2Seq baseline model in terms of BLEU scores, while Seq2DRNN+SynC also produces better RIBES scores indicating better reordering of phrases in the output.', 'The Seq2DRNN+SynC model performs better than the Seq2DRNN model.', 'Both Seq2Seq and Seq2DRNN+SynC are able to produce results with lower perplexities than the baseline Seq2Seq model on the test data.', 'In our News Commentary v8 tests, the same relative performance from Seq2DRNN(SynC) can be observed.', 'The Seq2DRNN+SynC model is also able to out-perform the Str2Tree model proposed by Aharoni and Goldberg (2017) and NMT+RNNG by Eriguchi et al. (2017) in most cases.', 'Note that Eriguchi et al. (2017) used dependency information instead of constituency information as presented in our work and Aharoni and Goldberg (2017)\x81fs work.'] | [None, ['BLEU', 'RIBES'], None, None, None, ['Seq2DRNN+SynC'], ['Seq2DRNN+SynC', 'Str2Tree', 'NMT+RNNG'], ['NMT+RNNG', 'Str2Tree']] | 1 |
D18-1045table_2 | Perplexity of source data as assigned by a language model (5-gram Kneser–Ney). Data generated by beam search is most predictable. | 1 | [['human data'], ['beam'], ['sampling'], ['top10'], ['beam+noise']] | 1 | [['Perplexity']] | [['75.34'], ['72.42'], ['500.17'], ['87.15'], ['2823.73']] | column | ['Perplexity'] | ['beam'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Perplexity</th> </tr> </thead> <tbody> <tr> <td>human data</td> <td>75.34</td> </tr> <tr> <td>beam</td> <td>72.42</td> </tr> <tr> <td>sampling</td> <td>500.17</td> </tr> <tr> <td>top10</td> <td>87.15</td> </tr> <tr> <td>beam+noise</td> <td>2823.73</td> </tr> </tbody></table> | Table 2 | table_2 | D18-1045 | 5 | emnlp2018 | We report the perplexity of our language model on all versions of the source data in Table 2. The results show that beam outputs receive higher probability by the language model compared to sampling, beam+noise and
real source sentences. This indicates that beam search outputs are not as rich as sampling outputs or beam+noise. This lack of variability probably explains in part why back-translations from pure beam search provide a weaker training signal than alternatives. | [1, 1, 1, 2] | ['We report the perplexity of our language model on all versions of the source data in Table 2.', 'The results show that beam outputs receive higher probability by the language model compared to sampling, beam+noise and\nreal source sentences.', 'This indicates that beam search outputs are not as rich as sampling outputs or beam+noise.', 'This lack of variability probably explains in part why back-translations from pure beam search provide a weaker training signal than alternatives.'] | [None, ['beam', 'sampling', 'beam+noise', 'human data'], ['beam', 'sampling', 'beam+noise'], None] | 1 |
D18-1045table_6 | BLEU on newstest2014 for WMT English-German (En–De) and English-French (En–Fr). The first four results use only WMT bitext (WMT’14, except for b, c, d in En–De which train on WMT’16). DeepL uses proprietary high-quality bitext and our result relies on back-translation with 226M newscrawl sentences for En–De and 31M for En–Fr. We also show detokenized BLEU (SacreBLEU). | 1 | [['a. Gehring et al. (2017)'], ['b. Vaswani et al. (2017)'], ['c. Ahmed et al. (2017)'], ['d. Shaw et al. (2018)'], ['DeepL'], ['Our result'], ['detok. sacreBLEU']] | 1 | [['En?De'], ['En?Fr']] | [['25.2', '40.5'], ['28.4', '41'], ['28.9', '41.4'], ['29.2', '41.5'], ['33.3', '45.9'], ['35', '45.6'], ['33.8', '43.8']] | column | ['BLEU', 'BLEU'] | ['Our result'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>En?De</th> <th>En?Fr</th> </tr> </thead> <tbody> <tr> <td>a. Gehring et al. (2017)</td> <td>25.2</td> <td>40.5</td> </tr> <tr> <td>b. Vaswani et al. (2017)</td> <td>28.4</td> <td>41</td> </tr> <tr> <td>c. Ahmed et al. (2017)</td> <td>28.9</td> <td>41.4</td> </tr> <tr> <td>d. Shaw et al. (2018)</td> <td>29.2</td> <td>41.5</td> </tr> <tr> <td>DeepL</td> <td>33.3</td> <td>45.9</td> </tr> <tr> <td>Our result</td> <td>35</td> <td>45.6</td> </tr> <tr> <td>detok. sacreBLEU</td> <td>33.8</td> <td>43.8</td> </tr> </tbody></table> | Table 6 | table_6 | D18-1045 | 9 | emnlp2018 | We upsample the bitext with a rate of 16 so that we observe every bitext sentence 16 times more often than each monolingual sentence. This results in a new state of the art of 35 BLEU on newstest2014 by using only WMT benchmark data. For comparison, DeepL, a commercial translation engine relying on high quality bilingual training data, achieves 33.3 tokenized BLEU. Table 6 summarizes our results and compares to other work in the literature. This shows that back-translation with sampling can result in high-quality translation models based on benchmark data only. | [2, 1, 1, 1, 2] | ['We upsample the bitext with a rate of 16 so that we observe every bitext sentence 16 times more often than each monolingual sentence.', 'This results in a new state of the art of 35 BLEU on newstest2014 by using only WMT benchmark data.', 'For comparison, DeepL, a commercial translation engine relying on high quality bilingual training data, achieves 33.3 tokenized BLEU.', 'Table 6 summarizes our results and compares to other work in the literature.', 'This shows that back-translation with sampling can result in high-quality translation models based on benchmark data only.'] | [None, ['Our result'], ['DeepL'], ['Our result'], None] | 1 |
D18-1046table_2 | Comparing different approaches on the NEWS 2015 dataset using acc@1 as the evaluation metric. “Ours” denotes the Seq2Seq(HMA) model, with (.) denoting the inference strategy. Numbers for RPI-ISI are from Lin et al. (2016). | 3 | [['Approach', 'Full Supervision Setting (5-10k examples)', 'Seq2Seq w/ Att (U)'], ['Approach', 'Full Supervision Setting (5-10k examples)', 'P&R (U)'], ['Approach', 'Full Supervision Setting (5-10k examples)', 'DirecTL+ (U)'], ['Approach', 'Full Supervision Setting (5-10k examples)', 'RPI-ISI (U)'], ['Approach', 'Full Supervision Setting (5-10k examples)', 'Ours(U)'], ['Approach', 'Approaches Using Constrained Inference', 'RPI-ISI + EL'], ['Approach', 'Approaches Using Constrained Inference', 'Ours(DC)'], ['Approach', 'Low-Resource Setting (500 examples)', 'Seq2Seq w/ Att (U)'], ['Approach', 'Low-Resource Setting (500 examples)', 'P&R (U)'], ['Approach', 'Low-Resource Setting (500 examples)', 'DirecTL+ (U)'], ['Approach', 'Low-Resource Setting (500 examples)', 'RPI-ISI (U)'], ['Approach', 'Low-Resource Setting (500 examples)', 'Ours(U) + Boot.']] | 2 | [['Lang.', 'hi'], ['Lang.', 'kn'], ['Lang.', 'bn'], ['Lang.', 'ta'], ['Lang.', 'he'], ['Lang.', 'Avg.']] | [['35.5', '33.4', '46.1', '17.2', '20.3', '30.5'], ['37.4', '31.6', '45.4', '20.2', '18.7', '30.7'], ['38.9', '34.7', '48.4', '19.9', '16.8', '31.7'], ['40.3', '29.8', '49.4', '20.2', '21.5', '32.2'], ['42.8', '38.9', '52.4', '20.5', '23.4', '35.6'], ['44.8', '37.6', '52.0', '29.0', '37.2', '40.1'], ['51.8', '43.3', '56.6', '28.0', '36.1', '43.2'], ['17.0', '13.6', '14.5', '6.0', '9.5', '12.1'], ['21.1', '16.6', '34.2', '9.4', '13.0', '18.9'], ['26.6', '25.3', '35.5', '11.8', '10.7', '22.0'], ['29.1', '27.7', '37.7', '11.5', '16.2', '24.4'], ['40.1', '35.1', '50.3', '17.8', '22.8', '33.0']] | column | ['acc@1', 'acc@1', 'acc@1', 'acc@1', 'acc@1', 'acc@1'] | ['Ours(U) + Boot.', 'Ours(U)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Lang. || hi</th> <th>Lang. || kn</th> <th>Lang. || bn</th> <th>Lang. || ta</th> <th>Lang. || he</th> <th>Lang. 
|| Avg.</th> </tr> </thead> <tbody> <tr> <td>Approach || Full Supervision Setting (5-10k examples) || Seq2Seq w/ Att (U)</td> <td>35.5</td> <td>33.4</td> <td>46.1</td> <td>17.2</td> <td>20.3</td> <td>30.5</td> </tr> <tr> <td>Approach || Full Supervision Setting (5-10k examples) || P&R (U)</td> <td>37.4</td> <td>31.6</td> <td>45.4</td> <td>20.2</td> <td>18.7</td> <td>30.7</td> </tr> <tr> <td>Approach || Full Supervision Setting (5-10k examples) || DirecTL+ (U)</td> <td>38.9</td> <td>34.7</td> <td>48.4</td> <td>19.9</td> <td>16.8</td> <td>31.7</td> </tr> <tr> <td>Approach || Full Supervision Setting (5-10k examples) || RPI-ISI (U)</td> <td>40.3</td> <td>29.8</td> <td>49.4</td> <td>20.2</td> <td>21.5</td> <td>32.2</td> </tr> <tr> <td>Approach || Full Supervision Setting (5-10k examples) || Ours(U)</td> <td>42.8</td> <td>38.9</td> <td>52.4</td> <td>20.5</td> <td>23.4</td> <td>35.6</td> </tr> <tr> <td>Approach || Approaches Using Constrained Inference || RPI-ISI + EL</td> <td>44.8</td> <td>37.6</td> <td>52.0</td> <td>29.0</td> <td>37.2</td> <td>40.1</td> </tr> <tr> <td>Approach || Approaches Using Constrained Inference || Ours(DC)</td> <td>51.8</td> <td>43.3</td> <td>56.6</td> <td>28.0</td> <td>36.1</td> <td>43.2</td> </tr> <tr> <td>Approach || Low-Resource Setting (500 examples) || Seq2Seq w/ Att (U)</td> <td>17.0</td> <td>13.6</td> <td>14.5</td> <td>6.0</td> <td>9.5</td> <td>12.1</td> </tr> <tr> <td>Approach || Low-Resource Setting (500 examples) || P&R (U)</td> <td>21.1</td> <td>16.6</td> <td>34.2</td> <td>9.4</td> <td>13.0</td> <td>18.9</td> </tr> <tr> <td>Approach || Low-Resource Setting (500 examples) || DirecTL+ (U)</td> <td>26.6</td> <td>25.3</td> <td>35.5</td> <td>11.8</td> <td>10.7</td> <td>22.0</td> </tr> <tr> <td>Approach || Low-Resource Setting (500 examples) || RPI-ISI (U)</td> <td>29.1</td> <td>27.7</td> <td>37.7</td> <td>11.5</td> <td>16.2</td> <td>24.4</td> </tr> <tr> <td>Approach || Low-Resource Setting (500 examples) || Ours(U) + Boot.</td> <td>40.1</td> <td>35.1</td> <td>50.3</td> <td>17.8</td> <td>22.8</td> <td>33.0</td> </tr> </tbody></table> | Table 2 | table_2 | D18-1046 | 5 | emnlp2018 | 6.1 Full Supervision Setting. We compare Seq2Seq(HMA) with previous approaches when provided all available supervision, to see how it fares under standard evaluation. Results in the unconstrained inference (U) setting (Table 2 top 5 rows) shows Seq2Seq(HMA),denoted by “Ours”, outperforms previous approaches on Hindi, Kannada, and Bengali, with almost 3-4% gains. I. Improvements over the Seq2Seq with Attention (Seq2Seq w/ Att) model demonstrate the benefit of imposing the monotonicity constraint in the generation model. On Tamil and Hebrew, Seq2Seq(HMA) is at par with the best approaches, with negligible gap (∼0.3) in scores. Overall, we see that Seq2Seq(HMA) can achieve better (and sometimes competitive) scores than state-of-the-art approaches in full supervision settings. When comparing approaches which use constrained inference (Table 2, rows 6 and 7), we see that using dictionary-constrained inference (as in Ours(DC)) is more effective than using a entitylinking model for re-ranking (RPI-ISI + EL). 6.2 Low-Resource Setting. In Table 2 (rows under “Low-Resource Setting”), we evaluate different models in a low-resource setting when provided only 500 name pairs as supervision. Results are averaged over 5 different random sub-samples of 500 examples. The results clearly demonstrate that all generation models suffer a drop in performance when provided limited training data. 
Note that models like Seq2Seq with Attention suffer a larger drop than those which enforce monotonicity, suggesting that incorporating monotonicity into the inference step in the low-resource setting is essential. After bootstrapping our weak generation model using Algorithm 1, the performance improves substantially (last row in Table 2). On almost all languages, the generation model improves by at least 6%, with performance for Hindi and Bengali improving by more than 10%. Bootstrapping results for the languages are within 2-4% of the best model trained with all available supervision.". | [2, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1] | ['6.1 Full Supervision Setting.', 'We compare Seq2Seq(HMA) with previous approaches when provided all available supervision, to see how it fares under standard evaluation.', 'Results in the unconstrained inference (U) setting (Table 2 top 5 rows) shows Seq2Seq(HMA),denoted by “Ours”, outperforms previous approaches on Hindi, Kannada, and Bengali, with almost 3-4% gains. I.', 'Improvements over the Seq2Seq with Attention (Seq2Seq w/ Att) model demonstrate the benefit of imposing the monotonicity constraint in the generation model.', 'On Tamil and Hebrew, Seq2Seq(HMA) is at par with the best approaches, with negligible gap (∼0.3) in scores.', 'Overall, we see that Seq2Seq(HMA) can achieve better (and sometimes competitive) scores than state-of-the-art approaches in full supervision settings.', 'When comparing approaches which use constrained inference (Table 2, rows 6 and 7), we see that using dictionary-constrained inference (as in Ours(DC)) is more effective than using a entitylinking model for re-ranking (RPI-ISI + EL).', '6.2 Low-Resource Setting.', 'In Table 2 (rows under “Low-Resource Setting”), we evaluate different models in a low-resource setting when provided only 500 name pairs as supervision.', 'Results are averaged over 5 different random sub-samples of 500 examples.', 'The results clearly demonstrate that all generation models suffer a drop in performance when provided limited training data.', 'Note that models like Seq2Seq with Attention suffer a larger drop than those which enforce monotonicity, suggesting that incorporating monotonicity into the inference step in the low-resource setting is essential.', 'After bootstrapping our weak generation model using Algorithm 1, the performance improves substantially (last row in Table 2).', 'On almost all languages, the generation model improves by at least 6%, with performance for Hindi and Bengali improving by more than 10%.', 'Bootstrapping results for the languages are within 2-4% of the best model trained with all available supervision.".'] | [None, ['Full Supervision Setting (5-10k examples)', 'Ours(U)'], ['Seq2Seq w/ Att (U)', 'P&R (U)', 'DirecTL+ (U)', 'RPI-ISI (U)', 'Ours(U)'], ['Seq2Seq w/ Att (U)'], ['Ours(U)', 'ta', 'he', 'Seq2Seq w/ Att (U)', 'P&R (U)', 'DirecTL+ (U)', 'RPI-ISI (U)'], ['Ours(U)', 'Seq2Seq w/ Att (U)', 'P&R (U)', 'DirecTL+ (U)', 'RPI-ISI (U)'], ['Approaches Using Constrained Inference', 'RPI-ISI + EL', 'Ours(DC)'], None, ['Low-Resource Setting (500 examples)'], ['Low-Resource Setting (500 examples)', 'Avg.'], ['Seq2Seq w/ Att (U)', 'P&R (U)', 'DirecTL+ (U)', 'Ours(U)', 'Ours(U) + Boot.'], ['Seq2Seq w/ Att (U)', 'Low-Resource Setting (500 examples)', 'Full Supervision Setting (5-10k examples)'], ['Ours(U) + Boot.'], ['hi', 'kn', 'bn', 'ta', 'he', 'Ours(U) + Boot.', 'Ours(U)'], ['Ours(U) + Boot.']] | 1 |
D18-1046table_3 | Acc@1 for native and foreign words for four languages (§7.2). Ratio is native performance relative to foreign. | 1 | [['Hindi'], ['Bengali'], ['Kannada'], ['Tamil']] | 1 | [['Native'], ['Foreign'], ['Ratio']] | [['45.1', '31.4', '1.44'], ['63.1', '20.1', '3.14'], ['42.6', '23.1', '1.84'], ['24.3', '05.2', '4.67']] | column | ['Acc@1', 'Acc@1', 'Acc@1'] | ['Native', 'Foreign'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Native</th> <th>Foreign</th> <th>Ratio</th> </tr> </thead> <tbody> <tr> <td>Hindi</td> <td>45.1</td> <td>31.4</td> <td>1.44</td> </tr> <tr> <td>Bengali</td> <td>63.1</td> <td>20.1</td> <td>3.14</td> </tr> <tr> <td>Kannada</td> <td>42.6</td> <td>23.1</td> <td>1.84</td> </tr> <tr> <td>Tamil</td> <td>24.3</td> <td>05.2</td> <td>4.67</td> </tr> </tbody></table> | Table 3 | table_3 | D18-1046 | 7 | emnlp2018 | To quantify the effect of this, we annotate native and foreign names in the test split of the four Indian languages, and evaluate performance for both categories. Table 3 shows that our model performs significantly better on native names for all the languages. A possible reason for is that the source scripts were designed for writing native names (e.g., Tamil script lacks separate {ta, da, tha, dha} characters because the Tamil language does not distinguish these sounds). Furthermore, foreign names have a wide variety of origins with their own conventions as discussed in ยง7.1. The performance gap is proportionally greatest for Tamil, likely due to its script. | [2, 1, 2, 2, 2] | ['To quantify the effect of this, we annotate native and foreign names in the test split of the four Indian languages, and evaluate performance for both categories.', 'Table 3 shows that our model performs significantly better on native names for all the languages.', 'A possible reason for is that the source scripts were designed for writing native names (e.g., Tamil script lacks separate {ta, da, tha, dha} characters because the Tamil language does not distinguish these sounds).', 'Furthermore, foreign names have a wide variety of origins with their own conventions as discussed in ยง7.1.', 'The performance gap is proportionally greatest for Tamil, likely due to its script.'] | [['Native', 'Foreign', 'Hindi', 'Bengali', 'Kannada', 'Tamil'], ['Native', 'Hindi', 'Bengali', 'Kannada', 'Tamil'], ['Tamil'], None, ['Tamil']] | 1 |
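One small consistency check that these literal fields make easy: the record above stores a Ratio column described as native performance relative to foreign. The sketch below (the pasted strings and variable names are copied or adapted from that record purely for illustration, not an official loader) recomputes the ratio from the Native and Foreign columns and reproduces the stored values:

import ast

# Cell values copied from the Acc@1 record above: [Native, Foreign, Ratio] per language.
rows = ast.literal_eval(
    "[['45.1', '31.4', '1.44'], ['63.1', '20.1', '3.14'], "
    "['42.6', '23.1', '1.84'], ['24.3', '05.2', '4.67']]")
languages = ["Hindi", "Bengali", "Kannada", "Tamil"]

for lang, (native, foreign, ratio) in zip(languages, rows):
    recomputed = round(float(native) / float(foreign), 2)
    print(f"{lang}: {recomputed} (stored {ratio})")
# Recomputed ratios 1.44, 3.14, 1.84 and 4.67 match the stored column.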
D18-1047table_3 | Performance for en-pt on rare words (RARE), and the en-pt MUSE dataset, which as shown in Figure 3 contains a lot of frequent words. | 1 | [['NORMA-Linear'], ['NORMA-Highway-NN'], ['1 layer-NN'], ['1 layer-Highway-NN'], ['Artetxe et al . 2018'], ['Lazaridou et al 2015']] | 2 | [['en-pt', 'RARE'], ['en-pt', 'MUSE']] | [['57.67', '72.60'], ['49.33', '71.73'], ['48.67', '72.13'], ['49.33', '72.10'], ['47.00', '77.73'], ['48.00', '72.27']] | column | ['accuracy', 'accuracy'] | ['NORMA-Linear', 'NORMA-Highway-NN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>en-pt || RARE</th> <th>en-pt || MUSE</th> </tr> </thead> <tbody> <tr> <td>NORMA-Linear</td> <td>57.67</td> <td>72.60</td> </tr> <tr> <td>NORMA-Highway-NN</td> <td>49.33</td> <td>71.73</td> </tr> <tr> <td>1 layer-NN</td> <td>48.67</td> <td>72.13</td> </tr> <tr> <td>1 layer-Highway-NN</td> <td>49.33</td> <td>72.10</td> </tr> <tr> <td>Artetxe et al . 2018</td> <td>47.00</td> <td>77.73</td> </tr> <tr> <td>Lazaridou et al 2015</td> <td>48.00</td> <td>72.27</td> </tr> </tbody></table> | Table 3 | table_3 | D18-1047 | 7 | emnlp2018 | Table 3 shows that NORMA-Linear outperforms (Artetxe et al., 2018a) by over 10 points on the RARE words dataset. On the regular MUSE dictionary, (Artetxe et al., 2018a) is ahead by about 5 points. On RARE, (Lazaridou et al., 2015) is behind NORMA-Linear by 9 points, whereas on the MUSE dictionary performance of (Lazaridou et al., 2015) and NORMA-Linear is about the same. | [1, 1, 1] | ['Table 3 shows that NORMA-Linear outperforms (Artetxe et al., 2018a) by over 10 points on the RARE words dataset.', 'On the regular MUSE dictionary, (Artetxe et al., 2018a) is ahead by about 5 points.', 'On RARE, (Lazaridou et al., 2015) is behind NORMA-Linear by 9 points, whereas on the MUSE dictionary performance of (Lazaridou et al., 2015) and NORMA-Linear is about the same.'] | [['NORMA-Linear', 'Artetxe et al . 2018', 'RARE'], ['Artetxe et al . 2018', 'NORMA-Linear', 'MUSE'], ['Lazaridou et al 2015', 'NORMA-Linear', 'RARE', 'MUSE']] | 1 |
D18-1049table_3 | Comparison with previous works on Chinese-English translation task. The evaluation metric is caseinsensitive BLEU score. (Wang et al., 2017) use a hierarchical RNN to incorporate document-level context into RNNsearch. (Kuang et al., 2017) use a cache to exploit document-level context for RNNsearch. (Kuang et al., 2017)* is an adapted version of the cache-based method for Transformer. Note that “MT06” is not included in “All”. | 4 | [['Method', '(Wang et al. 2017)', 'Model', 'RNNsearch'], ['Method', '(Kuang et al. 2017)', 'Model', 'RNNsearch'], ['Method', '(Vaswani et al. 2017)', 'Model', 'Transformer'], ['Method', '(Kuang et al. 2017)*', 'Model', 'Transformer'], ['Method', 'this work', 'Model', 'Transformer']] | 1 | [['MT06'], ['MT02'], ['MT03'], ['MT04'], ['MT05'], ['MT08'], ['All']] | [['37.76', '-', '-', '-', '36.89', '27.57', '-'], ['-', '34.41', '-', '38.40', '32.90', '31.86', '-'], ['48.09', '48.63', '47.54', '47.79', '48.34', '38.31', '45.97'], ['48.14', '48.97', '48.05', '47.91', '48.53', '38.38', '46.37'], ['46.69', '50.96', '50.21', '49.73', '49.46', '39.69', '47.93']] | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['this work'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT06</th> <th>MT02</th> <th>MT03</th> <th>MT04</th> <th>MT05</th> <th>MT08</th> <th>All</th> </tr> </thead> <tbody> <tr> <td>Method || (Wang et al. 2017) || Model || RNNsearch</td> <td>37.76</td> <td>-</td> <td>-</td> <td>-</td> <td>36.89</td> <td>27.57</td> <td>-</td> </tr> <tr> <td>Method || (Kuang et al. 2017) || Model || RNNsearch</td> <td>-</td> <td>34.41</td> <td>-</td> <td>38.40</td> <td>32.90</td> <td>31.86</td> <td>-</td> </tr> <tr> <td>Method || (Vaswani et al. 2017) || Model || Transformer</td> <td>48.09</td> <td>48.63</td> <td>47.54</td> <td>47.79</td> <td>48.34</td> <td>38.31</td> <td>45.97</td> </tr> <tr> <td>Method || (Kuang et al. 2017)* || Model || Transformer</td> <td>48.14</td> <td>48.97</td> <td>48.05</td> <td>47.91</td> <td>48.53</td> <td>38.38</td> <td>46.37</td> </tr> <tr> <td>Method || this work || Model || Transformer</td> <td>46.69</td> <td>50.96</td> <td>50.21</td> <td>49.73</td> <td>49.46</td> <td>39.69</td> <td>47.93</td> </tr> </tbody></table> | Table 3 | table_3 | D18-1049 | 7 | emnlp2018 | As shown in Table 3, using the same data, our approach achieves significant improvements over the original Transformer model (Vaswani et al. 2017) (p < 0.01). The gain on the concatenated test set (i.e., “All”) is 1.96 BLEU points. It also outperforms the cache-based method (Kuang et al. 2017) adapted for Transformer significantly (p < 0.01), which also uses the two-step training strategy. | [1, 1, 1] | ['As shown in Table 3, using the same data, our approach achieves significant improvements over the original Transformer model (Vaswani et al. 2017) (p < 0.01).', 'The gain on the concatenated test set (i.e., “All”) is 1.96 BLEU points.', 'It also outperforms the cache-based method (Kuang et al. 2017) adapted for Transformer significantly (p < 0.01), which also uses the two-step training strategy.'] | [['this work', '(Vaswani et al. 2017)'], ['this work', '(Vaswani et al. 2017)', 'All'], ['this work', '(Kuang et al. 2017)*']] | 1 |
D18-1049table_5 | Subjective evaluation of the comparison between the original Transformer model and our model. “>” means that Transformer is better than our model, “=” means equal, and “<” means worse. | 1 | [['Human 1'], ['Human 2'], ['Human 3'], ['Overall']] | 1 | [['>'], ['='], ['<']] | [['24%', '45%', '31%'], ['20%', '55%', '25%'], ['12%', '52%', '36%'], ['19%', '51%', '31%']] | column | ['percentage', 'percentage', 'percentage'] | ['Overall'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>></th> <th>=</th> <th><</th> </tr> </thead> <tbody> <tr> <td>Human 1</td> <td>24%</td> <td>45%</td> <td>31%</td> </tr> <tr> <td>Human 2</td> <td>20%</td> <td>55%</td> <td>25%</td> </tr> <tr> <td>Human 3</td> <td>12%</td> <td>52%</td> <td>36%</td> </tr> <tr> <td>Overall</td> <td>19%</td> <td>51%</td> <td>31%</td> </tr> </tbody></table> | Table 5 | table_5 | D18-1049 | 7 | emnlp2018 | Table 5 shows the results of subjective evaluation. Three human evaluators generally made consistent judgements. On average, around 19% of Transformer’s translations are better than that of our model, 51% are equal, and 31% are worse. This evaluation confirms that exploiting document-level context helps to improve translation quality. | [1, 2, 1, 2] | ['Table 5 shows the results of subjective evaluation.', 'Three human evaluators generally made consistent judgements.', 'On average, around 19% of Transformer’s translations are better than that of our model, 51% are equal, and 31% are worse.', 'This evaluation confirms that exploiting document-level context helps to improve translation quality.'] | [None, ['Human 1', 'Human 2', 'Human 3'], ['Overall', '>', '=', '<'], None] | 1 |