Dataset fields (type; min–max value/length, or number of distinct values):

table_id_paper: string, length 15–15
caption: string, length 14–1.88k
row_header_level: int32, 1–9
row_headers: large_string, length 15–1.75k
column_header_level: int32, 1–6
column_headers: large_string, length 7–1.01k
contents: large_string, length 18–2.36k
metrics_loc: string, 2 distinct values
metrics_type: large_string, length 5–532
target_entity: large_string, length 2–330
table_html_clean: large_string, length 274–7.88k
table_name: string, 9 distinct values
table_id: string, 9 distinct values
paper_id: string, length 8–8
page_no: int32, 1–13
dir: string, 8 distinct values
description: large_string, length 103–3.8k
class_sentence: string, length 3–120
sentences: large_string, length 110–3.92k
header_mention: string, length 12–1.8k
valid: int32, 0–1
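The fields above are the dataset's schema; each record below is printed as one field value per line, in the same order. As a minimal sketch of how such a dump could be loaded, assuming the records are stored as a JSON Lines file (the file name and on-disk format are assumptions, not stated here) and that the list-valued fields arrive as Python-literal strings, exactly as they appear in the records below (the isinstance check leaves already-parsed lists untouched):

```python
import ast
import json

RECORDS_PATH = "emnlp_tables.jsonl"  # hypothetical path; the dump does not name a file

# Fields whose values look like Python list literals in the records below.
LIST_FIELDS = ["row_headers", "column_headers", "contents", "metrics_type",
               "target_entity", "class_sentence", "sentences", "header_mention"]

def parse_record(raw: dict) -> dict:
    """Convert stringified list fields into real Python lists."""
    rec = dict(raw)
    for field in LIST_FIELDS:
        value = rec.get(field)
        if isinstance(value, str):
            rec[field] = ast.literal_eval(value)
    return rec

with open(RECORDS_PATH, encoding="utf-8") as fh:
    records = [parse_record(json.loads(line)) for line in fh]

print(records[0]["table_id_paper"], records[0]["table_name"], records[0]["paper_id"])
```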
D18-1280table_6
Comparison of the performance of ICON on both IEMOCAP and SEMAINE considering different modality combinations. Note: T=Text, A=Audio, V=Video
2
[['Modality', 'T'], ['Modality', 'A'], ['Modality', 'V'], ['Modality', 'A+V'], ['Modality', 'T+A'], ['Modality', 'T+V'], ['Modality', 'T+A+V']]
3
[['IEMOCAP', 'Emotions', 'acc.'], ['IEMOCAP', 'Emotions', 'F1'], ['SEMAINE', 'DV', 'r'], ['SEMAINE', 'DA', 'r'], ['SEMAINE', 'DP', 'r'], ['SEMAINE', 'DE', 'r']]
[['58.3', '57.9', '.237', '.297', '.260', '.225'], ['50.7', '50.9', '.021', '.082', '.250', '.035'], ['41.2', '39.8', '.001', '.068', '.251', '.001'], ['52.0', '51.2', '.031', '.122', '.283', '.050'], ['63.8', '63.2', '.237', '.310', '.272', '.242'], ['61.4', '61.2', '.238', '.293', '.268', '.239'], ['64.0', '63.5', '.243', '.312', '.279', '.244']]
column
['acc.', 'F1', 'r', 'r', 'r', 'r']
['T', 'A', 'V', 'A+V', 'T+A', 'T+V', 'T+A+V']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>IEMOCAP || Emotions || acc.</th> <th>IEMOCAP || Emotions || F1</th> <th>SEMAINE || DV || r</th> <th>SEMAINE || DA || r</th> <th>SEMAINE || DP || r</th> <th>SEMAINE || DE || r</th> </tr> </thead> <tbody> <tr> <td>Modality || T</td> <td>58.3</td> <td>57.9</td> <td>.237</td> <td>.297</td> <td>.260</td> <td>.225</td> </tr> <tr> <td>Modality || A</td> <td>50.7</td> <td>50.9</td> <td>.021</td> <td>.082</td> <td>.250</td> <td>.035</td> </tr> <tr> <td>Modality || V</td> <td>41.2</td> <td>39.8</td> <td>.001</td> <td>.068</td> <td>.251</td> <td>.001</td> </tr> <tr> <td>Modality || A+V</td> <td>52.0</td> <td>51.2</td> <td>.031</td> <td>.122</td> <td>.283</td> <td>.050</td> </tr> <tr> <td>Modality || T+A</td> <td>63.8</td> <td>63.2</td> <td>.237</td> <td>.310</td> <td>.272</td> <td>.242</td> </tr> <tr> <td>Modality || T+V</td> <td>61.4</td> <td>61.2</td> <td>.238</td> <td>.293</td> <td>.268</td> <td>.239</td> </tr> <tr> <td>Modality || T+A+V</td> <td>64.0</td> <td>63.5</td> <td>.243</td> <td>.312</td> <td>.279</td> <td>.244</td> </tr> </tbody></table>
Table 6
table_6
D18-1280
7
emnlp2018
Multimodality:. We investigate the importance of multimodal features for our task. Table 6 presents the results for different combinations of modes used by ICON on IEMOCAP. As seen, the trimodal network provides the best performance which is preceded by the bimodal variants. Among unimodals, language modality performs the best, reaffirming its significance in multimodal systems. Interestingly, the audio and visual modality, on their own, do not provide good performance, but when used with text, complementary data is shared to improve overall performance.
[2, 2, 1, 1, 1, 1]
['Multimodality:.', 'We investigate the importance of multimodal features for our task.', 'Table 6 presents the results for different combinations of modes used by ICON on IEMOCAP.', 'As seen, the trimodal network provides the best performance which is preceded by the bimodal variants.', 'Among unimodals, language modality performs the best, reaffirming its significance in multimodal systems.', 'Interestingly, the audio and visual modality, on their own, do not provide good performance, but when used with text, complementary data is shared to improve overall performance.']
[None, None, ['Modality', 'T', 'A', 'V', 'A+V', 'T+A', 'T+V', 'T+A+V'], ['Modality', 'T+A+V', 'A+V', 'T+A', 'T+V'], ['Modality', 'T'], ['Modality', 'A', 'V', 'T+A', 'T+V']]
1
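Each record pairs its flattened header lists with the cell contents; the table_html_clean field shows the intended reconstruction, with multi-level headers joined by ' || '. A small illustrative sketch that rebuilds the D18-1280table_6 record above as a pandas DataFrame from row_headers, column_headers, and contents (values copied verbatim from the record; the use of pandas is an assumption for illustration, not something the dump prescribes):

```python
import pandas as pd

# Values copied from the D18-1280table_6 record above, after parsing the
# stringified lists. The ' || ' join mirrors table_html_clean's flattening.
row_headers = [['Modality', 'T'], ['Modality', 'A'], ['Modality', 'V'],
               ['Modality', 'A+V'], ['Modality', 'T+A'], ['Modality', 'T+V'],
               ['Modality', 'T+A+V']]
column_headers = [['IEMOCAP', 'Emotions', 'acc.'], ['IEMOCAP', 'Emotions', 'F1'],
                  ['SEMAINE', 'DV', 'r'], ['SEMAINE', 'DA', 'r'],
                  ['SEMAINE', 'DP', 'r'], ['SEMAINE', 'DE', 'r']]
contents = [['58.3', '57.9', '.237', '.297', '.260', '.225'],
            ['50.7', '50.9', '.021', '.082', '.250', '.035'],
            ['41.2', '39.8', '.001', '.068', '.251', '.001'],
            ['52.0', '51.2', '.031', '.122', '.283', '.050'],
            ['63.8', '63.2', '.237', '.310', '.272', '.242'],
            ['61.4', '61.2', '.238', '.293', '.268', '.239'],
            ['64.0', '63.5', '.243', '.312', '.279', '.244']]

df = pd.DataFrame(
    contents,
    index=[' || '.join(levels) for levels in row_headers],
    columns=[' || '.join(levels) for levels in column_headers],
).apply(pd.to_numeric)  # cell values are stored as strings in the dump

print(df)
```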
D18-1282table_5
Sense prediction accuracy for motion (left) and non-motion verbs (right) using different image representations. + marks results taken from Gella et al. (2018). MFS is the most frequent sense heuristic.
2
[['Features', 'Random'], ['Features', 'MFS'], ['Features', 'CNN'], ['Features', 'Gella–CNN+O'], ['Features', 'Gella–CNN+C'], ['Features', 'CNN (reproduced)'], ['Features', 'ImgObjLoc']]
1
[['Motion'], ['Non-motion']]
[['76.7 ± 0.86', '78.5 ± 0.39'], ['76.1', '80.0'], ['82.3', '80.0'], ['83.0', '80.0'], ['82.3', '80.3'], ['83.1', '79.8 ± 0.53'], ['84.8 ± 0.69', '80.4 ± 0.57']]
column
['accuracy', 'accuracy']
['ImgObjLoc']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Motion</th> <th>Non-motion</th> </tr> </thead> <tbody> <tr> <td>Features || Random</td> <td>76.7 ± 0.86</td> <td>78.5 ± 0.39</td> </tr> <tr> <td>Features || MFS</td> <td>76.1</td> <td>80.0</td> </tr> <tr> <td>Features || CNN</td> <td>82.3</td> <td>80.0</td> </tr> <tr> <td>Features || Gella–CNN+O</td> <td>83.0</td> <td>80.0</td> </tr> <tr> <td>Features || Gella–CNN+C</td> <td>82.3</td> <td>80.3</td> </tr> <tr> <td>Features || CNN (reproduced)</td> <td>83.1</td> <td>79.8 ± 0.53</td> </tr> <tr> <td>Features || ImgObjLoc</td> <td>84.8 ± 0.69</td> <td>80.4 ± 0.57</td> </tr> </tbody></table>
Table 5
table_5
D18-1282
9
emnlp2018
Table 5 gives the mean accuracy obtained on the test data (of 100 runs). Our ImgObjLoc vectors outperform all comparison models on motion verbs, including CNN-based image features and the best-performing models of (Gella et al., 2018), namely Gella–CNN+O and Gella–CNN+C (CNN features concatenated with predicted object labels and image captions, respectively). On non-motion verbs, the best models, including our own, perform only comparably to the most frequent sense heuristic. Note that we examine the simplest representation ImgObjLoc can yield, i.e., frame-semantic representations for individual images. More complex representations are left for future work.
[1, 1, 1, 2, 2]
['Table 5 gives the mean accuracy obtained on the test data (of 100 runs).', 'Our ImgObjLoc vectors outperform all comparison models on motion verbs, including CNN-based image features and the best-performing models of (Gella et al., 2018), namely Gella–CNN+O and Gella–CNN+C (CNN features concatenated with predicted object labels and image captions, respectively).', 'On non-motion verbs, the best models, including our own, perform only comparably to the most frequent sense heuristic.', 'Note that we examine the simplest representation ImgObjLoc can yield, i.e., frame-semantic representations for individual images.', 'More complex representations are left for future work.']
[None, ['ImgObjLoc', 'Random', 'MFS', 'CNN', 'Gella–CNN+O', 'Gella–CNN+C', 'CNN (reproduced)', 'Motion'], ['Non-motion', 'MFS', 'CNN', 'Gella–CNN+O', 'Gella–CNN+C', 'CNN (reproduced)', 'ImgObjLoc'], ['ImgObjLoc'], None]
1
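The last three annotation fields of each record are parallel lists over the description's sentences: one class id, one sentence, and one list of mentioned table headers (or None) per position. A short sketch of that alignment using the D18-1282table_5 record above; what class ids 1 and 2 encode is not documented in this dump, so they are simply printed:

```python
# Parallel sentence-level annotations from the D18-1282table_5 record above.
class_sentence = [1, 1, 1, 2, 2]
sentences = [
    'Table 5 gives the mean accuracy obtained on the test data (of 100 runs).',
    'Our ImgObjLoc vectors outperform all comparison models on motion verbs, ...',  # abbreviated here
    'On non-motion verbs, the best models, including our own, perform only comparably to the most frequent sense heuristic.',
    'Note that we examine the simplest representation ImgObjLoc can yield, i.e., frame-semantic representations for individual images.',
    'More complex representations are left for future work.',
]
header_mention = [
    None,
    ['ImgObjLoc', 'Random', 'MFS', 'CNN', 'Gella–CNN+O', 'Gella–CNN+C', 'CNN (reproduced)', 'Motion'],
    ['Non-motion', 'MFS', 'CNN', 'Gella–CNN+O', 'Gella–CNN+C', 'CNN (reproduced)', 'ImgObjLoc'],
    ['ImgObjLoc'],
    None,
]

# Iterate over the aligned triples: class id, sentence, headers mentioned in it.
for cls, sent, mentions in zip(class_sentence, sentences, header_mention):
    print(cls, '|', sent[:60], '|', mentions)
```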
D18-1283table_4
Results from the human subject study on common ground.
1
[['Easy'], ['Hard']]
1
[['Attenton'], ['CVAE'], ['CVAE+SV'], ['Gold']]
[['0.665', '0.776', '0.818', '0.888'], ['0.576', '0.718', '0.788', '0.841']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['CVAE', 'CVAE+SV']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Attenton</th> <th>CVAE</th> <th>CVAE+SV</th> <th>Gold</th> </tr> </thead> <tbody> <tr> <td>Easy</td> <td>0.665</td> <td>0.776</td> <td>0.818</td> <td>0.888</td> </tr> <tr> <td>Hard</td> <td>0.576</td> <td>0.718</td> <td>0.788</td> <td>0.841</td> </tr> </tbody></table>
Table 4
table_4
D18-1283
9
emnlp2018
7.2 Experimental Results. Table 4 shows the comparison results among various models and the upper bound where the gold commonsense evidence provided to the human. It’s not surprising that performance on common ground is worse in the hard configuration as the distracting verbs are more similar to the target action. The CVAE-based method is better than the attention-based method in facilitating common ground.
[2, 1, 1, 1]
['7.2 Experimental Results.', 'Table 4 shows the comparison results among various models and the upper bound where the gold commonsense evidence provided to the human.', 'It’s not surprising that performance on common ground is worse in the hard configuration as the distracting verbs are more similar to the target action.', 'The CVAE-based method is better than the attention-based method in facilitating common ground.']
[None, ['Attenton', 'CVAE', 'CVAE+SV', 'Gold'], ['Hard', 'Attenton', 'CVAE', 'CVAE+SV', 'Gold'], ['CVAE', 'Attenton']]
1
D18-1289table_3
Performance of the linear SVM regression model and the avg score at different agreements.
1
[['mean abs err 1'], ['Spearman 1'], ['mean abs err 2'], ['Spearman 2'], ['mean abs err 3'], ['Spearman 3'], ['avg mean abs err'], ['avg Spearman']]
1
[['IT-10'], ['IT-14'], ['EN-10'], ['EN-14']]
[['0.77', '0.78', '0.71', '0.68'], ['0.57', '0.64', '0.68', '0.64'], ['0.79', '0.80', '0.70', '0.70'], ['0.55', '0.63', '0.67', '0.73'], ['0.85', '0.75', '0.77', '0.60'], ['0.55', '0.64', '0.61', '0.71'], ['0.80', '0.78', '0.72', '0.66'], ['0.56', '0.63', '0.65', '0.69']]
row
['mean abs err 1', 'Spearman 1', 'mean abs err 2', 'Spearman 2', 'mean abs err 3', 'Spearman 3', 'avg mean abs err', 'avg Spearman']
['IT-10', 'IT-14', 'EN-10', 'EN-14']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>IT-10</th> <th>IT-14</th> <th>EN-10</th> <th>EN-14</th> </tr> </thead> <tbody> <tr> <td>mean abs err 1</td> <td>0.77</td> <td>0.78</td> <td>0.71</td> <td>0.68</td> </tr> <tr> <td>Spearman 1</td> <td>0.57</td> <td>0.64</td> <td>0.68</td> <td>0.64</td> </tr> <tr> <td>mean abs err 2</td> <td>0.79</td> <td>0.80</td> <td>0.70</td> <td>0.70</td> </tr> <tr> <td>Spearman 2</td> <td>0.55</td> <td>0.63</td> <td>0.67</td> <td>0.73</td> </tr> <tr> <td>mean abs err 3</td> <td>0.85</td> <td>0.75</td> <td>0.77</td> <td>0.60</td> </tr> <tr> <td>Spearman 3</td> <td>0.55</td> <td>0.64</td> <td>0.61</td> <td>0.71</td> </tr> <tr> <td>avg mean abs err</td> <td>0.80</td> <td>0.78</td> <td>0.72</td> <td>0.66</td> </tr> <tr> <td>avg Spearman</td> <td>0.56</td> <td>0.63</td> <td>0.65</td> <td>0.69</td> </tr> </tbody></table>
Table 3
table_3
D18-1289
7
emnlp2018
5.1 Predicting Human Complexity Judgmen. To asses the contribution of the linguistic features to predict the judgment of sentence complexity we trained a linear SVM regression model with default parameters. We performed a 3-fold cross validation over each subset of agreed sentences at agreement 10 and 14. We measured two performance metrics: the mean absolute error to evaluate the accuracy of the model to predict the same complexity judgment assigned by humans; the Spearman correlation to evaluate the correlation between the ranking of features produced by the regression model with the ranking produced by the human judgments. Table 3 reports the results and the average score of the two metrics. As it can be seen, the model is very accurate and achieves a very high correlation (>0.56 with p <0.001) with an average error difference (avg mean abs err) below 1. In particular, the model obtained higher performance in predicting the ranking of features extracted from sentences at agreement 14. This might be due to the fact these sentences are characterized by a more uniform distribution of linguistic phenomena and that these phenomena contribute to predict the same judgment of complexity. This is in line with the results obtained by the SVM classifier in predicting agreement (Table 2). This is particularly the case of English and it possibly suggests that the set of sentences similarly judged by humans are characterized by a lower variability of the values of the features.
[2, 2, 2, 2, 1, 1, 1, 2, 2, 2]
['5.1 Predicting Human Complexity Judgmen.', 'To asses the contribution of the linguistic features to predict the judgment of sentence complexity we trained a linear SVM regression model with default parameters.', 'We performed a 3-fold cross validation over each subset of agreed sentences at agreement 10 and 14.', 'We measured two performance metrics: the mean absolute error to evaluate the accuracy of the model to predict the same complexity judgment assigned by humans; the Spearman correlation to evaluate the correlation between the ranking of features produced by the regression model with the ranking produced by the human judgments.', 'Table 3 reports the results and the average score of the two metrics.', 'As it can be seen, the model is very accurate and achieves a very high correlation (>0.56 with p <0.001) with an average error difference (avg mean abs err) below 1.', 'In particular, the model obtained higher performance in predicting the ranking of features extracted from sentences at agreement 14.', 'This might be due to the fact these sentences are characterized by a more uniform distribution of linguistic phenomena and that these phenomena contribute to predict the same judgment of complexity.', 'This is in line with the results obtained by the SVM classifier in predicting agreement (Table 2).', 'This is particularly the case of English and it possibly suggests that the set of sentences similarly judged by humans are characterized by a lower variability of the values of the features.']
[None, None, ['IT-10', 'IT-14', 'EN-10', 'EN-14'], ['mean abs err 1', 'Spearman 1', 'mean abs err 2', 'Spearman 2', 'mean abs err 3', 'Spearman 3'], ['mean abs err 1', 'Spearman 1', 'mean abs err 2', 'Spearman 2', 'mean abs err 3', 'Spearman 3', 'avg mean abs err', 'avg Spearman'], ['Spearman 1', 'Spearman 2', 'Spearman 3', 'avg Spearman', 'avg mean abs err'], ['IT-14', 'EN-14'], None, None, ['EN-10', 'EN-14']]
1
D18-1292table_1
Development results for different systems using posterior inference on constituents (PIoC).
2
[['System', 'Best'], ['System', 'Best w/ PIoC'], ['System', 'All w/ PIoC'], ['System', 'All w/ PIoC w/o best']]
1
[['Rec'], ['Prec'], ['F1']]
[['73.65', '55.66', '63.40'], ['73.59', '56.41', '63.87'], ['72.99', '59.21', '65.38'], ['73.00', '59.06', '65.29']]
column
['Rec', 'Prec', 'F1']
['Best', 'Best w/ PIoC', 'All w/ PIoC', 'All w/ PIoC w/o best']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rec</th> <th>Prec</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>System || Best</td> <td>73.65</td> <td>55.66</td> <td>63.40</td> </tr> <tr> <td>System || Best w/ PIoC</td> <td>73.59</td> <td>56.41</td> <td>63.87</td> </tr> <tr> <td>System || All w/ PIoC</td> <td>72.99</td> <td>59.21</td> <td>65.38</td> </tr> <tr> <td>System || All w/ PIoC w/o best</td> <td>73.00</td> <td>59.06</td> <td>65.29</td> </tr> </tbody></table>
Table 1
table_1
D18-1292
8
emnlp2018
Table 1 shows parsing results on the WSJ20dev dataset. The Best result is from an arbitrary sample at convergence of the oracle best run. The Best with PIoC is the same run, but with PIoC to aggregate 100 posterior samples at convergence. All with PIoC uses 100 posterior samples from all of the 10 chosen runs, and finally All with PIoC without best excludes the best run in PIoC calculation. There is almost a point of gain in precision going from Best to Best with PIoC with virtually no recall loss, showing that the posterior uncertainty is helpful in flattening binary trees. As more samples from the posterior are collected, as shown in All with PIoC without best, the precision gain is even more substantial. This shows that with PIoC there is no need to know which sample from which run is the best. Model selection in this case is only needed to weed out the runs with very low likelihood.
[1, 2, 2, 2, 1, 1, 2, 2]
['Table 1 shows parsing results on the WSJ20dev dataset.', 'The Best result is from an arbitrary sample at convergence of the oracle best run.', 'The Best with PIoC is the same run, but with PIoC to aggregate 100 posterior samples at convergence.', 'All with PIoC uses 100 posterior samples from all of the 10 chosen runs, and finally All with PIoC without best excludes the best run in PIoC calculation.', 'There is almost a point of gain in precision going from Best to Best with PIoC with virtually no recall loss, showing that the posterior uncertainty is helpful in flattening binary trees.', 'As more samples from the posterior are collected, as shown in All with PIoC without best, the precision gain is even more substantial.', 'This shows that with PIoC there is no need to know which sample from which run is the best.', 'Model selection in this case is only needed to weed out the runs with very low likelihood.']
[None, None, None, None, ['Best', 'Best w/ PIoC', 'Rec', 'Prec'], ['All w/ PIoC w/o best', 'Prec', 'Best w/ PIoC'], ['All w/ PIoC w/o best'], None]
1
D18-1296table_2
Recognition results with standard and session-based LSTM-LMs, measured by word error rates (WER).
4
[['Word encoding', 'Letter 3gram', 'Model', 'LSTM-LM'], ['Word encoding', 'Letter 3gram', 'Model', 'Session LSTM-LM'], ['Word encoding', 'Letter 3gram', 'Model', 'Session LSTM-LM 2nd iteration'], ['Word encoding', 'One-hot', 'Model', 'LSTM-LM'], ['Word encoding', 'One-hot', 'Model', 'Session LSTM-LM'], ['Word encoding', 'One-hot', 'Model', 'Session LSTM-LM 2nd iteration'], ['Word encoding', 'Letter 3gram + One-hot', 'Model', 'LSTM-LM'], ['Word encoding', 'Letter 3gram + One-hot', 'Model', 'Session LSTM-LM'], ['Word encoding', 'Letter 3gram + One-hot', 'Model', 'LSTM-LM + Session LSTM-LM']]
3
[['WER', 'dev set', '-'], ['WER', 'test', 'SWB'], ['WER', 'test', 'CH']]
[['10.01', '6.88', '12.79'], ['9.67', '6.81', '12.54'], ['9.66', '6.77', '12.56'], ['9.81', '6.89', '13.02'], ['9.47', '6.81', '12.60'], ['9.50', '6.83', '12.73'], ['9.66', '6.63', '12.77'], ['9.28', '6.52', '12.34'], ['9.22', '6.45', '12.11']]
column
['WER', 'WER', 'WER']
['Session LSTM-LM', 'Session LSTM-LM 2nd iteration', 'LSTM-LM + Session LSTM-LM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WER || devset</th> <th>WER test || SWB</th> <th>WER test || CH</th> </tr> </thead> <tbody> <tr> <td>Word encoding || Letter 3gram || Model || LSTM-LM</td> <td>10.01</td> <td>6.88</td> <td>12.79</td> </tr> <tr> <td>Word encoding || Letter 3gram || Model || Session LSTM-LM</td> <td>9.67</td> <td>6.81</td> <td>12.54</td> </tr> <tr> <td>Word encoding || Letter 3gram || Model || Session LSTM-LM 2nd iteration</td> <td>9.66</td> <td>6.77</td> <td>12.56</td> </tr> <tr> <td>Word encoding || One-hot || Model || LSTM-LM</td> <td>9.81</td> <td>6.89</td> <td>13.02</td> </tr> <tr> <td>Word encoding || One-hot || Model || Session LSTM-LM</td> <td>9.47</td> <td>6.81</td> <td>12.60</td> </tr> <tr> <td>Word encoding || One-hot || Model || Session LSTM-LM 2nd iteration</td> <td>9.50</td> <td>6.83</td> <td>12.73</td> </tr> <tr> <td>Word encoding || Letter 3gram + One-hot || Model || LSTM-LM</td> <td>9.66</td> <td>6.63</td> <td>12.77</td> </tr> <tr> <td>Word encoding || Letter 3gram + One-hot || Model || Session LSTM-LM</td> <td>9.28</td> <td>6.52</td> <td>12.34</td> </tr> <tr> <td>Word encoding || Letter 3gram + One-hot || Model || LSTM-LM + Session LSTM-LM</td> <td>9.22</td> <td>6.45</td> <td>12.11</td> </tr> </tbody></table>
Table 2
table_2
D18-1296
4
emnlp2018
Table 2 presents recognition results, comparing baseline LSTM-LMs to the full session-based LSTM-LMs. Both the letter-trigram and one-word word encoding versions are reported. The different models may also be used jointly, using log-linear score combination in rescoring, shown in the third section of the table. We also tried iterating the session LM rescoring, after the recognized word histories were updated from the first rescoring pass (shown as “2nd iteration” in the table). Results show that the session-based LM yields between 1% and 4% relative word error reduction for the two word encodings, and test sets. When the two word encoding types are combined by log-linear combination of model scores, the gain from session-based modeling is preserved. Iterating the session LM rescoring to improve the word histories did not give consistent gains. Even though the session-based LSTM subsumes all the information used in the standard LSTM, there is an additional gain to be had from combining those two model types (last row in the table). Thus, the overall gain from adding the session-based models to the two baseline models is 3-5% relative word error reduction.
[1, 1, 1, 1, 1, 2, 1, 1, 1]
['Table 2 presents recognition results, comparing baseline LSTM-LMs to the full session-based LSTM-LMs.', 'Both the letter-trigram and one-word word encoding versions are reported.', 'The different models may also be used jointly, using log-linear score combination in rescoring, shown in the third section of the table.', 'We also tried iterating the session LM rescoring, after the recognized word histories were updated from the first rescoring pass (shown as “2nd iteration” in the table).', 'Results show that the session-based LM yields between 1% and 4% relative word error reduction for the two word encodings, and test sets.', 'When the two word encoding types are combined by log-linear combination of model scores, the gain from session-based modeling is preserved.', 'Iterating the session LM rescoring to improve the word histories did not give consistent gains.', 'Even though the session-based LSTM subsumes all the information used in the standard LSTM, there is an additional gain to be had from combining those two model types (last row in the table).', 'Thus, the overall gain from adding the session-based models to the two baseline models is 3-5% relative word error reduction.']
[['LSTM-LM', 'Session LSTM-LM'], ['Letter 3gram', 'One-hot'], ['Letter 3gram + One-hot'], ['Session LSTM-LM 2nd iteration'], ['Session LSTM-LM', 'Letter 3gram', 'One-hot', 'Letter 3gram + One-hot', 'WER', 'test', 'SWB', 'CH'], ['Letter 3gram + One-hot', 'Session LSTM-LM'], ['Session LSTM-LM', 'Session LSTM-LM 2nd iteration'], ['LSTM-LM + Session LSTM-LM'], ['Letter 3gram + One-hot', 'LSTM-LM', 'Session LSTM-LM', 'LSTM-LM + Session LSTM-LM']]
1
D18-1303table_2
Multi-label classification results.
2
[['Model', 'Random Forest'], ['Model', 'CNN'], ['Model', 'RNN'], ['Model', 'CNN-RNN'], ['Model', 'CNN-RNN (bidirec + char)']]
1
[['Exact Match'], ['Hamming']]
[['35.0', '70.2'], ['53.7', '80.2'], ['57.1', '81.5'], ['59.2', '82.3'], ['62.0', '82.5']]
column
['accuracy', 'accuracy']
['CNN-RNN', 'CNN-RNN (bidirec + char)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Exact Match</th> <th>Hamming</th> </tr> </thead> <tbody> <tr> <td>Model || Random Forest</td> <td>35.0</td> <td>70.2</td> </tr> <tr> <td>Model || CNN</td> <td>53.7</td> <td>80.2</td> </tr> <tr> <td>Model || RNN</td> <td>57.1</td> <td>81.5</td> </tr> <tr> <td>Model || CNN-RNN</td> <td>59.2</td> <td>82.3</td> </tr> <tr> <td>Model || CNN-RNN (bidirec + char)</td> <td>62.0</td> <td>82.5</td> </tr> </tbody></table>
Table 2
table_2
D18-1303
3
emnlp2018
See Table 2 for multi-label classification results, where the Hamming score for the multi-label CNN-RNN model is 82.5%, showing potential for real-world use as well as substantial future research scope.
[1]
['See Table 2 for multi-label classification results, where the Hamming score for the multi-label CNN-RNN model is 82.5%, showing potential for real-world use as well as substantial future research scope.']
[['CNN-RNN (bidirec + char)', 'Hamming']]
1
D18-1309table_2
Performance comparison of the state-ofthe-art nested NER models on the test dataset.
2
[['Model', 'Exhaustive Model'], ['Model', 'Ju et al. (2018)'], ['Model', 'Katiyar and Cardie'], ['Model', 'Muis and Lu (2017)'], ['Model', 'Lu and Roth (2015)'], ['Model', 'Finkel and Manning']]
1
[['P(%)'], ['R(%)'], ['F(%)']]
[['93.2', '64.0', '77.1'], ['78.5', '71.3', '74.7'], ['76.7', '71.1', '73.8'], ['75.4', '66.8', '70.8'], ['72.5', '65.2', '68.7'], ['75.4', '65.9', '70.3']]
column
['P(%)', 'R(%)', 'F(%)']
['Exhaustive Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P(%)</th> <th>R(%)</th> <th>F(%)</th> </tr> </thead> <tbody> <tr> <td>Model || Exhaustive Model</td> <td>93.2</td> <td>64.0</td> <td>77.1</td> </tr> <tr> <td>Model || Ju et al. (2018)</td> <td>78.5</td> <td>71.3</td> <td>74.7</td> </tr> <tr> <td>Model || Katiyar and Cardie</td> <td>76.7</td> <td>71.1</td> <td>73.8</td> </tr> <tr> <td>Model || Muis and Lu (2017)</td> <td>75.4</td> <td>66.8</td> <td>70.8</td> </tr> <tr> <td>Model || Lu and Roth (2015)</td> <td>72.5</td> <td>65.2</td> <td>68.7</td> </tr> <tr> <td>Model || Finkel and Manning</td> <td>75.4</td> <td>65.9</td> <td>70.3</td> </tr> </tbody></table>
Table 2
table_2
D18-1309
4
emnlp2018
4.1 Nested NER. Table 2 shows the comparison of our model with several previous state-of-the-art nested NER models on the test dataset. Our model outperforms the state-of-the-art models in terms of F-score. Our results in Table 2 are based on bidirectional LSTM with character embeddings and the maximum region size is 10.
[2, 1, 1, 2]
['4.1 Nested NER.', 'Table 2 shows the comparison of our model with several previous state-of-the-art nested NER models on the test dataset.', 'Our model outperforms the state-of-the-art models in terms of F-score.', 'Our results in Table 2 are based on bidirectional LSTM with character embeddings and the maximum region size is 10.']
[None, ['Exhaustive Model', 'Ju et al. (2018)', 'Katiyar and Cardie', 'Muis and Lu (2017)', 'Lu and Roth (2015)', 'Finkel and Manning'], ['Exhaustive Model', 'F(%)'], ['Exhaustive Model']]
1
D18-1309table_3
Performances of our model on different entity level on the test dataset.
2
[['Entity Level', 'Single-token'], ['Entity Level', 'Multi-token'], ['Entity Level', 'Top Level'], ['Entity Level', 'Nested'], ['Entity Level', 'All entities']]
1
[['P(%)'], ['R(%)'], ['F(%)']]
[['91.6', '58.4', '69.9'], ['95.9', '65.8', '77.9'], ['92.7', '69.8', '79.3'], ['94.3', '59.3', '72.7'], ['93.2', '64.0', '77.1']]
column
['P(%)', 'R(%)', 'F(%)']
['Entity Level']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P(%)</th> <th>R(%)</th> <th>F(%)</th> </tr> </thead> <tbody> <tr> <td>Entity Level || Single-token</td> <td>91.6</td> <td>58.4</td> <td>69.9</td> </tr> <tr> <td>Entity Level || Multi-token</td> <td>95.9</td> <td>65.8</td> <td>77.9</td> </tr> <tr> <td>Entity Level || Top Level</td> <td>92.7</td> <td>69.8</td> <td>79.3</td> </tr> <tr> <td>Entity Level || Nested</td> <td>94.3</td> <td>59.3</td> <td>72.7</td> </tr> <tr> <td>Entity Level || All entities</td> <td>93.2</td> <td>64.0</td> <td>77.1</td> </tr> </tbody></table>
Table 3
table_3
D18-1309
4
emnlp2018
Table 3 describes the performances of our model on different entity levels on the test dataset. The model performs well on multi-token and top-level entities. This is interesting because they are often considered difficult for sequential labeling models.
[1, 1, 2]
['Table 3 describes the performances of our model on different entity levels on the test dataset.', 'The model performs well on multi-token and top-level entities.', 'This is interesting because they are often considered difficult for sequential labeling models.']
[['Single-token', 'Multi-token', 'Top Level', 'Nested', 'All entities'], ['Multi-token', 'Top Level'], None]
1
D18-1309table_4
Categorical performances on the GENIA test dataset.
2
[['Label', 'DNA'], ['Label', 'RNA'], ['Label', 'cell line'], ['Label', 'cell type'], ['Label', 'protein']]
1
[['P(%)'], ['R(%)'], ['F(%)'], ['F&M F(%)']]
[['92.6', '58.7', '71.8', '65.2'], ['98.8', '57.1', '72.4', '74.7'], ['94.6', '53.1', '67.9', '64.0'], ['88.4', '70.0', '78.1', '67.1'], ['94.1', '70.8', '80.8', '73.8']]
column
['P(%)', 'R(%)', 'F(%)', 'F&M F(%)']
['DNA', 'RNA', 'cell line', 'cell type', 'protein']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P(%)</th> <th>R(%)</th> <th>F(%)</th> <th>F&amp;M F(%)</th> </tr> </thead> <tbody> <tr> <td>Label || DNA</td> <td>92.6</td> <td>58.7</td> <td>71.8</td> <td>65.2</td> </tr> <tr> <td>Label || RNA</td> <td>98.8</td> <td>57.1</td> <td>72.4</td> <td>74.7</td> </tr> <tr> <td>Label || cell line</td> <td>94.6</td> <td>53.1</td> <td>67.9</td> <td>64.0</td> </tr> <tr> <td>Label || cell type</td> <td>88.4</td> <td>70.0</td> <td>78.1</td> <td>67.1</td> </tr> <tr> <td>Label || protein</td> <td>94.1</td> <td>70.8</td> <td>80.8</td> <td>73.8</td> </tr> </tbody></table>
Table 4
table_4
D18-1309
4
emnlp2018
Table 4 shows the performances on the five entity types on the test dataset. We here show the performance by Finkel and Manning (2009) (F&M) for the reference. Our system performs better than their model except for the RNA type.
[1, 1, 1]
['Table 4 shows the performances on the five entity types on the test dataset.', 'We here show the performance by Finkel and Manning (2009) (F&M) for the reference.', 'Our system performs better than their model except for the RNA type.']
[['DNA', 'RNA', 'cell line', 'cell type', 'protein'], ['F&M F(%)'], ['F(%)', 'F&M F(%)', 'DNA', 'cell line', 'cell type', 'protein']]
1
D18-1309table_5
Performance of our model with different maximum region sizes on the development dataset. Ratio refers to the coverage ratio of entity mentions.
2
[['Region', 'size = 3'], ['Region', 'size = 6'], ['Region', 'size = 8'], ['Region', 'size = 10']]
1
[['Ratio(%)'], ['P(%)'], ['R(%)'], ['F(%)']]
[['89.6', '92.9', '69.8', '79.5'], ['98.9', '93.6', '66.7', '77.5'], ['99.4', '93.7', '66.5', '77.6'], ['100', '93.5', '67.6', '78.2']]
column
['Ratio(%)', 'P(%)', 'R(%)', 'F(%)']
['Region']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ratio(%)</th> <th>P(%)</th> <th>R(%)</th> <th>F(%)</th> </tr> </thead> <tbody> <tr> <td>Region || size = 3</td> <td>89.6</td> <td>92.9</td> <td>69.8</td> <td>79.5</td> </tr> <tr> <td>Region || size = 6</td> <td>98.9</td> <td>93.6</td> <td>66.7</td> <td>77.5</td> </tr> <tr> <td>Region || size = 8</td> <td>99.4</td> <td>93.7</td> <td>66.5</td> <td>77.6</td> </tr> <tr> <td>Region || size = 10</td> <td>100</td> <td>93.5</td> <td>67.6</td> <td>78.2</td> </tr> </tbody></table>
Table 5
table_5
D18-1309
4
emnlp2018
Table 5 shows the coverage ratio and the performance with different maximum region sizes. Since the average entity mention length of GENIA dataset is less than 4, the system can cover almost all the entities for the maximum sizes of 6 or more. The longer maximum region size is desirable to cover all the mentions, but it requires more computational costs. Fortunately, the performance did not degrade with the long maximum region size, despite the fact that it introduces more out-of-entity regions.
[1, 2, 2, 1]
['Table 5 shows the coverage ratio and the performance with different maximum region sizes.', 'Since the average entity mention length of GENIA dataset is less than 4, the system can cover almost all the entities for the maximum sizes of 6 or more.', 'The longer maximum region size is desirable to cover all the mentions, but it requires more computational costs.', 'Fortunately, the performance did not degrade with the long maximum region size, despite the fact that it introduces more out-of-entity regions.']
[['Ratio(%)', 'Region', 'size = 3', 'size = 6', 'size = 8', 'size = 10'], None, ['Region'], ['Region', 'size = 6', 'size = 8', 'size = 10']]
1
D18-1309table_6
Performance of our model with different model architectures on the development dataset. (cid:63) indicates results using character embeddings.
2
[['Setting', 'Bi-LSTM'], ['Setting', 'Bi-LSTM + Character*'], ['Setting', 'Boundary*'], ['Setting', 'Inside*'], ['Setting', 'Boundary+Inside*']]
1
[['P(%)'], ['R(%)'], ['F(%)']]
[['94.1', '65.7', '77.1'], ['93.5', '67.6', '78.2'], ['94.1', '54.3', '68.5'], ['93.2', '46.4', '61.2'], ['93.5', '67.6', '78.2']]
column
['P(%)', 'R(%)', 'F(%)']
['Bi-LSTM', 'Bi-LSTM + Character*']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P(%)</th> <th>R(%)</th> <th>F(%)</th> </tr> </thead> <tbody> <tr> <td>Setting || Bi-LSTM</td> <td>94.1</td> <td>65.7</td> <td>77.1</td> </tr> <tr> <td>Setting || Bi-LSTM + Character*</td> <td>93.5</td> <td>67.6</td> <td>78.2</td> </tr> <tr> <td>Setting || Boundary*</td> <td>94.1</td> <td>54.3</td> <td>68.5</td> </tr> <tr> <td>Setting || Inside*</td> <td>93.2</td> <td>46.4</td> <td>61.2</td> </tr> <tr> <td>Setting || Boundary+Inside*</td> <td>93.5</td> <td>67.6</td> <td>78.2</td> </tr> </tbody></table>
Table 6
table_6
D18-1309
4
emnlp2018
Ablations on character embeddings in Table 6 also show the importance of character embeddings. It also shows that both the boundary information and the inside information, i.e., average of the embeddings in a region, are necessary to improve the performance.
[1, 1]
['Ablations on character embeddings in Table 6 also show the importance of character embeddings.', 'It also shows that both the boundary information and the inside information, i.e., average of the embeddings in a region, are necessary to improve the performance.']
[['Bi-LSTM', 'Bi-LSTM + Character*', 'F(%)'], ['Boundary*', 'Inside*', 'Boundary+Inside*', 'F(%)']]
1
D18-1309table_7
Categorical and overall performances of the JNLPBA test dataset.
2
[['Label', 'DNA'], ['Label', 'RNA'], ['Label', 'cell line'], ['Label', 'cell type'], ['Label', 'protein'], ['Label', 'overall']]
1
[['P(%)'], ['R(%)'], ['F(%)']]
[['95.2', '56.8', '71.4'], ['96.1', '61.4', '75.2'], ['86.2', '44.1', '58.8'], ['96.7', '61.5', '75.3'], ['97.1', '72.2', '82.6'], ['96.4', '66.8', '78.4']]
column
['P(%)', 'R(%)', 'F(%)']
['DNA', 'RNA', 'cell line', 'cell type', 'protein']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P(%)</th> <th>R(%)</th> <th>F(%)</th> </tr> </thead> <tbody> <tr> <td>Label || DNA</td> <td>95.2</td> <td>56.8</td> <td>71.4</td> </tr> <tr> <td>Label || RNA</td> <td>96.1</td> <td>61.4</td> <td>75.2</td> </tr> <tr> <td>Label || cell line</td> <td>86.2</td> <td>44.1</td> <td>58.8</td> </tr> <tr> <td>Label || cell type</td> <td>96.7</td> <td>61.5</td> <td>75.3</td> </tr> <tr> <td>Label || protein</td> <td>97.1</td> <td>72.2</td> <td>82.6</td> </tr> <tr> <td>Label || overall</td> <td>96.4</td> <td>66.8</td> <td>78.4</td> </tr> </tbody></table>
Table 7
table_7
D18-1309
4
emnlp2018
4.3 Flat NER. We evaluated our model on JNLPBA as a flat dataset, where nested and discontinuous entities are removed. Table 7 shows the performances of our model on JNLPBA dataset. We compared our result with the state-of-the-art result of Gridach (2017) which achieved 75.8% in F-score, where our model obtained 78.4% in terms of F-score.
[2, 2, 1, 1]
['4.3 Flat NER.', 'We evaluated our model on JNLPBA as a flat dataset, where nested and discontinuous entities are removed.', 'Table 7 shows the performances of our model on JNLPBA dataset.', 'We compared our result with the state-of-the-art result of Gridach (2017) which achieved 75.8% in F-score, where our model obtained 78.4% in terms of F-score.']
[None, None, None, ['overall', 'F(%)']]
1
D18-1315table_1
We reproduce experiments in Malouf (2017) using our own implementation of the model. In contrast to Malouf (2017), who used cross-validation, we train one system for each language. Therefore, we only report standard deviation for the results in Column 2.
1
[['FINNISH NOUNS'], ['FRENCH VERBS'], ['IRISH NOUNS'], ['KHALING VERBS'], ['MALTESE VERBS'], ['P. CHINANTEC VERBS'], ['RUSSIAN NOUNS']]
1
[['Our baseline'], ['Malouf (2017)']]
[['99.50', '99.27 ±0.09'], ['99.88', '99.92 ±0.02'], ['85.11', '85.69 ±1.71'], ['99.66', '99.29 ±0.08'], ['98.65', '98.93 ±0.32'], ['91.16', '91.20 ±0.97'], ['95.90', '96.34 ±0.96']]
column
['accuracy', 'accuracy']
['Our baseline']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Our baseline</th> <th>Malouf (2017)</th> </tr> </thead> <tbody> <tr> <td>FINNISH NOUNS</td> <td>99.50</td> <td>99.27 ±0.09</td> </tr> <tr> <td>FRENCH VERBS</td> <td>99.88</td> <td>99.92 ±0.02</td> </tr> <tr> <td>IRISH NOUNS</td> <td>85.11</td> <td>85.69 ±1.71</td> </tr> <tr> <td>KHALING VERBS</td> <td>99.66</td> <td>99.29 ±0.08</td> </tr> <tr> <td>MALTESE VERBS</td> <td>98.65</td> <td>98.93 ±0.32</td> </tr> <tr> <td>P. CHINANTEC VERBS</td> <td>91.16</td> <td>91.20 ±0.97</td> </tr> <tr> <td>RUSSIAN NOUNS</td> <td>95.90</td> <td>96.34 ±0.96</td> </tr> </tbody></table>
Table 1
table_1
D18-1315
3
emnlp2018
In order to assure fair comparison, we perform the paradigm completion experiment described in Malouf (2017), where 90% of the word forms in the data set is used for training and the remaining 10% for testing. As the results in Table 1 show, our results very closely replicate those reported by Malouf (2017).
[2, 1]
['In order to assure fair comparison, we perform the paradigm completion experiment described in Malouf (2017), where 90% of the word forms in the data set is used for training and the remaining 10% for testing.', 'As the results in Table 1 show, our results very closely replicate those reported by Malouf (2017).']
[None, ['Our baseline', 'Malouf (2017)']]
1
D18-1315table_4
Overall results for filling in missing forms when the 10,000 most frequent forms are given in the inflection tables. We give the 0.99 confidence intervals as given by a one-sided t-test. Figures where one system significantly outperforms the other one are in boldface.
1
[['FINNISH NOUNS'], ['FINNISH VERBS'], ['FRENCH VERBS'], ['GERMAN NOUNS'], ['GERMAN VERBS'], ['LATVIAN NOUNS'], ['SPANISH VERBS'], ['TURKISH NOUNS']]
1
[['Our system'], ['Baseline']]
[['63.64 ± 3.24', '25.63 ± 1.63'], ['24.82 ± 1.13', '16.14 ± 1.14'], ['31.34 ± 1.18', '14.34 ± 0.87'], ['18.73 ± 1.26', '67.16 ± 3.20'], ['61.21 ± 1.85', '50.18 ± 2.58'], ['76.90 ± 5.30', '57.28 ± 2.05'], ['27.27 ± 0.72', '16.61 ± 0.70'], ['33.87 ± 2.03', '25.00 ± 2.52']]
column
['accuracy', 'accuracy']
['Our system']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Our system</th> <th>Baseline</th> </tr> </thead> <tbody> <tr> <td>FINNISH NOUNS</td> <td>63.64 ± 3.24</td> <td>25.63 ± 1.63</td> </tr> <tr> <td>FINNISH VERBS</td> <td>24.82 ± 1.13</td> <td>16.14 ± 1.14</td> </tr> <tr> <td>FRENCH VERBS</td> <td>31.34 ± 1.18</td> <td>14.34 ± 0.87</td> </tr> <tr> <td>GERMAN NOUNS</td> <td>18.73 ± 1.26</td> <td>67.16 ± 3.20</td> </tr> <tr> <td>GERMAN VERBS</td> <td>61.21 ± 1.85</td> <td>50.18 ± 2.58</td> </tr> <tr> <td>LATVIAN NOUNS</td> <td>76.90 ± 5.30</td> <td>57.28 ± 2.05</td> </tr> <tr> <td>SPANISH VERBS</td> <td>27.27 ± 0.72</td> <td>16.61 ± 0.70</td> </tr> <tr> <td>TURKISH NOUNS</td> <td>33.87 ± 2.03</td> <td>25.00 ± 2.52</td> </tr> </tbody></table>
Table 4
table_4
D18-1315
4
emnlp2018
Table 4 shows results for completing tables for common lexemes. Our system significantly outperforms the baseline on all other datasets apart from German nouns. We believe that the reason for the German outlier is the high degree of syncretism in German noun tables.
[1, 1, 2]
['Table 4 shows results for completing tables for common lexemes.', 'Our system significantly outperforms the baseline on all other datasets apart from German nouns.', 'We believe that the reason for the German outlier is the high degree of syncretism in German noun tables.']
[None, ['Our system', 'Baseline', 'FINNISH NOUNS', 'FINNISH VERBS', 'FRENCH VERBS', 'GERMAN VERBS', 'LATVIAN NOUNS', 'SPANISH VERBS', 'TURKISH NOUNS'], ['GERMAN NOUNS']]
1
D18-1316table_3
Comparison between the attack success rate and mean percentage of modifications required by the genetic attack and perturb baseline for the two tasks.
1
[['Perturb baseline'], ['Genetic attack']]
2
[['Sentiment Analysis', '% success'], ['Sentiment Analysis', '% modified'], ['Textual Entailment', '% success'], ['Textual Entailment', '% modified']]
[['52%', '19%', '-', '-'], ['97%', '14.7%', '70%', '23%']]
column
['% success', '% modified', '% success', '% modified']
['Genetic attack']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sentiment Analysis || % success</th> <th>Sentiment Analysis || % modified</th> <th>Textual Entailment || % success</th> <th>Textual Entailment || % modified</th> </tr> </thead> <tbody> <tr> <td>Perturb baseline</td> <td>52%</td> <td>19%</td> <td>-</td> <td>-</td> </tr> <tr> <td>Genetic attack</td> <td>97%</td> <td>14.7%</td> <td>70%</td> <td>23%</td> </tr> </tbody></table>
Table 3
table_3
D18-1316
4
emnlp2018
Sample outputs produced by our attack are shown in Tables 1 and 2. Additional outputs can be found in the supplementary material. Table 3 shows the attack success rate and mean percentage of modified words on each task. We compare to the Perturb baseline, which greedily applies the Perturb subroutine, to validate the use of population-based optimization. As can be seen from our results, we are able to achieve high success rate with a limited number of modifications on both tasks. In addition, the genetic algorithm significantly outperformed the Perturb baseline in both success rate and percentage of words modified, demonstrating the additional benefit yielded by using population-based optimization. Testing using a single TitanX GPU, for sentiment analysis and textual entailment, we measured average runtimes on success to be 43.5 and 5 seconds per example, respectively. The high success rate and reasonable runtimes demonstrate the practicality of our approach, even when scaling to long sentences, such as those found in the IMDB dataset. Speaking of which, our success rate on textual entailment is lower due to the large disparity in sentence length. On average, hypothesis sentences in the SNLI corpus are 9 words long, which is very short compared to IMDB (229 words, limited to 100 for experiments). With sentences that short, applying successful perturbations becomes much harder, however we were still able to achieve a success rate of 70%. For the same reason, we didn’t apply the Perturb baseline on the textual entailment task, as the Perturb baseline fails to achieve any success under the limits of the maximum allowed changes constraint.
[2, 2, 1, 1, 1, 1, 2, 2, 1, 2, 1, 1]
['Sample outputs produced by our attack are shown in Tables 1 and 2.', 'Additional outputs can be found in the supplementary material.', 'Table 3 shows the attack success rate and mean percentage of modified words on each task.', 'We compare to the Perturb baseline, which greedily applies the Perturb subroutine, to validate the use of population-based optimization.', 'As can be seen from our results, we are able to achieve high success rate with a limited number of modifications on both tasks.', 'In addition, the genetic algorithm significantly outperformed the Perturb baseline in both success rate and percentage of words modified, demonstrating the additional benefit yielded by using population-based optimization.', 'Testing using a single TitanX GPU, for sentiment analysis and textual entailment, we measured average runtimes on success to be 43.5 and 5 seconds per example, respectively.', 'The high success rate and reasonable runtimes demonstrate the practicality of our approach, even when scaling to long sentences, such as those found in the IMDB dataset.', 'Speaking of which, our success rate on textual entailment is lower due to the large disparity in sentence length.', 'On average, hypothesis sentences in the SNLI corpus are 9 words long, which is very short compared to IMDB (229 words, limited to 100 for experiments).', 'With sentences that short, applying successful perturbations becomes much harder, however we were still able to achieve a success rate of 70%.', 'For the same reason, we didn’t apply the Perturb baseline on the textual entailment task, as the Perturb baseline fails to achieve any success under the limits of the maximum allowed changes constraint.']
[['Genetic attack'], None, ['Sentiment Analysis', 'Textual Entailment', '% success', '% modified'], ['Perturb baseline'], ['Genetic attack', 'Sentiment Analysis', 'Textual Entailment', '% success'], ['Genetic attack', 'Perturb baseline', '% success', '% modified'], ['Sentiment Analysis', 'Textual Entailment'], ['Genetic attack'], ['Genetic attack', 'Textual Entailment', '% success'], None, ['Genetic attack', 'Textual Entailment', '% success'], ['Perturb baseline', 'Textual Entailment']]
1
D18-1321table_3
Comparison with previous state-of-the-art models.
1
[['Google IME'], ['CoCat'], ['OMWA'], ['basic P2C'], ['Simple C+ P2C'], ['Gated C+ P2C']]
2
[['DC', 'Top-1'], ['DC', 'Top-5'], ['DC', 'Top-10'], ['DC', 'KySS'], ['PD', 'Top-1'], ['PD', 'Top-5'], ['PD', 'Top-10'], ['PD', 'KySS']]
[['62.13', '72.17', '74.72', '0.6731', '70.93', '80.32', '82.23', '0.7535'], ['59.15', '71.85', '76.78', '0.7651', '61.42', '73.08', '78.33', '0.7933'], ['57.14', '72.32', '80.21', '0.7389', '64.42', '72.91', '77.93', '0.7115'], ['71.31', '89.12', '90.17', '0.8845', '70.5', '79.8', '80.1', '0.8301'], ['61.28', '71.88', '73.74', '0.7963', '60.87', '71.23', '75.33', ' 0.7883'], ['73.89', '90.14', '91.22', '0.8935', '70.98', '80.79', '81.37', '0.8407']]
column
['Top-1', 'Top-5', 'Top-10', 'KySS', 'Top-1', 'Top-5', 'Top-10', 'KySS']
['basic P2C', 'Simple C+ P2C', 'Gated C+ P2C']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DC || Top-1</th> <th>DC || Top-5</th> <th>DC || Top-10</th> <th>DC || KySS</th> <th>PD || Top-1</th> <th>PD || Top-5</th> <th>PD || Top-10</th> <th>PD || KySS</th> </tr> </thead> <tbody> <tr> <td>Google IME</td> <td>62.13</td> <td>72.17</td> <td>74.72</td> <td>0.6731</td> <td>70.93</td> <td>80.32</td> <td>82.23</td> <td>0.7535</td> </tr> <tr> <td>CoCat</td> <td>59.15</td> <td>71.85</td> <td>76.78</td> <td>0.7651</td> <td>61.42</td> <td>73.08</td> <td>78.33</td> <td>0.7933</td> </tr> <tr> <td>OMWA</td> <td>57.14</td> <td>72.32</td> <td>80.21</td> <td>0.7389</td> <td>64.42</td> <td>72.91</td> <td>77.93</td> <td>0.7115</td> </tr> <tr> <td>basic P2C</td> <td>71.31</td> <td>89.12</td> <td>90.17</td> <td>0.8845</td> <td>70.5</td> <td>79.8</td> <td>80.1</td> <td>0.8301</td> </tr> <tr> <td>Simple C+ P2C</td> <td>61.28</td> <td>71.88</td> <td>73.74</td> <td>0.7963</td> <td>60.87</td> <td>71.23</td> <td>75.33</td> <td>0.7883</td> </tr> <tr> <td>Gated C+ P2C</td> <td>73.89</td> <td>90.14</td> <td>91.22</td> <td>0.8935</td> <td>70.98</td> <td>80.79</td> <td>81.37</td> <td>0.8407</td> </tr> </tbody></table>
Table 3
table_3
D18-1321
5
emnlp2018
Effect of Gated Attention Mechanism. Table 3 shows the Effect of gated attention mechanism. We compared models with Gated C+ P2C and Simple C+ P2C. The MIU accuracy of the P2C model has over 10% improvement when changing the operate pattern of the extra information proves the effect of GA mechanism. The Gated C+ P2C achieves the best in DC corpus, suggesting that the gated-attention works extremely well for handling long and diverse context. Main Result. Our model is compared to other models in Table 3. So far, (Huang et al., 2015) and (Zhang et al., 2017) reported the state-of-theart results among statistical models. We list the top-5 accuracy contrast to all baselines with top10 results, and the comparison indicates the noticeable advancement of our P2C model. To our surprise, the top-5 result on PD of our best Gated C+ P2C system approaches the top-10 accuracy of Google IME. On DC corpus, our Gated C+ P2C model with the best setting achieves 90.14% accuracy, surpassing all the baselines. The comparison shows our gated-attention system outperforms all state-of-the-art baselines with better user experience.
[2, 1, 1, 1, 1, 2, 1, 2, 1, 1, 1, 1]
['Effect of Gated Attention Mechanism.', 'Table 3 shows the Effect of gated attention mechanism.', 'We compared models with Gated C+ P2C and Simple C+ P2C.', 'The MIU accuracy of the P2C model has over 10% improvement when changing the operate pattern of the extra information proves the effect of GA mechanism.', 'The Gated C+ P2C achieves the best in DC corpus, suggesting that the gated-attention works extremely well for handling long and diverse context.', 'Main Result.', 'Our model is compared to other models in Table 3.', 'So far, (Huang et al., 2015) and (Zhang et al., 2017) reported the state-of-theart results among statistical models.', 'We list the top-5 accuracy contrast to all baselines with top10 results, and the comparison indicates the noticeable advancement of our P2C model.', 'To our surprise, the top-5 result on PD of our best Gated C+ P2C system approaches the top-10 accuracy of Google IME.', 'On DC corpus, our Gated C+ P2C model with the best setting achieves 90.14% accuracy, surpassing all the baselines.', 'The comparison shows our gated-attention system outperforms all state-of-the-art baselines with better user experience.']
[None, None, ['basic P2C', 'Simple C+ P2C', 'Gated C+ P2C'], ['basic P2C'], ['Gated C+ P2C', 'DC'], None, ['basic P2C', 'Google IME', 'CoCat', 'OMWA'], ['CoCat', 'OMWA'], ['basic P2C', 'Google IME', 'CoCat', 'OMWA', 'Top-5', 'Top-10'], ['Gated C+ P2C', 'Google IME', 'Top-5', 'Top-10', 'PD'], ['Gated C+ P2C', 'DC', 'Top-5'], ['Gated C+ P2C']]
1
D18-1323table_1
LM perplexity results on PTB. ∆: difference in test perplexity of the tied models with respect to the non-tied model with the same number of hidden units.
6
[['Hid', '200', 'Emb', '200', 'Model', 'non-tied'], ['Hid', '200', 'Emb', '200', 'Model', 'tied'], ['Hid', '200', 'Emb', '200', 'Model', 'tied+L'], ['Hid', '400', 'Emb', '200', 'Model', 'non-tied'], ['Hid', '400', 'Emb', '200', 'Model', 'tied+L'], ['Hid', '400', 'Emb', '400', 'Model', 'non-tied'], ['Hid', '400', 'Emb', '400', 'Model', 'tied'], ['Hid', '400', 'Emb', '400', 'Model', 'tied+L'], ['Hid', '600', 'Emb', '400', 'Model', 'non-tied'], ['Hid', '600', 'Emb', '400', 'Model', 'tied+L'], ['Hid', '600', 'Emb', '600', 'Model', 'non-tied'], ['Hid', '600', 'Emb', '600', 'Model', 'tied'], ['Hid', '600', 'Emb', '600', 'Model', 'tied+L'], ['Inan2017 VD tied 650', '-', '-', '-', '-', '-'], ['Zaremba2014 1500', '-', '-', '-', '-', '-'], ['P&W2016 tied 1500', '-', '-', '-', '-', '-']]
1
[['Valid'], ['Test'], ['Δ']]
[['95.0', '91.1', ''], ['90.8', '86.6', '-4.5'], ['89.8', '85.8', '-5.3'], ['89.4', '85.3', ''], ['83.4', '80.3', '-5.0'], ['87.2', '83.5', ''], ['82.0', '78.2', '-5.3'], ['81.9', '78.0', '-5.5'], ['85.8', '82.4', ''], ['79.0', '76.0', '-6.4'], ['84.3', '81.3', ''], ['79.7', '76.1', '-5.2'], ['78.7', '75.5', '-5.8'], ['77.1', '73.9', ''], ['82.2', '78.4', ''], ['77.7', '74.3', '']]
column
['perplexity', 'perplexity', 'perplexity']
['tied+L']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Valid</th> <th>Test</th> <th>Δ</th> <th>Size</th> </tr> </thead> <tbody> <tr> <td>Hid || 200 || Emb || 200 || Model || non-tied</td> <td>95.0</td> <td>91.1</td> <td></td> <td>4.7M</td> </tr> <tr> <td>Hid || 200 || Emb || 200 || Model || tied</td> <td>90.8</td> <td>86.6</td> <td>-4.5</td> <td>2.7M</td> </tr> <tr> <td>Hid || 200 || Emb || 200 || Model || tied+L</td> <td>89.8</td> <td>85.8</td> <td>-5.3</td> <td>2.7M</td> </tr> <tr> <td>Hid || 400 || Emb || 200 || Model || non-tied</td> <td>89.4</td> <td>85.3</td> <td></td> <td>8.3M</td> </tr> <tr> <td>Hid || 400 || Emb || 200 || Model || tied+L</td> <td>83.4</td> <td>80.3</td> <td>-5.0</td> <td>4.3M</td> </tr> <tr> <td>Hid || 400 || Emb || 400 || Model || non-tied</td> <td>87.2</td> <td>83.5</td> <td></td> <td>10.6M</td> </tr> <tr> <td>Hid || 400 || Emb || 400 || Model || tied</td> <td>82.0</td> <td>78.2</td> <td>-5.3</td> <td>6.6M</td> </tr> <tr> <td>Hid || 400 || Emb || 400 || Model || tied+L</td> <td>81.9</td> <td>78.0</td> <td>-5.5</td> <td>6.7M</td> </tr> <tr> <td>Hid || 600 || Emb || 400 || Model || non-tied</td> <td>85.8</td> <td>82.4</td> <td></td> <td>15.3M</td> </tr> <tr> <td>Hid || 600 || Emb || 400 || Model || tied+L</td> <td>79.0</td> <td>76.0</td> <td>-6.4</td> <td>9.5M</td> </tr> <tr> <td>Hid || 600 || Emb || 600 || Model || non-tied</td> <td>84.3</td> <td>81.3</td> <td></td> <td>17.8M</td> </tr> <tr> <td>Hid || 600 || Emb || 600 || Model || tied</td> <td>79.7</td> <td>76.1</td> <td>-5.2</td> <td>11.8M</td> </tr> <tr> <td>Hid || 600 || Emb || 600 || Model || tied+L</td> <td>78.7</td> <td>75.5</td> <td>-5.8</td> <td>12.1M</td> </tr> <tr> <td>Inan2017 VD tied 650 || - || - || - || - || -</td> <td>77.1</td> <td>73.9</td> <td></td> <td>-</td> </tr> <tr> <td>Zaremba2014 1500 || - || - || - || - || -</td> <td>82.2</td> <td>78.4</td> <td></td> <td>66M</td> </tr> <tr> <td>P&amp;W2016 tied 1500 || - || - || - || - || -</td> <td>77.7</td> <td>74.3</td> <td></td> <td>51M</td> </tr> </tbody></table>
Table 1
table_1
D18-1323
4
emnlp2018
3.3 Language modelling results. We present the LM results for the standard nontied model, the tied model as in Inan et al. (2017) and Press and Wolf (2017), and our tied model with an additional linear transformation (tied+L) in Tables 1 (PTB) and 2 (Wiki). Table 1 confirms that tying generally brings gains with respect to not tying. This is also true for the cases when the hidden and embedding sizes are different (e.g. 400/200 and 600/400), where our tied+L model outperforms the non-tied model by 5 to 6.4 points having around 40% less parameters. Furthermore, our decoupled model slightly but consistently improves results with respect to standard tying, confirming our intuition that the coupling of the hidden state to the embedding representation is a limiting constraint. Smaller tied+L models perform well compared to larger tied models. In particular, the tied+L model with 600/400 units has perplexity of 76.0, compared to 76.1 of the tied 600/600 model, with 55% the number of parametres. Note that our results are comparable to previously reported perplexity values on PTB for similar models. Our best results of 75.5 test perplexity is only 1.2 points behind the large tied model with 1500 units reported in Press and Wolf (2017) and is only 1.6 points behind the medium tied model with 650 units and variational dropout (Gal and Ghahramani, 2016) reported in Inan et al. (2017).
[2, 1, 1, 1, 1, 1, 1, 1, 1]
['3.3 Language modelling results.', 'We present the LM results for the standard nontied model, the tied model as in Inan et al. (2017) and Press and Wolf (2017), and our tied model with an additional linear transformation (tied+L) in Tables 1 (PTB) and 2 (Wiki).', 'Table 1 confirms that tying generally brings gains with respect to not tying.', 'This is also true for the cases when the hidden and embedding sizes are different (e.g. 400/200 and 600/400), where our tied+L model outperforms the non-tied model by 5 to 6.4 points having around 40% less parameters.', 'Furthermore, our decoupled model slightly but consistently improves results with respect to standard tying, confirming our intuition that the coupling of the hidden state to the embedding representation is a limiting constraint.', 'Smaller tied+L models perform well compared to larger tied models.', 'In particular, the tied+L model with 600/400 units has perplexity of 76.0, compared to 76.1 of the tied 600/600 model, with 55% the number of parametres.', 'Note that our results are comparable to previously reported perplexity values on PTB for similar models.', 'Our best results of 75.5 test perplexity is only 1.2 points behind the large tied model with 1500 units reported in Press and Wolf (2017) and is only 1.6 points behind the medium tied model with 650 units and variational dropout (Gal and Ghahramani, 2016) reported in Inan et al. (2017).']
[None, ['tied+L', 'Inan2017 VD tied 650', 'P&W2016 tied 1500'], None, ['tied+L', 'non-tied', 'Hid', 'Emb'], ['tied+L', 'tied'], ['tied', 'tied+L', 'Hid', 'Emb'], ['tied', 'tied+L', 'Hid', '600', 'Emb', '400', 'Test'], ['Inan2017 VD tied 650', 'Zaremba2014 1500', 'P&W2016 tied 1500'], ['non-tied', 'Hid', '600', 'Emb', 'Test', 'Inan2017 VD tied 650', 'P&W2016 tied 1500']]
1
D18-1325table_3
Performance for variable context sizes k with the HAN encoder + HAN decoder.
2
[['k', '1'], ['k', '3'], ['k', '5'], ['k', '7']]
2
[['TED Talks', 'Zh-En'], ['TED Talks', 'Es-En'], ['Subtitles', 'Zh-En'], ['Subtitles', 'Es-En'], ['News', 'Es-En']]
[['17.70', '37.20', '29.35', '36.20', '22.46'], ['17.79', '37.24', '29.67', '36.23', '22.76'], ['17.49', '37.11', '29.69', '36.22', '22.54'], ['17.00', '37.22', '29.64', '36.21', '22.64']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['k']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>TED Talks || Zh-En</th> <th>TED Talks || Es-En</th> <th>Subtitles || Zh-En</th> <th>Subtitles || Es-En</th> <th>News || Es-En</th> </tr> </thead> <tbody> <tr> <td>k || 1</td> <td>17.70</td> <td>37.20</td> <td>29.35</td> <td>36.20</td> <td>22.46</td> </tr> <tr> <td>k || 3</td> <td>17.79</td> <td>37.24</td> <td>29.67</td> <td>36.23</td> <td>22.76</td> </tr> <tr> <td>k || 5</td> <td>17.49</td> <td>37.11</td> <td>29.69</td> <td>36.22</td> <td>22.54</td> </tr> <tr> <td>k || 7</td> <td>17.00</td> <td>37.22</td> <td>29.64</td> <td>36.21</td> <td>22.64</td> </tr> </tbody></table>
Table 3
table_3
D18-1325
4
emnlp2018
Table 3 shows the performance of our best HAN model with a varying number k of previous sentences in the test-set. We can see that the best performance for TED talks and news is achieved with 3, while for subtitles it is similar between 3 and 7.
[1, 1]
['Table 3 shows the performance of our best HAN model with a varying number k of previous sentences in the test-set.', 'We can see that the best performance for TED talks and news is achieved with 3, while for subtitles it is similar between 3 and 7.']
[['k'], ['k', '3', '7', 'TED Talks', 'News', 'Subtitles']]
1
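Each record above flattens a table into parallel row_headers, column_headers and contents lists alongside the rendered table_html_clean. A minimal sketch of how the grid can be rebuilt, using the D18-1325 table_3 record as input and assuming contents[i][j] is the cell for row_headers[i] and column_headers[j] (the same ' || '-joined labels used in the HTML); pandas is an assumption of the sketch, not part of the records themselves:

```python
import pandas as pd

# Flattened fields copied from the D18-1325 table_3 record above.
row_headers = [['k', '1'], ['k', '3'], ['k', '5'], ['k', '7']]
column_headers = [['TED Talks', 'Zh-En'], ['TED Talks', 'Es-En'],
                  ['Subtitles', 'Zh-En'], ['Subtitles', 'Es-En'],
                  ['News', 'Es-En']]
contents = [['17.70', '37.20', '29.35', '36.20', '22.46'],
            ['17.79', '37.24', '29.67', '36.23', '22.76'],
            ['17.49', '37.11', '29.69', '36.22', '22.54'],
            ['17.00', '37.22', '29.64', '36.21', '22.64']]

# Assumed alignment: contents[i][j] belongs to row_headers[i] and column_headers[j].
df = pd.DataFrame(
    [[float(v) for v in row] for row in contents],
    index=[' || '.join(r) for r in row_headers],
    columns=[' || '.join(c) for c in column_headers],
)
print(df)  # row 'k || 3' shows 17.79 under 'TED Talks || Zh-En', matching the HTML
```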
D18-1326table_1
Translation performance of our methods on Zh→En/Ja and En→De/Fr tasks. Indiv means the translation model of an individual pair. O2M is our baseline system. ①, ② and ③ denote our proposed three strategies of special label initialization, language-dependent positional embedding and the new parameter-sharing mechanism, respectively. ② (Dyn) and ② (Fixed) represent the two ways of the language-dependent positional embedding method. For the shared and language-dependent method, we set one-half of the hidden units as shared units, and for the other half, we use a quarter of the hidden units to denote each of the two output languages.
2
[['Methods', 'Indiv'], ['Methods', 'O2M'], ['Methods', 'O2M + ①'], ['Methods', 'O2M + ① + ② (Dyn)'], ['Methods', 'O2M + ① + ② (Fixed)'], ['Methods', 'O2M + ① + ③'], ['Methods', 'O2M + ① + ② (Dyn)+ 3']]
2
[['Zh→En', 'MT03'], ['Zh→En', 'MT04'], ['Zh→En', 'MT05'], ['Zh→En', 'MT06'], ['Zh→En', 'Ave'], ['Zh→Ja', 'test'], ['En→De', 'test'], ['En→Fr', 'test']]
[['43.59', '43.95', '45.34', '44.05', '44.23', '40.71', '27.84', '41.50'], ['43.20', '43.55', '44.68', '43.93', '43.84', '42.09', '26.42', '41.32'], ['43.91', '44.01', '45.12', '44.14', '44.30', '42.54', '26.78', '41.56'], ['44.24', '44.45', '45.43', '44.51', '44.66', '42.77', '26.98', '41.78'], ['44.13', '44.57', '45.22', '44.68', '44.65', '42.70', '26.90', '41.75'], ['44.78', '45.23', '45.78', '45.22', '45.25', '42.97', '27.11', '41.98'], ['44.85', '45.51', '45.91', '45.38', '45.41', '43.03', '27.23', '41.92']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['O2M + ①', 'O2M + ① + ② (Dyn)', 'O2M + ① + ② (Fixed)', 'O2M + ① + ③', 'O2M + ① + ② (Dyn)+ 3']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Zh→En || MT03</th> <th>Zh→En || MT04</th> <th>Zh→En || MT05</th> <th>Zh→En || MT06</th> <th>Zh→En || Ave</th> <th>Zh→Ja || test</th> <th>En→De || test</th> <th>En→Fr || test</th> </tr> </thead> <tbody> <tr> <td>Methods || Indiv</td> <td>43.59</td> <td>43.95</td> <td>45.34</td> <td>44.05</td> <td>44.23</td> <td>40.71</td> <td>27.84</td> <td>41.50</td> </tr> <tr> <td>Methods || O2M</td> <td>43.20</td> <td>43.55</td> <td>44.68</td> <td>43.93</td> <td>43.84</td> <td>42.09</td> <td>26.42</td> <td>41.32</td> </tr> <tr> <td>Methods || O2M + ①</td> <td>43.91</td> <td>44.01</td> <td>45.12</td> <td>44.14</td> <td>44.30</td> <td>42.54</td> <td>26.78</td> <td>41.56</td> </tr> <tr> <td>Methods || O2M + ① + ② (Dyn)</td> <td>44.24</td> <td>44.45</td> <td>45.43</td> <td>44.51</td> <td>44.66</td> <td>42.77</td> <td>26.98</td> <td>41.78</td> </tr> <tr> <td>Methods || O2M + ① + ② (Fixed)</td> <td>44.13</td> <td>44.57</td> <td>45.22</td> <td>44.68</td> <td>44.65</td> <td>42.70</td> <td>26.90</td> <td>41.75</td> </tr> <tr> <td>Methods || O2M + ① + ③</td> <td>44.78</td> <td>45.23</td> <td>45.78</td> <td>45.22</td> <td>45.25</td> <td>42.97</td> <td>27.11</td> <td>41.98</td> </tr> <tr> <td>Methods || O2M + ① + ② (Dyn)+ 3</td> <td>44.85</td> <td>45.51</td> <td>45.91</td> <td>45.38</td> <td>45.41</td> <td>43.03</td> <td>27.23</td> <td>41.92</td> </tr> </tbody></table>
Table 1
table_1
D18-1326
4
emnlp2018
5.1 Our Strategies vs. Baseline. Table 1 reports the main translation results of Zh→En/Ja and En→De/Fr translation tasks. Our ultimate goal is to make the universal one-to-many framework as good as or better than the individually trained systems. We conduct universal one-to-many translation using the Johnson et al. (2017) method on the Transformer framework as our baseline system (briefly, the O2M method). From the first two lines, we can see that the O2M method cannot perform on par with the individually trained systems in most cases. We mentioned before that our goal is to improve the universal one-to-many multilingual translation framework while maintaining the parameter sharing property. We can observe from the table that all our proposed strategies (last part in Table 1) improve the translation performance compared to the baseline (O2M). Specifically, the combined use of the three strategies performs best and it can achieve improvements of up to 1.96 BLEU points (45.51 vs. 43.55 on Zh→En MT04). As for language-dependent positional embedding, we find that both fixed and dynamic styles perform similarly. Our ultimate goal is to make the universal one-to-many framework as good as or better than the individually trained systems. Table 1 demonstrates some encouraging results. It is shown in the table that the universal one-to-many architecture enhanced with our strategies can outperform the individually trained models on three out of four language translations (Zh→En, Zh→Ja, En→Fr). The results verify the effectiveness of our proposed methods.
[2, 1, 2, 2, 1, 2, 1, 1, 1, 2, 1, 1, 2]
['5.1 Our Strategies vs. Baseline.', 'Table 1 reports the main translation results of Zh→En/Ja and En→De/Fr translation tasks.', 'Our ultimate goal is to make the universal one-to-many framework as good as or better than the individually trained systems.', 'We conduct universal one-to-many translation using the Johnson et al. (2017) method on the Transformer framework as our baseline system (briefly, the O2M method).', 'From the first two lines, we can see that the O2M method cannot perform on par with the individually trained systems in most cases.', 'We mentioned before that our goal is to improve the universal one-to-many multilingual translation framework while maintaining the parameter sharing property.', 'We can observe from the table that all our proposed strategies (last part in Table 1) improve the translation performance compared to the baseline (O2M).', 'Specifically, the combined use of the three strategies performs best and it can achieve improvements of up to 1.96 BLEU points (45.51 vs. 43.55 on Zh→En MT04).', 'As for language-dependent positional embedding, we find that both fixed and dynamic styles perform similarly.', 'Our ultimate goal is to make the universal one-to-many framework as good as or better than the individually trained systems.', 'Table 1 demonstrates some encouraging results.', 'It is shown in the table that the universal one-to-many architecture enhanced with our strategies can outperform the individually trained models on three out of four language translations (Zh→En, Zh→Ja, En→Fr).', 'The results verify the effectiveness of our proposed methods.']
[None, ['Zh→En', 'Zh→Ja', 'En→De', 'En→Fr'], ['Indiv', 'O2M'], ['O2M'], ['Indiv', 'O2M'], ['O2M'], ['O2M', 'O2M + ①', 'O2M + ① + ② (Dyn)', 'O2M + ① + ② (Fixed)', 'O2M + ① + ③', 'O2M + ① + ② (Dyn)+ 3'], ['O2M + ① + ② (Dyn)+ 3', 'O2M', 'Zh→En', 'MT04'], ['O2M + ① + ② (Dyn)', 'O2M + ① + ② (Fixed)'], ['Indiv', 'O2M'], None, ['Indiv', 'O2M + ①', 'O2M + ① + ② (Dyn)', 'O2M + ① + ② (Fixed)', 'O2M + ① + ③', 'O2M + ① + ② (Dyn)+ 3', 'Zh→En', 'Zh→Ja', 'En→Fr'], ['O2M + ①', 'O2M + ① + ② (Dyn)', 'O2M + ① + ② (Fixed)', 'O2M + ① + ③', 'O2M + ① + ② (Dyn)+ 3']]
1
D18-1330table_1
Comparison between RCSLS, Least Square Error, Procrustes and unsupervised approaches in the setting of Conneau et al. (2017). All the methods use the CSLS criterion for retrieval. “Refine” is the refinement step of Conneau et al. (2017). Adversarial, ICP and Wasserstein Proc. are unsupervised (Conneau et al., 2017; Hoshen and Wolf, 2018; Grave et al., 2018).
2
[['Method', 'Adversarial + refine'], ['Method', 'ICP + refine'], ['Method', 'Wass. Proc. + refine'], ['Method', 'Least Square Error'], ['Method', 'Procrustes'], ['Method', 'Procrustes + refine'], ['Method', 'RCSLS + spectral'], ['Method', 'RCSLS']]
1
[['en-es'], ['es-en'], ['en-fr'], ['fr-en'], ['en-de'], ['de-en'], ['en-ru'], ['ru-en'], ['en-zh'], ['zh-en'], ['avg.']]
[['81.7', '83.3', '82.3', '82.1', '74.0', '72.2', '44.0', '59.1', '32.5', '31.4', '64.3'], ['82.2', '83.8', '82.5', '82.5', '74.8', '73.1', '46.3', '61.6', '-', '-', '-'], ['82.8', '84.1', '82.6', '82.9', '75.4', '73.3', '43.7', '59.1', '-', '-', '-'], ['78.9', '80.7', '79.3', '80.7', '71.5', '70.1', '47.2', '60.2', '42.3', '4.0', '61.5'], ['81.4', '82.9', '81.1', '82.4', '73.5', '72.4', '51.7', '63.7', '42.7', '36.7', '66.8'], ['82.4', '83.9', '82.3', '83.2', '75.3', '73.2', '50.1', '63.5', '40.3', '35.5', '66.9'], ['83.5', '85.7', '82.3', '84.1', '78.2', '75.8', '56.1', '66.5', '44.9', '45.7', '70.2'], ['84.1', '86.3', '83.3', '84.1', '79.1', '76.3', '57.9', '67.2', '45.9', '46.4', '71.0']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['RCSLS', 'RCSLS + spectral']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>en-es</th> <th>es-en</th> <th>en-fr</th> <th>fr-en</th> <th>en-de</th> <th>de-en</th> <th>en-ru</th> <th>ru-en</th> <th>en-zh</th> <th>zh-en</th> <th>avg.</th> </tr> </thead> <tbody> <tr> <td>Method || Adversarial + refine</td> <td>81.7</td> <td>83.3</td> <td>82.3</td> <td>82.1</td> <td>74.0</td> <td>72.2</td> <td>44.0</td> <td>59.1</td> <td>32.5</td> <td>31.4</td> <td>64.3</td> </tr> <tr> <td>Method || ICP + refine</td> <td>82.2</td> <td>83.8</td> <td>82.5</td> <td>82.5</td> <td>74.8</td> <td>73.1</td> <td>46.3</td> <td>61.6</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || Wass. Proc. + refine</td> <td>82.8</td> <td>84.1</td> <td>82.6</td> <td>82.9</td> <td>75.4</td> <td>73.3</td> <td>43.7</td> <td>59.1</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || Least Square Error</td> <td>78.9</td> <td>80.7</td> <td>79.3</td> <td>80.7</td> <td>71.5</td> <td>70.1</td> <td>47.2</td> <td>60.2</td> <td>42.3</td> <td>4.0</td> <td>61.5</td> </tr> <tr> <td>Method || Procrustes</td> <td>81.4</td> <td>82.9</td> <td>81.1</td> <td>82.4</td> <td>73.5</td> <td>72.4</td> <td>51.7</td> <td>63.7</td> <td>42.7</td> <td>36.7</td> <td>66.8</td> </tr> <tr> <td>Method || Procrustes + refine</td> <td>82.4</td> <td>83.9</td> <td>82.3</td> <td>83.2</td> <td>75.3</td> <td>73.2</td> <td>50.1</td> <td>63.5</td> <td>40.3</td> <td>35.5</td> <td>66.9</td> </tr> <tr> <td>Method || RCSLS + spectral</td> <td>83.5</td> <td>85.7</td> <td>82.3</td> <td>84.1</td> <td>78.2</td> <td>75.8</td> <td>56.1</td> <td>66.5</td> <td>44.9</td> <td>45.7</td> <td>70.2</td> </tr> <tr> <td>Method || RCSLS</td> <td>84.1</td> <td>86.3</td> <td>83.3</td> <td>84.1</td> <td>79.1</td> <td>76.3</td> <td>57.9</td> <td>67.2</td> <td>45.9</td> <td>46.4</td> <td>71.0</td> </tr> </tbody></table>
Table 1
table_1
D18-1330
4
emnlp2018
4.2 The MUSE benchmark. Table 1 reports the comparison of RCSLS with standard supervised and unsupervised approaches on 5 language pairs (in both directions) of the MUSE benchmark (Conneau et al., 2017). Every approach uses the Wikipedia fastText vectors and supervision comes in the form of a lexicon composed of 5k words and their translations. Regardless of the relaxation, RCSLS outperforms the state of the art by, on average, 3 to 4% in accuracy. This shows the importance of using the same criterion during training and inference. Note that the refinement step (“refine”) also uses CSLS to finetune the alignments but leads to a marginal gain for supervised methods. Interestingly, RCSLS achieves a better performance without constraints (+0.8%) for all pairs. Contrary to observations made in previous works, this result suggests that preserving the distance between word vectors is not essential for word translation. Indeed, previous works used an l2 loss, where orthogonal constraints lead to an improvement of +5.3% (Procrustes versus Least Square Error). This suggests that a linear mapping W with no constraints works well only if it is learned with a proper criterion.
[1, 2, 1, 2, 2, 1, 2, 1, 2]
['4.2 The MUSE benchmark. Table 1 reports the comparison of RCSLS with standard supervised and unsupervised approaches on 5 language pairs (in both directions) of the MUSE benchmark (Conneau et al., 2017).', 'Every approach uses the Wikipedia fastText vectors and supervision comes in the form of a lexicon composed of 5k words and their translations.', 'Regardless of the relaxation, RCSLS outperforms the state of the art by, on average, 3 to 4% in accuracy.', 'This shows the importance of using the same criterion during training and inference.', 'Note that the refinement step (“refine”) also uses CSLS to finetune the alignments but leads to a marginal gain for supervised methods.', 'Interestingly, RCSLS achieves a better performance without constraints (+0.8%) for all pairs.', 'Contrary to observations made in previous works, this result suggests that preserving the distance between word vectors is not essential for word translation.', 'Indeed, previous works used an l2 loss, where orthogonal constraints lead to an improvement of +5.3% (Procrustes versus Least Square Error).', 'This suggests that a linear mapping W with no constraints works well only if it is learned with a proper criterion.']
[None, None, ['RCSLS', 'Adversarial + refine'], ['RCSLS'], None, ['RCSLS', 'RCSLS + spectral', 'avg.'], ['RCSLS + spectral', 'RCSLS'], ['Least Square Error', 'Procrustes', 'avg.'], None]
1
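The margins quoted in this record's description can be checked directly against the 'avg.' column of its contents; a small arithmetic sketch, with the values copied from the record and nothing beyond plain subtraction assumed:

```python
# 'avg.' column values copied from the D18-1330 table_1 record above.
avg = {
    'Adversarial + refine': 64.3,
    'Least Square Error': 61.5,
    'Procrustes': 66.8,
    'Procrustes + refine': 66.9,
    'RCSLS + spectral': 70.2,
    'RCSLS': 71.0,
}

print(round(avg['RCSLS'] - avg['RCSLS + spectral'], 1))         # 0.8: gain from dropping the spectral constraint
print(round(avg['Procrustes'] - avg['Least Square Error'], 1))  # 5.3: orthogonal constraint under an l2 loss
print(round(avg['RCSLS'] - avg['Procrustes + refine'], 1))      # 4.1: margin over the strongest Procrustes variant
```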
D18-1330table_3
Accuracy on English and Italian with the setting of Dinu et al. (2014). “Adversarial” is an unsupervised technique. The adversarial and Procrustes results are from Conneau et al. (2017). We use a CSLS criterion for retrieval.
1
[['Adversarial + refine + CSLS'], ['Mikolov et al. (2013b)'], ['Dinu et al. (2014)'], ['Artetxe et al. (2016)'], ['Smith et al. (2017)'], ['Procrustes + CSLS'], ['RCSLS']]
1
[['en-it'], ['it-en']]
[['45.1', '38.3'], ['33.8', '24.9'], ['38.5', '24.6'], ['39.7', '33.8'], ['43.1', '38.0'], ['44.9', '38.5'], ['45.5', '38.0']]
column
['accuracy', 'accuracy']
['RCSLS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>en-it</th> <th>it-en</th> </tr> </thead> <tbody> <tr> <td>Adversarial + refine + CSLS</td> <td>45.1</td> <td>38.3</td> </tr> <tr> <td>Mikolov et al. (2013b)</td> <td>33.8</td> <td>24.9</td> </tr> <tr> <td>Dinu et al. (2014)</td> <td>38.5</td> <td>24.6</td> </tr> <tr> <td>Artetxe et al. (2016)</td> <td>39.7</td> <td>33.8</td> </tr> <tr> <td>Smith et al. (2017)</td> <td>43.1</td> <td>38.0</td> </tr> <tr> <td>Procrustes + CSLS</td> <td>44.9</td> <td>38.5</td> </tr> <tr> <td>RCSLS</td> <td>45.5</td> <td>38.0</td> </tr> </tbody></table>
Table 3
table_3
D18-1330
4
emnlp2018
4.3 The WaCky dataset. Dinu et al. (2014) introduce a setting where word vectors are learned on the WaCky datasets (Baroni et al., 2009) and aligned with a noisy bilingual lexicon. We select the number of epochs within {1, 2, 5, 10} on a validation set. Table 3 shows that RCSLS is on par with the state of the art. RCSLS is thus robust to relatively poor word vectors and noisy lexicons.
[2, 2, 2, 1, 2]
['4.3 The WaCky dataset.', 'Dinu et al. (2014) introduce a setting where word vectors are learned on the WaCky datasets (Baroni et al., 2009) and aligned with a noisy bilingual lexicon.', 'We select the number of epochs within {1, 2, 5, 10} on a validation set.', 'Table 3 shows that RCSLS is on par with the state of the art.', 'RCSLS is thus robust to relatively poor word vectors and noisy lexicons.']
[None, None, None, ['Adversarial + refine + CSLS', 'RCSLS', 'Procrustes + CSLS'], ['RCSLS']]
1
D18-1341table_1
TER and BLEU scores of our model (MT+AG+LM) vs. the rest on various data conditions for the EN-DE post-editing task. bold: Best results within a data condition; *: Best results across data conditions
3
[['-', 'Model', 'Original MT'], ['12K', 'Model', 'TGT → PE'], ['12K', 'Model', 'SRC+TGT → PE'], ['12K', 'Model', 'MT+AG'], ['12K', 'Model', 'MT+AG+LM'], ['500K+12K', 'Model', 'TGT → PE'], ['500K+12K', 'Model', 'SRC+TGT → PE'], ['500K+12K', 'Model', 'MT+AG'], ['500K+12K', 'Model', 'MT+AG+LM'], ['23K', 'Model', 'TGT → PE'], ['23K', 'Model', 'SRC+TGT → PE'], ['23K', 'Model', 'MT+AG'], ['23K', 'Model', 'MT+AG+LM'], ['500K+23K', 'Model', 'TGT → PE'], ['500K+23K', 'Model', 'SRC+TGT → PE'], ['500K+23K', 'Model', 'MT+AG'], ['500K+23K', 'Model', 'MT+AG+LM']]
2
[['dev', 'TER'], ['dev', 'BLEU'], ['test2016', 'TER'], ['test2016', 'BLEU'], ['test2017', 'TER'], ['test2017', 'BLEU']]
[['24.81', '62.92', '24.76', '62.11', '24.48', '62.49'], ['63.76', '21.32', '60.96', '22.11', '65.13', '18.13'], ['51.41', '34.04', '48.27', '35.24', '50.98', '31.52'], ['23.74', '65.95', '23.53', '65.22', '23.77', '64.34'], ['23.36†', '66.24', '23.24†', '65.53†', '23.45†', '64.65†'], ['50.91', '30.88', '48.62', '32.55', '52.07', '27.98'], ['30.97', '53.97', '30.20', '53.92', '32.82', '50.30'], ['22.82', '66.51', '22.87', '65.67', '23.58', '64.35'], ['22.67', '67.17†', '22.53†', '66.30†', '23.03†', '65.31†'], ['57.02', '27.87', '55.52', '27.8', '60.06', '22.78'], ['38.06', '47.42', '36.61', '47.93', '39.86', '43.52'], ['23.10', '66.60', '22.82', '66.15', '23.14', '65.19'], ['22.61†', '67.19†', '22.42†', '66.53†', '22.84†', '65.52*†'], ['48.89', '34.29', '27.12', '34.75', '51.13', '25.59'], ['28.61', '57.47', '27.79', '57.61', '30.34', '53.26'], ['22.38', '67.34', '22.14', '66.53', '22.71', '65.34'], ['21.99*†', '67.50*', '22.07*', '66.67*', '22.58*', '65.50']]
column
['TER', 'BLEU', 'TER', 'BLEU', 'TER', 'BLEU']
['MT+AG+LM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>dev || TER</th> <th>dev || BLEU</th> <th>test2016 || TER</th> <th>test2016 || BLEU</th> <th>test2017 || TER</th> <th>test2017 || BLEU</th> </tr> </thead> <tbody> <tr> <td>- || Model || Original MT</td> <td>24.81</td> <td>62.92</td> <td>24.76</td> <td>62.11</td> <td>24.48</td> <td>62.49</td> </tr> <tr> <td>12K || Model || TGT → PE</td> <td>63.76</td> <td>21.32</td> <td>60.96</td> <td>22.11</td> <td>65.13</td> <td>18.13</td> </tr> <tr> <td>12K || Model || SRC+TGT → PE</td> <td>51.41</td> <td>34.04</td> <td>48.27</td> <td>35.24</td> <td>50.98</td> <td>31.52</td> </tr> <tr> <td>12K || Model || MT+AG</td> <td>23.74</td> <td>65.95</td> <td>23.53</td> <td>65.22</td> <td>23.77</td> <td>64.34</td> </tr> <tr> <td>12K || Model || MT+AG+LM</td> <td>23.36†</td> <td>66.24</td> <td>23.24†</td> <td>65.53†</td> <td>23.45†</td> <td>64.65†</td> </tr> <tr> <td>500K+12K || Model || TGT → PE</td> <td>50.91</td> <td>30.88</td> <td>48.62</td> <td>32.55</td> <td>52.07</td> <td>27.98</td> </tr> <tr> <td>500K+12K || Model || SRC+TGT → PE</td> <td>30.97</td> <td>53.97</td> <td>30.20</td> <td>53.92</td> <td>32.82</td> <td>50.30</td> </tr> <tr> <td>500K+12K || Model || MT+AG</td> <td>22.82</td> <td>66.51</td> <td>22.87</td> <td>65.67</td> <td>23.58</td> <td>64.35</td> </tr> <tr> <td>500K+12K || Model || MT+AG+LM</td> <td>22.67</td> <td>67.17†</td> <td>22.53†</td> <td>66.30†</td> <td>23.03†</td> <td>65.31†</td> </tr> <tr> <td>23K || Model || TGT → PE</td> <td>57.02</td> <td>27.87</td> <td>55.52</td> <td>27.8</td> <td>60.06</td> <td>22.78</td> </tr> <tr> <td>23K || Model || SRC+TGT → PE</td> <td>38.06</td> <td>47.42</td> <td>36.61</td> <td>47.93</td> <td>39.86</td> <td>43.52</td> </tr> <tr> <td>23K || Model || MT+AG</td> <td>23.10</td> <td>66.60</td> <td>22.82</td> <td>66.15</td> <td>23.14</td> <td>65.19</td> </tr> <tr> <td>23K || Model || MT+AG+LM</td> <td>22.61†</td> <td>67.19†</td> <td>22.42†</td> <td>66.53†</td> <td>22.84†</td> <td>65.52*†</td> </tr> <tr> <td>500K+23K || Model || TGT → PE</td> <td>48.89</td> <td>34.29</td> <td>27.12</td> <td>34.75</td> <td>51.13</td> <td>25.59</td> </tr> <tr> <td>500K+23K || Model || SRC+TGT → PE</td> <td>28.61</td> <td>57.47</td> <td>27.79</td> <td>57.61</td> <td>30.34</td> <td>53.26</td> </tr> <tr> <td>500K+23K || Model || MT+AG</td> <td>22.38</td> <td>67.34</td> <td>22.14</td> <td>66.53</td> <td>22.71</td> <td>65.34</td> </tr> <tr> <td>500K+23K || Model || MT+AG+LM</td> <td>21.99*†</td> <td>67.50*</td> <td>22.07*</td> <td>66.67*</td> <td>22.58*</td> <td>65.50</td> </tr> </tbody></table>
Table 1
table_1
D18-1341
4
emnlp2018
4.1 Results. Table 1 shows the results on different training datasets to compare our model against the baselines. Original MT is the strong standard do-nothing baseline, copying the MT translation as the PE output. In all settings, our MT+AG+LM models outperform the MT+AG and monolingual/multi-source SEQ2SEQ models. Specifically, our model outperforms MT+AG in the 500K+12K training condition by almost 1 BLEU score on test2017. As expected, the models trained on 23K data perform better than those trained on 12K; further gains are obtained by adding 500K synthetic data. Interestingly, training MT+AG and MT+AG+LM models on 23K data leads to better TER/BLEU than those trained on 500K+12K. This implies the importance of in-domain training data, as the synthetic corpus is created using the general-domain Common-Crawl corpus.
[2, 1, 2, 1, 1, 1, 1, 2]
['4.1 Results.', 'Table 1 shows the results on different training datasets to compare our model against the baselines.', 'Original MT is the strong standard do-nothing baseline, copying the MT translation as the PE output.', 'In all settings, our MT+AG+LM models outperform the MT+AG and monolingual/multi-source SEQ2SEQ models.', 'Specifically, our model outperforms MT+AG in the 500K+12K training condition by almost 1 BLEU score on test2017.', 'As expected, the models trained on 23K data perform better than those trained on 12K; further gains are obtained by adding 500K synthetic data.', 'Interestingly, training MT+AG and MT+AG+LM models on 23K data leads to better TER/BLEU than those trained on 500K+12K.', 'This implies the importance of in-domain training data, as the synthetic corpus is created using the general-domain Common-Crawl corpus.']
[None, ['Original MT', 'TGT → PE', 'SRC+TGT → PE', 'MT+AG', 'MT+AG+LM', 'dev', 'test2016', 'test2017'], ['Original MT'], ['MT+AG+LM', 'TGT → PE', 'SRC+TGT → PE', 'MT+AG'], ['MT+AG+LM', 'MT+AG', '500K+12K', 'test2017', 'BLEU'], ['12K', '500K+12K', '23K', '500K+23K'], ['MT+AG', 'MT+AG+LM', '23K', '500K+12K', 'TER', 'BLEU'], None]
1
D18-1345table_2
Token level identification F1 scores. Averages are computed over all languages other than English. Two baselines are also compared here: Capitalization tags a token in test as entity if it is capitalized; and Exact Match keeps track of entities seen in training, tagging tokens in Test that exactly match some entity in Train. The bottom section shows state-of-the-art models which use complex features for names, including contextual information. Languages in order are: English, Amharic, Arabic, Bengali, Farsi, Hindi, Somali, and Tagalog. The rightmost column is the average of all columns excluding English.
2
[['Model', 'Exact Match'], ['Model', 'Capitalization'], ['Model', 'SRILM'], ['Model', 'Skip-gram'], ['Model', 'CBOW'], ['Model', 'Log-Bilinear'], ['Model', 'CogCompNER (ceiling)'], ['Model', 'Lample et al. (2016) (ceiling)']]
1
[['eng'], ['amh'], ['ara'], ['ben'], ['fas'], ['hin'], ['som'], ['tgl'], ['avg']]
[['43.4', '54.4', '29.3', '47.7', '30.5', '30.9', '46.0', '23.7', '37.5'], ['79.5', '-', '-', '-', '-', '-', '69.5', '77.6', '-'], ['92.8', '69.9', '54.7', '79.4', '60.8', '63.8', '84.1', '80.5', '70.5'], ['76.0', '53.0', '29.7', '41.4', '30.8', '29.0', '51.1', '61.5', '42.4'], ['73.7', '50.0', '28.1', '40.6', '32.6', '26.5', '56.4', '62.5', '42.4'], ['82.8', '64.5', '46.1', '70.8', '50.4', '54.8', '78.1', '74.9', '62.8'], ['96.5', '73.8', '64.9', '80.6', '64.1', '75.9', '89.4', '88.6', '76.8'], ['96.4', '84.4', '69.8', '87.6', '76.4', '86.3', '90.9', '91.2', '83.8']]
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['SRILM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>eng</th> <th>amh</th> <th>ara</th> <th>ben</th> <th>fas</th> <th>hin</th> <th>som</th> <th>tgl</th> <th>avg</th> </tr> </thead> <tbody> <tr> <td>Model || Exact Match</td> <td>43.4</td> <td>54.4</td> <td>29.3</td> <td>47.7</td> <td>30.5</td> <td>30.9</td> <td>46.0</td> <td>23.7</td> <td>37.5</td> </tr> <tr> <td>Model || Capitalization</td> <td>79.5</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>69.5</td> <td>77.6</td> <td>-</td> </tr> <tr> <td>Model || SRILM</td> <td>92.8</td> <td>69.9</td> <td>54.7</td> <td>79.4</td> <td>60.8</td> <td>63.8</td> <td>84.1</td> <td>80.5</td> <td>70.5</td> </tr> <tr> <td>Model || Skip-gram</td> <td>76.0</td> <td>53.0</td> <td>29.7</td> <td>41.4</td> <td>30.8</td> <td>29.0</td> <td>51.1</td> <td>61.5</td> <td>42.4</td> </tr> <tr> <td>Model || CBOW</td> <td>73.7</td> <td>50.0</td> <td>28.1</td> <td>40.6</td> <td>32.6</td> <td>26.5</td> <td>56.4</td> <td>62.5</td> <td>42.4</td> </tr> <tr> <td>Model || Log-Bilinear</td> <td>82.8</td> <td>64.5</td> <td>46.1</td> <td>70.8</td> <td>50.4</td> <td>54.8</td> <td>78.1</td> <td>74.9</td> <td>62.8</td> </tr> <tr> <td>Model || CogCompNER (ceiling)</td> <td>96.5</td> <td>73.8</td> <td>64.9</td> <td>80.6</td> <td>64.1</td> <td>75.9</td> <td>89.4</td> <td>88.6</td> <td>76.8</td> </tr> <tr> <td>Model || Lample et al. (2016) (ceiling)</td> <td>96.4</td> <td>84.4</td> <td>69.8</td> <td>87.6</td> <td>76.4</td> <td>86.3</td> <td>90.9</td> <td>91.2</td> <td>83.8</td> </tr> </tbody></table>
Table 2
table_2
D18-1345
3
emnlp2018
We compare the CLM’s Entity Identification against two state-of-the-art NER systems: CogCompNER (Khashabi et al., 2018) and LSTM-CRF (Lample et al., 2016). We train the NER systems as usual, but at test time we convert all predictions into binary token-level annotations to get the final score. As Table 2 shows, the result of the N-gram CLM, which yields the highest performance, is remarkably close to the result of state-of-the-art NER systems (especially for English) given the simplicity of the model.
[1, 2, 1]
['We compare the CLM’s Entity Identification against two state-of-the-art NER systems: CogCompNER (Khashabi et al., 2018) and LSTM-CRF (Lample et al., 2016).', 'We train the NER systems as usual, but at test time we convert all predictions into binary token-level annotations to get the final score.', 'As Table 2 shows, the result of the N-gram CLM, which yields the highest performance, is remarkably close to the result of state-of-the-art NER systems (especially for English) given the simplicity of the model.']
[['SRILM', 'CogCompNER (ceiling)', 'Lample et al. (2016) (ceiling)'], None, ['SRILM']]
1
D18-1345table_3
NER results on 8 languages show that even a simplistic addition of CLM features to a standard NER model boosts performance. CogCompNER is run with standard features, including Brown clusters; (Lample et al., 2016) is run with default parameters and pre-trained embeddings. Unseen refers to performance on named entities in Test that were not seen in the training data. Full is performance on all entities in Test. Averages are computed over all languages other than English.
3
[['Model', 'Lample et al. (2016)', 'Full'], ['Model', 'Lample et al. (2016)', 'Unseen'], ['Model', 'CogCompNER', 'Full'], ['Model', 'CogCompNER', 'Unseen'], ['Model', 'CogCompNER+LM', 'Full'], ['Model', 'CogCompNER+LM', 'Unseen']]
1
[['eng'], ['amh'], ['ara'], ['ben'], ['fas'], ['hin'], ['som'], ['tgl'], ['avg']]
[['90.94', '73.2', '57.2', '77.7', '61.2', '77.7', '81.3', '83.2', '73.1'], ['86.11', '51.9', '30.2', '57.9', '41.4', '62.2', '66.5', '72.8', '54.7'], ['90.88', '67.5', '54.8', '74.5', '57.8', '73.5', '82.0', '80.9', '70.1'], ['84.40', '42.7', '25.0', '51.9', '31.5', '53.9', '67.2', '68.3', '48.6'], ['91.21', '71.3', '59.1', '75.5', '59.0', '74.2', '82.1', '78.5', '71.4'], ['85.20', '48.4', '32.0', '54.0', '31.2', '55.4', '68.0', '65.2', '50.6']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['CogCompNER+LM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>eng</th> <th>amh</th> <th>ara</th> <th>ben</th> <th>fas</th> <th>hin</th> <th>som</th> <th>tgl</th> <th>avg</th> </tr> </thead> <tbody> <tr> <td>Model || Lample et al. (2016) || Full</td> <td>90.94</td> <td>73.2</td> <td>57.2</td> <td>77.7</td> <td>61.2</td> <td>77.7</td> <td>81.3</td> <td>83.2</td> <td>73.1</td> </tr> <tr> <td>Model || Lample et al. (2016) || Unseen</td> <td>86.11</td> <td>51.9</td> <td>30.2</td> <td>57.9</td> <td>41.4</td> <td>62.2</td> <td>66.5</td> <td>72.8</td> <td>54.7</td> </tr> <tr> <td>Model || CogCompNER || Full</td> <td>90.88</td> <td>67.5</td> <td>54.8</td> <td>74.5</td> <td>57.8</td> <td>73.5</td> <td>82.0</td> <td>80.9</td> <td>70.1</td> </tr> <tr> <td>Model || CogCompNER || Unseen</td> <td>84.40</td> <td>42.7</td> <td>25.0</td> <td>51.9</td> <td>31.5</td> <td>53.9</td> <td>67.2</td> <td>68.3</td> <td>48.6</td> </tr> <tr> <td>Model || CogCompNER+LM || Full</td> <td>91.21</td> <td>71.3</td> <td>59.1</td> <td>75.5</td> <td>59.0</td> <td>74.2</td> <td>82.1</td> <td>78.5</td> <td>71.4</td> </tr> <tr> <td>Model || CogCompNER+LM || Unseen</td> <td>85.20</td> <td>48.4</td> <td>32.0</td> <td>54.0</td> <td>31.2</td> <td>55.4</td> <td>68.0</td> <td>65.2</td> <td>50.6</td> </tr> </tbody></table>
Table 3
table_3
D18-1345
4
emnlp2018
The results in Table 3 show that for six of the eight languages we studied, the baseline NER can be significantly improved by adding simple CLM features; for English and Arabic, it performs better even than the neural NER model of (Lample et al., 2016). For Tagalog, however, adding CLM features actually impairs system performance. In the same table, the rows marked “unseen” report systems’ performance on named entities in Test that were not seen in the training data. This setting more directly assesses the robustness of a system to identify named entities in new data. By this measure, Farsi NER is not improved by name-only CLM features and Tagalog is impaired. Benefits for English, Hindi, and Somali are limited, but are quite significant for Amharic, Arabic, and Bengali.
[1, 1, 1, 2, 1, 1]
['The results in Table 3 show that for six of the eight languages we studied, the baseline NER can be significantly improved by adding simple CLM features; for English and Arabic, it performs better even than the neural NER model of (Lample et al., 2016).', 'For Tagalog, however, adding CLM features actually impairs system performance.', 'In the same table, the rows marked “unseen” report systems’ performance on named entities in Test that were not seen in the training data.', 'This setting more directly assesses the robustness of a system to identify named entities in new data.', 'By this measure, Farsi NER is not improved by name-only CLM features and Tagalog is impaired.', 'Benefits for English, Hindi, and Somali are limited, but are quite significant for Amharic, Arabic, and Bengali.']
[['CogCompNER+LM', 'Lample et al. (2016)', 'eng', 'ara'], ['CogCompNER', 'tgl'], ['Unseen'], ['Unseen'], ['CogCompNER+LM', 'Unseen', 'fas', 'tgl'], ['CogCompNER+LM', 'Unseen', 'eng', 'amh', 'ara', 'ben', 'hin', 'som']]
1
D18-1349table_4
Comparison of F1 scores (weighted average by support (the number of true instances for each label)) between our model and the best published methods. The presented results of our model are evaluated on the test set of the run with the highest F1 score on the validation set.
3
[['Model', 'Best Published', 'Marco Lui (Lui 2012)'], ['Model', 'Best Published', 'bi-ANN (Dernoncourt et al. 2016)'], ['Model', 'Our Models', 'HSLN-CNN'], ['Model', 'Our Models', 'HSLN-RNN']]
2
[['PubMed', '20k'], ['PubMed', '200k'], ['NICTA', '-']]
[['-', '-', '82.0'], ['90.0', '91.6', '82.7'], ['92.2', '92.8', '84.7'], ['92.6', '93.9', '84.3']]
column
['F1', 'F1', 'F1']
['HSLN-CNN', 'HSLN-RNN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PubMed || 20k</th> <th>PubMed || 200k</th> <th>NICTA || -</th> </tr> </thead> <tbody> <tr> <td>Model || Best Published || Marco Lui (Lui 2012)</td> <td>-</td> <td>-</td> <td>82.0</td> </tr> <tr> <td>Model || Best Published || bi-ANN (Dernoncourt et al. 2016)</td> <td>90.0</td> <td>91.6</td> <td>82.7</td> </tr> <tr> <td>Model || Our Models || HSLN-CNN</td> <td>92.2</td> <td>92.8</td> <td>84.7</td> </tr> <tr> <td>Model || Our Models || HSLN-RNN</td> <td>92.6</td> <td>93.9</td> <td>84.3</td> </tr> </tbody></table>
Table 4
table_4
D18-1349
6
emnlp2018
5 Results and Discussion. Table 4 compares our model against the best performing models in the literature (Dernoncourt et al. 2016; Liu et al. 2013). There are two variants of our model in terms of different implementations of the sentence encoding layer: the model that uses bi-RNN to encode the sentence is called HSLN-RNN; while the model that uses the CNN module is named HSLN-CNN. We have evaluated both model variants on all datasets. And as evidenced by Table 4, our best model can improve the F1 scores by 2%-3% in absolute number compared with the previous best published results for all datasets. For the PubMed 20k and 200k datasets, our HSLN-RNN model achieves better results; however, for the NICTA dataset, the HSLN-CNN model performs better. This makes sense because the CNN sentence encoder has fewer parameters to be optimized, thus the HSLN-CNN model is less likely to over-fit in a smaller dataset such as NICTA. With sufficient data, however, the increased capacity of the HSLN-RNN model offers performance benefits. To be noted, this performance gap between RNN and CNN sentence encoder gets larger as the dataset size increases from 20k to 200k for the PubMed dataset.
[2, 1, 2, 1, 1, 1, 2, 2, 2]
['5 Results and Discussion.', 'Table 4 compares our model against the best performing models in the literature (Dernoncourt et al. 2016; Liu et al. 2013).', 'There are two variants of our model in terms of different implementations of the sentence encoding layer: the model that uses bi-RNN to encode the sentence is called HSLN-RNN; while the model that uses the CNN module is named HSLN-CNN.', 'We have evaluated both model variants on all datasets.', 'And as evidenced by Table 4, our best model can improve the F1 scores by 2%-3% in absolute number compared with the previous best published results for all datasets.', 'For the PubMed 20k and 200k datasets, our HSLN-RNN model achieves better results; however, for the NICTA dataset, the HSLN-CNN model performs better.', 'This makes sense because the CNN sentence encoder has fewer parameters to be optimized, thus the HSLN-CNN model is less likely to over-fit in a smaller dataset such as NICTA.', 'With sufficient data, however, the increased capacity of the HSLN-RNN model offers performance benefits.', 'To be noted, this performance gap between RNN and CNN sentence encoder gets larger as the dataset size increases from 20k to 200k for the PubMed dataset.']
[None, ['Best Published', 'Our Models'], ['HSLN-CNN', 'HSLN-RNN'], ['HSLN-CNN', 'HSLN-RNN', 'PubMed', 'NICTA'], ['Marco Lui (Lui 2012)', 'bi-ANN (Dernoncourt et al. 2016)', 'HSLN-CNN', 'HSLN-RNN'], ['HSLN-CNN', 'HSLN-RNN', 'PubMed', '20k', '200k', 'NICTA'], ['HSLN-CNN', 'NICTA'], ['HSLN-RNN'], ['HSLN-CNN', 'HSLN-RNN', 'PubMed', '20k', '200k']]
1
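The caption of this record defines its F1 as a weighted average by support. A minimal sketch of that averaging scheme, assuming scikit-learn's f1_score; the label names and predictions below are hypothetical placeholders, and only average='weighted' (each label's F1 weighted by its count of true instances) reflects what the caption describes:

```python
from sklearn.metrics import f1_score

# Hypothetical toy labels and predictions; only the averaging scheme is the point.
y_true = ['METHODS', 'METHODS', 'METHODS', 'RESULTS', 'RESULTS', 'BACKGROUND']
y_pred = ['METHODS', 'METHODS', 'RESULTS', 'RESULTS', 'RESULTS', 'METHODS']

# Per-label F1 scores are averaged with weights proportional to each label's support.
print(f1_score(y_true, y_pred, average='weighted'))
```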
D18-1349table_9
Comparison of performance with different choices of word embeddings for our HSLN-RNN model trained on the PubMed 20k dataset (reported on F1-scores on the test set). “P.M.” means PubMed.
2
[['Embedding', 'Glove-wiki'], ['Embedding', 'FastText-wiki'], ['Embedding', 'FastText-P.M.+MIMIC'], ['Embedding', 'Word2vec-News'], ['Embedding', 'Word2vec-wiki'], ['Embedding', 'Word2vec-wiki+P.M.']]
1
[['Dimension'], ['P.M. 20k']]
[['200', '92.0'], ['300', '92.2'], ['300', '92.0'], ['300', '92.2'], ['200', '92.1'], ['200', '92.6']]
column
['Dimension', 'F1-scores']
['Glove-wiki', 'FastText-wiki', 'FastText-P.M.+MIMIC', 'Word2vec-News', 'Word2vec-wiki', 'Word2vec-wiki+P.M.']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dimension</th> <th>P.M. 20k</th> </tr> </thead> <tbody> <tr> <td>Embedding || Glove-wiki</td> <td>200</td> <td>92.0</td> </tr> <tr> <td>Embedding || FastText-wiki</td> <td>300</td> <td>92.2</td> </tr> <tr> <td>Embedding || FastText-P.M.+MIMIC</td> <td>300</td> <td>92.0</td> </tr> <tr> <td>Embedding || Word2vec-News</td> <td>300</td> <td>92.2</td> </tr> <tr> <td>Embedding || Word2vec-wiki</td> <td>200</td> <td>92.1</td> </tr> <tr> <td>Embedding || Word2vec-wiki+P.M.</td> <td>200</td> <td>92.6</td> </tr> </tbody></table>
Table 9
table_9
D18-1349
8
emnlp2018
In order to test the importance of pretrained word embeddings, we performed experiments with different sets of publicly published word embeddings, as well as our locally curated word embeddings, to initialize our model. Table 9 gives the performance of six different word embeddings for our HSLN-RNN model trained on the PubMed 20k dataset. According to Table 9, the training methods that create the word embeddings do not have a strong influence on model performance, but the corpus they are trained on does. The combination of Wikipedia and PubMed abstracts as the corpus for unsupervised word embedding training yields the best result, and the individual use of either the Wikipedia corpus or the PubMed abstracts performs much worse. Although the dataset we are using for evaluation is also from PubMed abstracts, using only the PubMed abstracts together with MIMIC notes without the Wikipedia corpus does not guarantee a better result (see the “FastText-P.M.+MIMIC” embeddings in Table 9), which may be because the corpus size of PubMed abstracts plus MIMIC notes (about 12.8 million abstracts and 1 million notes) is not large enough for good embedding training compared with a corpus consisting of at least a billion tokens such as Wikipedia.
[2, 1, 1, 1, 1]
['In order to test the importance of pretrained word embeddings, we performed experiments with different sets of publicly published word embeddings, as well as our locally curated word embeddings, to initialize our model.', 'Table 9 gives the performance of six different word embeddings for our HSLN-RNN model trained on the PubMed 20k dataset.', 'According to Table 9, the training methods that create the word embeddings do not have a strong influence on model performance, but the corpus they are trained on does.', 'The combination of Wikipedia and PubMed abstracts as the corpus for unsupervised word embedding training yields the best result, and the individual use of either the Wikipedia corpus or the PubMed abstracts performs much worse.', 'Although the dataset we are using for evaluation is also from PubMed abstracts, using only the PubMed abstracts together with MIMIC notes without the Wikipedia corpus does not guarantee a better result (see the “FastText-P.M.+MIMIC” embeddings in Table 9), which may be because the corpus size of PubMed abstracts plus MIMIC notes (about 12.8 million abstracts and 1 million notes) is not large enough for good embedding training compared with a corpus consisting of at least a billion tokens such as Wikipedia.']
[None, ['Glove-wiki', 'FastText-wiki', 'FastText-P.M.+MIMIC', 'Word2vec-News', 'Word2vec-wiki', 'Word2vec-wiki+P.M.', 'P.M. 20k'], ['Glove-wiki', 'FastText-wiki', 'FastText-P.M.+MIMIC', 'Word2vec-News', 'Word2vec-wiki', 'Word2vec-wiki+P.M.', 'P.M. 20k'], ['Word2vec-News', 'Word2vec-wiki', 'Word2vec-wiki+P.M.'], ['FastText-P.M.+MIMIC', 'P.M. 20k']]
1
D18-1352table_2
MIMIC II results across frequent (S), few-shot (F), and zero-shot (Z) groups. We mark prior methods for MIMIC datasets that we implemented with a *.
1
[['Random'], ['Logistic (Vani et al. 2017) *'], ['CNN (Baumel et al. 2018) *'], ['ACNN (Mullenbach et al. 2018) *'], ['Match-CNN (Rios and Kavuluru, 2018)'], ['ESZSL + W2V'], ['ESZSL + W2V 2'], ['ESZSL + GRALS'], ['ZACNN'], ['ZAGCNN']]
2
[['S', 'R@5'], ['S', 'R@10'], ['F', 'R@5'], ['F', 'R@10'], ['Z', 'R@5'], ['Z', 'R@10'], ['Harmonic Average', 'R@5'], ['Harmonic Average', 'R@10']]
[['0.000', '0.000', '0.000', '0.000', '0.011', '0.032', '0.000', '0.000'], ['0.137', '0.247', '0.001', '0.003', '-', '-', '-', '-'], ['0.138', '0.250', '0.050', '0.082', '-', '-', '-', '-'], ['0.138', '0.255', '0.046', '0.081', '-', '-', '-', '-'], ['0.137', '0.247', '0.031', '0.042', '-', '-', '-', '-'], ['0.074', '0.119', '0.008', '0.017', '0.080', '0.172', '0.020', '0.041'], ['0.050', '0.086', '0.025', '0.044', '0.103', '0.189', '0.043', '0.076'], ['0.135', '0.238', '0.081', '0.123', '0.085', '0.136', '0.095', '0.152'], ['0.135', '0.245', '0.103', '0.149', '0.147', '0.221', '0.128', '0.205'], ['0.135', '0.247', '0.130', '0.185', '0.269', '0.362', '0.160', '0.246']]
column
['R@5', 'R@10', 'R@5', 'R@10', 'R@5', 'R@10', 'R@5', 'R@10']
['ZAGCNN', 'ZACNN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>S || R@5</th> <th>S || R@10</th> <th>F || R@5</th> <th>F || R@10</th> <th>Z || R@5</th> <th>Z || R@10</th> <th>Harmonic Average || R@5</th> <th>Harmonic Average || R@10</th> </tr> </thead> <tbody> <tr> <td>Random</td> <td>0.000</td> <td>0.000</td> <td>0.000</td> <td>0.000</td> <td>0.011</td> <td>0.032</td> <td>0.000</td> <td>0.000</td> </tr> <tr> <td>Logistic (Vani et al. 2017) *</td> <td>0.137</td> <td>0.247</td> <td>0.001</td> <td>0.003</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>CNN (Baumel et al. 2018) *</td> <td>0.138</td> <td>0.250</td> <td>0.050</td> <td>0.082</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>ACNN (Mullenbach et al. 2018) *</td> <td>0.138</td> <td>0.255</td> <td>0.046</td> <td>0.081</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Match-CNN (Rios and Kavuluru, 2018)</td> <td>0.137</td> <td>0.247</td> <td>0.031</td> <td>0.042</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>ESZSL + W2V</td> <td>0.074</td> <td>0.119</td> <td>0.008</td> <td>0.017</td> <td>0.080</td> <td>0.172</td> <td>0.020</td> <td>0.041</td> </tr> <tr> <td>ESZSL + W2V 2</td> <td>0.050</td> <td>0.086</td> <td>0.025</td> <td>0.044</td> <td>0.103</td> <td>0.189</td> <td>0.043</td> <td>0.076</td> </tr> <tr> <td>ESZSL + GRALS</td> <td>0.135</td> <td>0.238</td> <td>0.081</td> <td>0.123</td> <td>0.085</td> <td>0.136</td> <td>0.095</td> <td>0.152</td> </tr> <tr> <td>ZACNN</td> <td>0.135</td> <td>0.245</td> <td>0.103</td> <td>0.149</td> <td>0.147</td> <td>0.221</td> <td>0.128</td> <td>0.205</td> </tr> <tr> <td>ZAGCNN</td> <td>0.135</td> <td>0.247</td> <td>0.130</td> <td>0.185</td> <td>0.269</td> <td>0.362</td> <td>0.160</td> <td>0.246</td> </tr> </tbody></table>
Table 2
table_2
D18-1352
7
emnlp2018
Results. Table 2 shows the results for MIMIC II. Because the label set for each medical record is augmented using the ICD-9 hierarchy, we expect methods that use the hierarchy to have an advantage. Table 2 results do not rely on thresholding because we evaluate using the relative ranking of groups with similar frequencies. ACNN performs best on frequent labels. For few-shot labels, ZAGCNN outperforms ACNN by over 10% in R@10 and by 8% in R@5; compared to these R@k gains for few-shot labels, our loss on frequent labels is minimal (< 1%). We find that the word embedding derived label vectors work best for ESZSL on zero-shot labels. However, this setup is outperformed by GRALS derived label vectors on the frequent and few-shot labels. On zero-shot labels, ZAGCNN outperforms the best ESZSL variant by over 16% for both R@5 and R@10. Also, we find that the GCNN layers help both few-shot and zero-shot labels. Finally, similar to the setup in Xian et al. (2017), we also compute the harmonic average across all R@5 and all R@10 scores. The metric is only computed for methods that can predict zero-shot classes. We find that ZAGCNN outperforms ZACNN by 4% for R@10.
[2, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 2, 1]
['Results.', 'Table 2 shows the results for MIMIC II.', 'Because the label set for each medical record is augmented using the ICD-9 hierarchy, we expect methods that use the hierarchy to have an advantage.', 'Table 2 results do not rely on thresholding because we evaluate using the relative ranking of groups with similar frequencies.', 'ACNN performs best on frequent labels.', 'For few-shot labels, ZAGCNN outperforms ACNN by over 10% in R@10 and by 8% in R@5; compared to these R@k gains for few-shot labels, our loss on frequent labels is minimal (< 1%).', 'We find that the word embedding derived label vectors work best for ESZSL on zero-shot labels.', 'However, this setup is outperformed by GRALS derived label vectors on the frequent and few-shot labels.', 'On zero-shot labels, ZAGCNN outperforms the best ESZSL variant by over 16% for both R@5 and R@10.', 'Also, we find that the GCNN layers help both few-shot and zero-shot labels.', 'Finally, similar to the setup in Xian et al. (2017), we also compute the harmonic average across all R@5 and all R@10 scores.', 'The metric is only computed for methods that can predict zero-shot classes.', 'We find that ZAGCNN outperforms ZACNN by 4% for R@10.']
[None, None, None, None, ['ACNN (Mullenbach et al. 2018) *', 'S'], ['ZAGCNN', 'ACNN (Mullenbach et al. 2018) *', 'F', 'R@5', 'R@10', 'S'], ['ESZSL + W2V', 'ESZSL + W2V 2', 'Z'], ['ESZSL + GRALS', 'F', 'S'], ['Z', 'ZAGCNN', 'ESZSL + W2V 2', 'R@5', 'R@10'], ['ZAGCNN', 'F', 'Z'], ['Harmonic Average', 'R@5', 'R@10'], ['Harmonic Average'], ['ZAGCNN', 'ZACNN', 'Harmonic Average', 'R@10']]
1
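The 'Harmonic Average' columns of this record are consistent with a plain harmonic mean over the S, F and Z groups, matching the description's reference to the setup of Xian et al. (2017); a small check on ZAGCNN's R@10 values, copied from the contents field:

```python
from statistics import harmonic_mean

# ZAGCNN R@10 over the S / F / Z groups, from the D18-1352 table_2 record above.
r_at_10 = [0.247, 0.185, 0.362]

print(round(harmonic_mean(r_at_10), 3))  # 0.246, the Harmonic Average R@10 cell for ZAGCNN
```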
D18-1353table_3
Human evaluation results. Diacritic ** (p < 0.01) indicates MRL significantly outperforms baselines; ++ (p < 0.01) indicates GT is significantly better than all models.
2
[['Models', 'Base'], ['Models', 'Mem'], ['Models', 'MRL'], ['Models', 'GT']]
1
[['Fluency'], ['Coherence'], ['Meaning'], ['Overall Quality']]
[['3.28', '2.77', '2.63', '2.58'], ['3.23', '2.88', '2.68', '2.68'], ['4.05', '3.81', '3.68', '3.60'], ['4.14', '4.11', '4.16', '3.97']]
column
['Fluency', 'Coherence', 'Meaning', 'Overall Quality']
['MRL']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fluency</th> <th>Coherence</th> <th>Meaning</th> <th>Overall Quality</th> </tr> </thead> <tbody> <tr> <td>Models || Base</td> <td>3.28</td> <td>2.77</td> <td>2.63</td> <td>2.58</td> </tr> <tr> <td>Models || Mem</td> <td>3.23</td> <td>2.88</td> <td>2.68</td> <td>2.68</td> </tr> <tr> <td>Models || MRL</td> <td>4.05</td> <td>3.81</td> <td>3.68</td> <td>3.60</td> </tr> <tr> <td>Models || GT</td> <td>4.14</td> <td>4.11</td> <td>4.16</td> <td>3.97</td> </tr> </tbody></table>
Table 3
table_3
D18-1353
7
emnlp2018
Table 3 gives human evaluation results. MRL achieves better results than the other two models. Since fluency is quite easy to be optimized, our method gets close to human-authored poems on Fluency. The biggest gap between MRL and GT lies on Meaning. It’s a complex criterion involving the use of words, topic, emotion expression and so on. The utilization of TF-IDF does ameliorate the use of words on diversity and innovation, hence improving Meaningfulness to some extent, but there are still lots to do.
[1, 1, 2, 1, 2, 2]
['Table 3 gives human evaluation results.', 'MRL achieves better results than the other two models.', 'Since fluency is quite easy to be optimized, our method gets close to human-authored poems on Fluency.', 'The biggest gap between MRL and GT lies on Meaning.', 'It’s a complex criterion involving the use of words, topic, emotion expression and so on.', 'The utilization of TF-IDF does ameliorate the use of words on diversity and innovation, hence improving Meaningfulness to some extent, but there are still lots to do.']
[None, ['MRL', 'Base', 'Mem'], ['MRL'], ['MRL', 'GT'], None, None]
1
D18-1358table_6
Triple Classification Results. The results of baselines on WN11 and FB13 are directly taken from the original paper except DistMult. We obtain other results by ourselves.
2
[['Model', 'CTransR'], ['Model', 'TransD'], ['Model', 'TransG'], ['Model', 'TransE'], ['Model', 'TransH'], ['Model', 'DistMult'], ['Model', 'TransE-HRS'], ['Model', 'TransH-HRS'], ['Model', 'DistMult-HRS']]
1
[['WN11'], ['FB13'], ['FB15k'], ['Avg']]
[['85.7', '-', '84.4', '-'], ['86.4', '89.1', '88.2', '87.9'], ['87.4', '87.3', '88.5', '87.7'], ['75.9', '81.5', '78.7', '78.7'], ['78.8', '83.3', '81.1', '81.1'], ['87.1', '86.2', '86.3', '86.5'], ['86.8', '88.4', '87.6', '87.6'], ['87.6', '88.9', '88.7', '88.4'], ['88.9', '89.0', '89.1', '89.0']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['TransE-HRS', 'TransH-HRS', 'DistMult-HRS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WN11</th> <th>FB13</th> <th>FB15k</th> <th>Avg</th> </tr> </thead> <tbody> <tr> <td>Model || CTransR</td> <td>85.7</td> <td>-</td> <td>84.4</td> <td>-</td> </tr> <tr> <td>Model || TransD</td> <td>86.4</td> <td>89.1</td> <td>88.2</td> <td>87.9</td> </tr> <tr> <td>Model || TransG</td> <td>87.4</td> <td>87.3</td> <td>88.5</td> <td>87.7</td> </tr> <tr> <td>Model || TransE</td> <td>75.9</td> <td>81.5</td> <td>78.7</td> <td>78.7</td> </tr> <tr> <td>Model || TransH</td> <td>78.8</td> <td>83.3</td> <td>81.1</td> <td>81.1</td> </tr> <tr> <td>Model || DistMult</td> <td>87.1</td> <td>86.2</td> <td>86.3</td> <td>86.5</td> </tr> <tr> <td>Model || TransE-HRS</td> <td>86.8</td> <td>88.4</td> <td>87.6</td> <td>87.6</td> </tr> <tr> <td>Model || TransH-HRS</td> <td>87.6</td> <td>88.9</td> <td>88.7</td> <td>88.4</td> </tr> <tr> <td>Model || DistMult-HRS</td> <td>88.9</td> <td>89.0</td> <td>89.1</td> <td>89.0</td> </tr> </tbody></table>
Table 6
table_6
D18-1358
8
emnlp2018
4.4.2 Experimental Results. Finally, the evaluation results in Table 6 lead to the following findings:. (1) Our models outperform other baselines on WN11 and FB15k, and obtain comparable results with baselines on FB13, which validate the effectiveness of our models;. (2) The extended models TransE-HRS, TransH-HRS and DistMult-HRS achieve substantial improvements against the original models. On WN11, TransE-HRS outperforms TransE with a margin as large as 10.9%. These improvements indicate that the technique of utilizing the HRS information can be extended to different KGE models.
[2, 1, 1, 1, 1, 2]
['4.4.2 Experimental Results.', 'Finally, the evaluation results in Table 6 lead to the following findings:.', '(1) Our models outperform other baselines on WN11 and FB15k, and obtain comparable results with baselines on FB13, which validate the effectiveness of our models;.', '(2) The extended models TransE-HRS, TransH-HRS and DistMult-HRS achieve substantial improvements against the original models.', 'On WN11, TransE-HRS outperforms TransE with a margin as large as 10.9%.', 'These improvements indicate that the technique of utilizing the HRS information can be extended to different KGE models.']
[None, None, ['TransE-HRS', 'TransH-HRS', 'DistMult-HRS', 'WN11', 'FB13', 'FB15k'], ['TransE-HRS', 'TransH-HRS', 'DistMult-HRS', 'TransE', 'TransH', 'DistMult'], ['WN11', 'TransE-HRS', 'TransE'], None]
1
D18-1359table_4
Per-Relation Breakdown showing performance of each model on different relations.
2
[['Relation', 'isAffiliatedTo'], ['Relation', 'playsFor'], ['Relation', 'hasGender'], ['Relation', 'isConnectedTo'], ['Relation', 'isMarriedTo']]
2
[['Links Only', 'MRR'], ['Links Only', 'Hits@1'], ['+Numbers', 'MRR'], ['+Numbers', 'Hits@1'], ['+Description', 'MRR'], ['+Description', 'Hits@1'], ['+Images', 'MRR'], ['+Images', 'Hits@1']]
[['0.524', '0.401', '0.551', '0.467', '0.572', '0.481', '0.569', '0.478'], ['0.528', '0.413', '0.554', '0.471', '0.574', '0.486', '0.566', '0.476'], ['0.798', '0.596', '0.799', '0.599', '0.813', '0.627', '0.842', '0.683'], ['0.482', '0.367', '0.497', '0.379', '0.492', '0.384', '0.484', '0.372'], ['0.365', '0.207', '0.387', '0.221', '0.404', '0.296', '0.413', '0.326']]
column
['MRR', 'Hits@1', 'MRR', 'Hits@1', 'MRR', 'Hits@1', 'MRR', 'Hits@1']
['isAffiliatedTo', 'playsFor', 'hasGender', 'isConnectedTo', 'isMarriedTo']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Links Only || MRR</th> <th>Links Only || Hits@1</th> <th>+Numbers || MRR</th> <th>+Numbers || Hits@1</th> <th>+Description || MRR</th> <th>+Description || Hits@1</th> <th>+Images || MRR</th> <th>+Images || Hits@1</th> </tr> </thead> <tbody> <tr> <td>Relation || isAffiliatedTo</td> <td>0.524</td> <td>0.401</td> <td>0.551</td> <td>0.467</td> <td>0.572</td> <td>0.481</td> <td>0.569</td> <td>0.478</td> </tr> <tr> <td>Relation || playsFor</td> <td>0.528</td> <td>0.413</td> <td>0.554</td> <td>0.471</td> <td>0.574</td> <td>0.486</td> <td>0.566</td> <td>0.476</td> </tr> <tr> <td>Relation || hasGender</td> <td>0.798</td> <td>0.596</td> <td>0.799</td> <td>0.599</td> <td>0.813</td> <td>0.627</td> <td>0.842</td> <td>0.683</td> </tr> <tr> <td>Relation || isConnectedTo</td> <td>0.482</td> <td>0.367</td> <td>0.497</td> <td>0.379</td> <td>0.492</td> <td>0.384</td> <td>0.484</td> <td>0.372</td> </tr> <tr> <td>Relation || isMarriedTo</td> <td>0.365</td> <td>0.207</td> <td>0.387</td> <td>0.221</td> <td>0.404</td> <td>0.296</td> <td>0.413</td> <td>0.326</td> </tr> </tbody></table>
Table 4
table_4
D18-1359
8
emnlp2018
Relation Breakdown. We perform additional analysis on the YAGO dataset to gain a deeper understanding of the performance of our model using the ConvE method. Table 4 compares our models on some of the most frequent relations. As shown, the model that includes textual description significantly benefits the isAffiliatedTo and playsFor relations, as this information often appears in text. Moreover, images are useful for hasGender and isMarriedTo, while for the relation isConnectedTo, numerical (dates) are more effective than images.
[2, 2, 1, 1, 1]
['Relation Breakdown.', 'We perform additional analysis on the YAGO dataset to gain a deeper understanding of the performance of our model using the ConvE method.', 'Table 4 compares our models on some of the most frequent relations.', 'As shown, the model that includes textual description significantly benefits the isAffiliatedTo and playsFor relations, as this information often appears in text.', 'Moreover, images are useful for hasGender and isMarriedTo, while for the relation isConnectedTo, numerical (dates) are more effective than images.']
[None, None, ['isAffiliatedTo', 'playsFor', 'hasGender', 'isConnectedTo', 'isMarriedTo'], ['isAffiliatedTo', 'playsFor', '+Description'], ['hasGender', 'isMarriedTo', 'isConnectedTo', '+Images', '+Numbers']]
1
D18-1360table_4
Results for scientific keyphrase extraction and relation extraction on SemEval 2017 Task 10, comparing with previous best systems.
2
[['Model', '(Luan 2017)'], ['Model', 'Best SemEval'], ['Model', 'SCIIE']]
2
[['Span Indentification', 'P'], ['Span Indentification', 'R'], ['Span Indentification', 'F1'], ['Keyphrase Extraction', 'P'], ['Keyphrase Extraction', 'R'], ['Keyphrase Extraction', 'F1'], ['Relation Extraction', 'P'], ['Relation Extraction', 'R'], ['Relation Extraction', 'F1'], [' Overall', 'P'], ['Overall', 'R'], ['Overall', 'F1']]
[['-', '-', '56.9', '-', '-', '45.3', '-', '-', '-', '-', '-', '-'], ['55', '54', '55', '44', '43', '44', '36', '23', '28', '44', '41', '43'], ['62.2', '55.4', '58.6', '48.5', '43.8', '46.0', '40.4', '21.2', '27.8', '48.1', '41.8', '44.7']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1']
['SCIIE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Span Indentification || P</th> <th>Span Indentification || R</th> <th>Span Indentification || F1</th> <th>Keyphrase Extraction || P</th> <th>Keyphrase Extraction || R</th> <th>Keyphrase Extraction || F1</th> <th>Relation Extraction || P</th> <th>Relation Extraction || R</th> <th>Relation Extraction || F1</th> <th>Overall || P</th> <th>Overall || R</th> <th>Overall || F1</th> </tr> </thead> <tbody> <tr> <td>Model || (Luan 2017)</td> <td>-</td> <td>-</td> <td>56.9</td> <td>-</td> <td>-</td> <td>45.3</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || Best SemEval</td> <td>55</td> <td>54</td> <td>55</td> <td>44</td> <td>43</td> <td>44</td> <td>36</td> <td>23</td> <td>28</td> <td>44</td> <td>41</td> <td>43</td> </tr> <tr> <td>Model || SCIIE</td> <td>62.2</td> <td>55.4</td> <td>58.6</td> <td>48.5</td> <td>43.8</td> <td>46.0</td> <td>40.4</td> <td>21.2</td> <td>27.8</td> <td>48.1</td> <td>41.8</td> <td>44.7</td> </tr> </tbody></table>
Table 4
table_4
D18-1360
9
emnlp2018
Results on SemEval 17. Table 4 compares the results of our model with the state of the art on the SemEval 17 dataset for tasks of span identification, keyphrase extraction and relation extraction as well as the overall score. Span identification aims at identifying spans of entities. Keyphrase classification and relation extraction have the same setting as the entity and relation extraction in SCIERC. Our model outperforms all the previous models that use hand-designed features. We observe more significant improvement in span identification than keyphrase classification. This confirms the benefit of our model in enumerating spans (rather than BIO tagging in state-of-the-art systems). Moreover, we have competitive results compared to the previous state of the art in relation extraction. We observe less gain compared to the SCIERC dataset mainly because there are no coreference links, and the relation types are not comprehensive.
[2, 1, 2, 2, 1, 1, 2, 1, 2]
['Results on SemEval 17.', 'Table 4 compares the results of our model with the state of the art on the SemEval 17 dataset for tasks of span identification, keyphrase extraction and relation extraction as well as the overall score.', 'Span identification aims at identifying spans of entities.', 'Keyphrase classification and relation extraction have the same setting as the entity and relation extraction in SCIERC.', 'Our model outperforms all the previous models that use hand-designed features.', 'We observe more significant improvement in span identification than keyphrase classification.', 'This confirms the benefit of our model in enumerating spans (rather than BIO tagging in state-of-the-art systems).', 'Moreover, we have competitive results compared to the previous state of the art in relation extraction.', 'We observe less gain compared to the SCIERC dataset mainly because there are no coreference links, and the relation types are not comprehensive.']
[None, ['Span Identification', 'Keyphrase Extraction', 'Relation Extraction', 'Overall'], ['Span Identification'], ['Keyphrase Extraction', 'Relation Extraction'], ['SCIIE'], ['SCIIE', 'Span Identification', 'Keyphrase Extraction'], ['SCIIE', 'Span Identification'], ['SCIIE', 'Best SemEval', 'Relation Extraction'], None]
1
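For readers working with these records programmatically, the sketch below is illustrative only: the dict literal is a hand-trimmed copy of the F1 columns of the record above, and the variable name record is an assumption. It shows how the row_headers, column_headers, and contents fields line up as the index, columns, and cell values of a table.

import pandas as pd

# Trimmed copy of the D18-1360 table_4 record (F1 columns only).
record = {
    "row_headers": [["Model", "(Luan 2017)"], ["Model", "Best SemEval"], ["Model", "SCIIE"]],
    "column_headers": [["Span Identification", "F1"], ["Keyphrase Extraction", "F1"],
                       ["Relation Extraction", "F1"], ["Overall", "F1"]],
    "contents": [["56.9", "45.3", "-", "-"], ["55", "44", "28", "43"], ["58.6", "46.0", "27.8", "44.7"]],
}

# Multi-level row and column headers map directly onto pandas MultiIndex objects.
table = pd.DataFrame(
    record["contents"],
    index=pd.MultiIndex.from_tuples([tuple(h) for h in record["row_headers"]]),
    columns=pd.MultiIndex.from_tuples([tuple(h) for h in record["column_headers"]]),
)
print(table)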
D18-1362table_2
Query answering performance compared to state-of-the-art embedding based approaches (top part) and multi-hop reasoning approaches (bottom part). The @1, @10 and MRR metrics were multiplied by 100. We highlight the best approach in each category.
2
[['Model', 'DistMult (Yang et al. 2014)'], ['Model', 'ComplEx (Trouillon et al. 2016)'], ['Model', 'ConvE (Dettmers et al. 2018)'], ['Model', 'NeuralLP (Yang et al. 2017)'], ['Model', 'NTP-λ (Rocktaschel et. al. 2017)'], ['Model', 'MINERVA (Das et al. 2018)'], ['Model', 'Ours(ComplEx)'], ['Model', 'Ours(ConvE)']]
2
[['UMLS', '@1'], ['UMLS', '@10'], ['UMLS', 'MRR'], ['Kinship', '@1'], ['Kinship', '@10'], ['Kinship', 'MRR'], ['FB15k-237', '@1'], ['FB15k-237', '@10'], ['FB15k-237', 'MRR'], ['WN18RR', '@1'], ['WN18RR', '@10'], ['WN18RR', 'MRR'], ['NELL-995', '@1'], ['NELL-995', '@10'], ['NELL-995', 'MRR']]
[['82.1', '96.7', '86.8', '48.7', '90.4', '61.4', '32.4', '60.0', '41.7', '43.1', '52.4', '46.2', '55.2', '78.3', '64.1'], ['89.0', '99.2', '93.4', '81.8', '98.1', '88.4', '32.8', '61.6', '42.5', '41.8', '48.0', '43.7', '64.3', '86.0', '72.6'], ['93.2', '99.4', '95.7', '79.7', '98.1', '87.1', '34.1', '62.2', '43.5', '40.3', '54.0', '44.9', '67.8', '88.6', '76.1'], ['64.3', '96.2', '77.8', '47.5', '91.2', '61.9', '16.6', '34.8', '22.7', '37.6', '65.7', '46.3', '-', '-', '-'], ['84.3', '100', '91.2', '75.9', '87.8', '79.3', '-', '-', '-', '-', '-', '-', '-', '-', '-'], ['72.8', '96.8', '82.5', '60.5', '92.4', '72.0', '21.7', '45.6', '29.3', '41.3', '51.3', '44.8', '66.3', '83.1', '72.5'], ['88.7', '98.5', '92.9', '81.1', '98.2', '87.8', '32.9', '54.4', '39.3', '43.7', '54.2', '47.2', '65.5', '83.6', '72.2'], ['90.2', '99.2', '94.0', '78.9', '98.2', '86.5', '32.7', '56.4', '40.7', '41.8', '51.7', '45.0', '65.6', '84.4', '72.7']]
column
['@1', '@10', 'MRR', '@1', '@10', 'MRR', '@1', '@10', 'MRR', '@1', '@10', 'MRR', '@1', '@10', 'MRR']
['Ours(ComplEx)', 'Ours(ConvE)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>UMLS || @1</th> <th>UMLS || @10</th> <th>UMLS || MRR</th> <th>Kinship || @1</th> <th>Kinship || @10</th> <th>Kinship || MRR</th> <th>FB15k-237 || @1</th> <th>FB15k-237 || @10</th> <th>FB15k-237 || MRR</th> <th>WN18RR || @1</th> <th>WN18RR || @10</th> <th>WN18RR || MRR</th> <th>NELL-995 || @1</th> <th>NELL-995 || @10</th> <th>NELL-995 || MRR</th> </tr> </thead> <tbody> <tr> <td>Model || DistMult (Yang et al. 2014)</td> <td>82.1</td> <td>96.7</td> <td>86.8</td> <td>48.7</td> <td>90.4</td> <td>61.4</td> <td>32.4</td> <td>60.0</td> <td>41.7</td> <td>43.1</td> <td>52.4</td> <td>46.2</td> <td>55.2</td> <td>78.3</td> <td>64.1</td> </tr> <tr> <td>Model || ComplEx (Trouillon et al. 2016)</td> <td>89.0</td> <td>99.2</td> <td>93.4</td> <td>81.8</td> <td>98.1</td> <td>88.4</td> <td>32.8</td> <td>61.6</td> <td>42.5</td> <td>41.8</td> <td>48.0</td> <td>43.7</td> <td>64.3</td> <td>86.0</td> <td>72.6</td> </tr> <tr> <td>Model || ConvE (Dettmers et al. 2018)</td> <td>93.2</td> <td>99.4</td> <td>95.7</td> <td>79.7</td> <td>98.1</td> <td>87.1</td> <td>34.1</td> <td>62.2</td> <td>43.5</td> <td>40.3</td> <td>54.0</td> <td>44.9</td> <td>67.8</td> <td>88.6</td> <td>76.1</td> </tr> <tr> <td>Model || NeuralLP (Yang et al. 2017)</td> <td>64.3</td> <td>96.2</td> <td>77.8</td> <td>47.5</td> <td>91.2</td> <td>61.9</td> <td>16.6</td> <td>34.8</td> <td>22.7</td> <td>37.6</td> <td>65.7</td> <td>46.3</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || NTP-λ (Rocktaschel et. al. 2017)</td> <td>84.3</td> <td>100</td> <td>91.2</td> <td>75.9</td> <td>87.8</td> <td>79.3</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || MINERVA (Das et al. 2018)</td> <td>72.8</td> <td>96.8</td> <td>82.5</td> <td>60.5</td> <td>92.4</td> <td>72.0</td> <td>21.7</td> <td>45.6</td> <td>29.3</td> <td>41.3</td> <td>51.3</td> <td>44.8</td> <td>66.3</td> <td>83.1</td> <td>72.5</td> </tr> <tr> <td>Model || Ours(ComplEx)</td> <td>88.7</td> <td>98.5</td> <td>92.9</td> <td>81.1</td> <td>98.2</td> <td>87.8</td> <td>32.9</td> <td>54.4</td> <td>39.3</td> <td>43.7</td> <td>54.2</td> <td>47.2</td> <td>65.5</td> <td>83.6</td> <td>72.2</td> </tr> <tr> <td>Model || Ours(ConvE)</td> <td>90.2</td> <td>99.2</td> <td>94.0</td> <td>78.9</td> <td>98.2</td> <td>86.5</td> <td>32.7</td> <td>56.4</td> <td>40.7</td> <td>41.8</td> <td>51.7</td> <td>45.0</td> <td>65.6</td> <td>84.4</td> <td>72.7</td> </tr> </tbody></table>
Table 2
table_2
D18-1362
6
emnlp2018
5 Results 5.1 Model Comparison . Table 2 shows the evaluation results of our proposed approach and the baselines. The top part presents embedding based approaches and the bottom part presents multi-hop reasoning approaches. We find embedding based models perform strongly on several datasets, achieving overall best evaluation metrics on UMLS, Kinship, FB15K237 and NELL-995 despite their simplicity. While previous path based approaches achieve comparable performance on some of the datasets (WN18RR, NELL-995, and UMLS), they perform significantly worse than the embedding based models on the other datasets (9.1 and 14.2 absolute points lower on Kinship and FB15k-237 respectively). A possible reason for this is that embedding based methods map every link in the KG into the same embedding space, which implicitly encodes the connectivity of the whole graph. In contrast, path based models use the discrete representation of a KG as input, and therefore have to leave out a significant proportion of the combinatorial path space by selection. For some path based approaches, computation cost is a bottleneck. In particular, NeuralLP and NTP-λ failed to scale to the larger datasets and their results are omitted from the table, as Das et al. (2018) reported. Ours is the first multi-hop reasoning approach which is consistently comparable or better than embedding based approaches on all five datasets. The best single model, Ours(ConvE), improves the SOTA performance of path-based models on three datasets (UMLS, Kinship, and FB15k-237) by 4%, 9%, and 39% respectively. On NELL-995, our approach did not significantly improve over existing SOTA. The NELL-995 dataset consists of only 12 relations in the test set and, as we further detail in the analysis (§ 5.3.3), our approach is less effective for those relation types. The model variations using different reward shaping modules perform similarly. While a better reward shaping module typically results in a better overall model, an exception is WN18RR, where ComplEx performs slightly worse on its own but is more helpful for reward shaping. We left the study of the relationship between the reward shaping module accuracy and the overall model performance as future work.
[2, 1, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1, 2, 2, 1, 2]
['5 Results 5.1 Model Comparison .', 'Table 2 shows the evaluation results of our proposed approach and the baselines.', 'The top part presents embedding based approaches and the bottom part presents multi-hop reasoning approaches.', 'We find embedding based models perform strongly on several datasets, achieving overall best evaluation metrics on UMLS, Kinship, FB15K237 and NELL-995 despite their simplicity.', 'While previous path based approaches achieve comparable performance on some of the datasets (WN18RR, NELL-995, and UMLS), they perform significantly worse than the embedding based models on the other datasets (9.1 and 14.2 absolute points lower on Kinship and FB15k-237 respectively).', 'A possible reason for this is that embedding based methods map every link in the KG into the same embedding space, which implicitly encodes the connectivity of the whole graph.', 'In contrast, path based models use the discrete representation of a KG as input, and therefore have to leave out a significant proportion of the combinatorial path space by selection.', 'For some path based approaches, computation cost is a bottleneck.', 'In particular, NeuralLP and NTP-λ failed to scale to the larger datasets and their results are omitted from the table, as Das et al. (2018) reported.', 'Ours is the first multi-hop reasoning approach which is consistently comparable or better than embedding based approaches on all five datasets.', 'The best single model, Ours(ConvE), improves the SOTA performance of path-based models on three datasets (UMLS, Kinship, and FB15k-237) by 4%, 9%, and 39% respectively.', 'On NELL-995, our approach did not significantly improve over existing SOTA.', 'The NELL-995 dataset consists of only 12 relations in the test set and, as we further detail in the analysis (§ 5.3.3), our approach is less effective for those relation types.', 'The model variations using different reward shaping modules perform similarly.', 'While a better reward shaping module typically results in a better overall model, an exception is WN18RR, where ComplEx performs slightly worse on its own but is more helpful for reward shaping.', 'We left the study of the relationship between the reward shaping module accuracy and the overall model performance as future work.']
[None, None, ['DistMult (Yang et al. 2014)', 'ComplEx (Trouillon et al. 2016)', 'ConvE (Dettmers et al. 2018)', 'NeuralLP (Yang et al. 2017)', 'NTP-λ (Rocktaschel et. al. 2017)', 'MINERVA (Das et al. 2018)', 'Ours(ComplEx)', 'Ours(ConvE)'], ['DistMult (Yang et al. 2014)', 'ComplEx (Trouillon et al. 2016)', 'ConvE (Dettmers et al. 2018)', 'UMLS', 'Kinship', 'FB15k-237', 'NELL-995'], ['DistMult (Yang et al. 2014)', 'ComplEx (Trouillon et al. 2016)', 'ConvE (Dettmers et al. 2018)', 'UMLS', 'Kinship', 'FB15k-237', 'WN18RR', 'NELL-995'], None, None, None, ['NeuralLP (Yang et al. 2017)', 'NTP-λ (Rocktaschel et. al. 2017)'], ['Ours(ComplEx)', 'Ours(ConvE)'], ['Ours(ConvE)', 'UMLS', 'Kinship', 'FB15k-237'], ['Ours(ComplEx)', 'Ours(ConvE)', 'NELL-995'], ['NELL-995'], None, ['ComplEx (Trouillon et al. 2016)'], None]
1
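As a reminder of what the @1, @10, and MRR columns above denote, here is a small illustrative sketch of the standard definitions, computed from the rank of the gold answer per query; the toy ranks list is invented and does not correspond to any of the listed models (the table reports these values multiplied by 100).

def hits_at_k(ranks, k):
    # Fraction of queries whose correct answer is ranked within the top k.
    return sum(1 for r in ranks if r <= k) / len(ranks)

def mean_reciprocal_rank(ranks):
    # Average of 1/rank over all queries.
    return sum(1.0 / r for r in ranks) / len(ranks)

ranks = [1, 3, 2, 15, 1]  # hypothetical rank of the correct entity for five test queries
print(hits_at_k(ranks, 1))                     # 0.4
print(hits_at_k(ranks, 10))                    # 0.8
print(round(mean_reciprocal_rank(ranks), 2))   # 0.58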
D18-1362table_5
MRR evaluation of seen queries vs. unseen queries on five datasets. The % columns show the percentage of examples of seen/unseen queries found in the development split of the corresponding dataset.
2
[['Dataset', 'UMLS'], ['Dataset', 'Kinship'], ['Dataset', 'FB15k-237'], ['Dataset', 'WN18RR'], ['Dataset', 'NELL-995']]
2
[['Seen Queries', '%'], ['Seen Queries', 'Ours(ConvE)'], ['Seen Queries', '-RS'], ['Seen Queries', '-AD'], ['Unseen Queries', '%'], ['Unseen Queries', 'Ours(ConvE)'], ['Unseen Queries', '-RS'], ['Unseen Queries', '-AD']]
[['97.2', '73.1', '67.9 (-7%)', '61.4 (-16%)', '2.8', '68.5', '61.5 (-10%)', '58.7 (-14%)'], ['96.8', '75.1', '66.5 (-11%)', '65.8 (-12%)', '3.2', '73.6', '64.3 (-13%)', '53.3 (-27%)'], ['76.1', '28.3', '24.3 (-14%)', '20.6 (-27%)', '23.9', '70.9', '69.1 (-2%)', '63.9 (-10%)'], ['41.8', '60.8', '62.0 (+2%)', '53.4 (-12%)', '58.2', '31.5', '33.9 (+7%)', '28.8 (-9%)'], ['15.3', '40.4', '45.9 (+14%)', '42.5 (+5%)', '84.7', '85.5', '84.7 (-1%)', '84.3 (-1%)']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['Ours(ConvE)', '-RS', '-AD']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Seen Queries || %</th> <th>Seen Queries || Ours(ConvE)</th> <th>Seen Queries || -RS</th> <th>Seen Queries || -AD</th> <th>Unseen Queries || %</th> <th>Unseen Queries || Ours(ConvE)</th> <th>Unseen Queries || -RS</th> <th>Unseen Queries || -AD</th> </tr> </thead> <tbody> <tr> <td>Dataset || UMLS</td> <td>97.2</td> <td>73.1</td> <td>67.9 (-7%)</td> <td>61.4 (-16%)</td> <td>2.8</td> <td>68.5</td> <td>61.5 (-10%)</td> <td>58.7 (-14%)</td> </tr> <tr> <td>Dataset || Kinship</td> <td>96.8</td> <td>75.1</td> <td>66.5 (-11%)</td> <td>65.8 (-12%)</td> <td>3.2</td> <td>73.6</td> <td>64.3 (-13%)</td> <td>53.3 (-27%)</td> </tr> <tr> <td>Dataset || FB15k-237</td> <td>76.1</td> <td>28.3</td> <td>24.3 (-14%)</td> <td>20.6 (-27%)</td> <td>23.9</td> <td>70.9</td> <td>69.1 (-2%)</td> <td>63.9 (-10%)</td> </tr> <tr> <td>Dataset || WN18RR</td> <td>41.8</td> <td>60.8</td> <td>62.0 (+2%)</td> <td>53.4 (-12%)</td> <td>58.2</td> <td>31.5</td> <td>33.9 (+7%)</td> <td>28.8 (-9%)</td> </tr> <tr> <td>Dataset || NELL-995</td> <td>15.3</td> <td>40.4</td> <td>45.9 (+14%)</td> <td>42.5 (+5%)</td> <td>84.7</td> <td>85.5</td> <td>84.7 (-1%)</td> <td>84.3 (-1%)</td> </tr> </tbody></table>
Table 5
table_5
D18-1362
9
emnlp2018
Table 5 shows the percentage of examples associated with seen and unseen queries on each dev dataset and the corresponding MRR evaluation metrics of previously studied models. On most datasets, the ratio of seen vs. unseen queries is similar to that of to-many vs. to-one relations (Table 4) as a result of random data split, with the exception of WN18RR. On some datasets, all models perform better on seen queries (UMLS, Kinship, WN18RR) while others reveal the opposite trend. We leave the study of these model behaviors to future work. On NELL-995 both of our proposed enhancements are not effective over the seen queries. In most cases, our proposed enhancements improve the performance over unseen queries, with AD being more effective.
[1, 1, 1, 2, 1, 1]
['Table 5 shows the percentage of examples associated with seen and unseen queries on each dev dataset and the corresponding MRR evaluation metrics of previously studied models.', 'On most datasets, the ratio of seen vs. unseen queries is similar to that of to-many vs. to-one relations (Table 4) as a result of random data split, with the exception of WN18RR.', 'On some datasets, all models perform better on seen queries (UMLS, Kinship, WN18RR) while others reveal the opposite trend.', 'We leave the study of these model behaviors to future work.', 'On NELL-995 both of our proposed enhancements are not effective over the seen queries.', 'In most cases, our proposed enhancements improve the performance over unseen queries, with AD being more effective.']
[['%', 'Seen Queries', 'Unseen Queries'], ['UMLS', 'Kinship', 'FB15k-237', 'NELL-995', '%', 'Seen Queries', 'Unseen Queries'], ['Seen Queries', 'UMLS', 'Kinship', 'WN18RR'], None, ['NELL-995', '-RS', '-AD', 'Seen Queries'], ['Unseen Queries', '-RS', '-AD']]
1
D18-1363table_1
Accuracy on PC for SIG17+SHIP (the shared task baseline SIG17 with SHIP), MED+PT (MED with paradigm transduction), MED+PT+SHIP (MED with paradigm transduction and SHIP), as well as all baselines (BL). Results are averaged over all languages, and best results are in bold; detailed accuracies for all languages can be found in Appendix A.
1
[['BL: COPY'], ['BL: MED'], ['BL: PT'], ['BL: SIG17'], ['SIG17+SHIP'], ['MED+PT'], ['MED+PT+SHIP']]
1
[['SET1'], ['SET2'], ['SET3']]
[['.0810', '.0810', '.0810'], ['.0004', '.0432', '.4211'], ['.0833', '.0833', '.0775'], ['.5012', '.6576', '.7707'], ['.5971', '.7355', '.8008'], ['.5808', '.7486', '.8454'], ['.5793', '.7547', '.8483']]
column
['accuracy', 'accuracy', 'accuracy']
['SIG17+SHIP', 'MED+PT', 'MED+PT+SHIP']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SET1</th> <th>SET2</th> <th>SET3</th> </tr> </thead> <tbody> <tr> <td>BL: COPY</td> <td>.0810</td> <td>.0810</td> <td>.0810</td> </tr> <tr> <td>BL: MED</td> <td>.0004</td> <td>.0432</td> <td>.4211</td> </tr> <tr> <td>BL: PT</td> <td>.0833</td> <td>.0833</td> <td>.0775</td> </tr> <tr> <td>BL: SIG17</td> <td>.5012</td> <td>.6576</td> <td>.7707</td> </tr> <tr> <td>SIG17+SHIP</td> <td>.5971</td> <td>.7355</td> <td>.8008</td> </tr> <tr> <td>MED+PT</td> <td>.5808</td> <td>.7486</td> <td>.8454</td> </tr> <tr> <td>MED+PT+SHIP</td> <td>.5793</td> <td>.7547</td> <td>.8483</td> </tr> </tbody></table>
Table 1
table_1
D18-1363
6
emnlp2018
4.4 Results. Our results are shown in Table 1. For SET1, SIG17+SHIP obtains the highest accuracy, while, for SET2 and SET3, MED+PT+SHIP performs best. This difference can be easily explained by the fact that the performance of neural networks decreases rapidly for smaller training sets, and, while paradigm transduction strongly mitigates this problem, it cannot completely eliminate it. Overall, however, SIG17+SHIP, MED+PT, and MED+PT+SHIP all outperform the baselines by a wide margin for all settings. Effect of paradigm transduction. On average, MED+PT clearly outperforms SIG17, the strongest baseline: by .0796 (.5808-.5012) on SET1, .0910 (.7486-.6576) on SET2, and .0747 (.8454-.7707) on SET3. However, looking at each language individually (refer to Appendix A for those results), we find that MED+PT performs poorly for a few languages, namely Danish, English, and Norwegian (Bokmal & Nynorsk). We hypothesize that this can most likely be explained by the size of the input subset of those languages being small (cf. Figure 2 for average input subset sizes per language). Recall that the input subset is explored by the model during transduction. Most poorly performing languages have input subsets containing only the lemma; in this case paradigm transduction reduces to autoencoding the lemma. Thus, we conclude that paradigm transduction can only improve over MED if two or more sources are given. Conversely, if we consider only the languages with an average input subset size of more than 15 (Basque, Haida, Hindi, Khaling, Persian, and Quechua), the average accuracy of MED+PT for SET1 is 0.9564, compared to an overall average of 0.5808. This observation shows clearly that paradigm transduction obtains strong results if many forms per paradigm are given. Effect of SHIP. Further, Table 1 shows that SIG17+SHIP is better than SIG17 by .0959 (.5971-.5012) on SET1, .0779 (.7355-.6576) on SET2, and .0301 (.8008-.7707) on SET3. Stronger effects for smaller amounts of training data indicate that SHIP's strategy of selecting a single reliable source is more important for weaker final models; in these cases, selecting the most deterministic source reduces errors due to noise. In contrast, the performance of MED, the neural model, is relatively independent of the choice of source; this is in line with earlier findings (Cotterell et al., 2016). However, even for MED+PT, adding SHIP (i.e., MED+PT+SHIP) slightly increases accuracy by .0061 (.7547-.7486) on SET2, and .0029 (.8483-.8454) on SET3. Ablation. MED does not perform well for either SET1 or SET2. In contrast, on SET3 it even outperforms SIG17 for a few languages. However, MED loses against MED+PT in all cases, highlighting the positive effect of paradigm transduction. Looking at PT next, even though PT does not have a zero accuracy for any setting or language, it performs consistently worse than MED+PT. For SET3, PT is even lower than MED on average, by .3436 (.4211-.0775). Note that, in contrast to the other methods, PT’s performance is not dependent on the size of the training set. The main determinant for PT’s performance is the size of the input subset during transductive inference. If the input subset is large, PT can perform better than MED, e.g., for Hindi and Urdu. For Khaling SET1, PT even outperforms both MED and SIG17. However, in most cases, PT does not perform well on its own. MED+PT outperforms both MED and PT. This confirms our initial intuition: MED and PT learn complementary information for paradigm completion. The base model learns the general structure of the language (i.e., correspondences between tags and inflections) while paradigm transduction teaches the model which character sequences are common in a specific test paradigm.
[2, 1, 1, 2, 1, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 1, 2, 1, 1, 2, 1, 2, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2]
['4.4 Results.', 'Our results are shown in Table 1.', 'For SET1, SIG17+SHIP obtains the highest accuracy, while, for SET2 and SET3, MED+PT+SHIP performs best.', 'This difference can be easily explained by the fact that the performance of neural networks decreases rapidly for smaller training sets, and, while paradigm transduction strongly mitigates this problem, it cannot completely eliminate it.', 'Overall, however, SIG17+SHIP, MED+PT, and MED+PT+SHIP all outperform the baselines by a wide margin for all settings.', 'Effect of paradigm transduction.', 'On average, MED+PT clearly outperforms SIG17, the strongest baseline: by .0796 (.5808-.5012) on SET1, .0910 (.7486-.6576) on SET2, and .0747 (.8454-.7707) on SET3.', 'However, looking at each language individually (refer to Appendix A for those results), we find that MED+PT performs poorly for a few languages, namely Danish, English, and Norwegian (Bokmal & Nynorsk).', 'We hypothesize that this can most likely be explained by the size of the input subset of those languages being small (cf. Figure 2 for average input subset sizes per language).', 'Recall that the input subset is explored by the model during transduction.', 'Most poorly performing languages have input subsets containing only the lemma; in this case paradigm transduction reduces to autoencoding the lemma.', 'Thus, we conclude that paradigm transduction can only improve over MED if two or more sources are given.', 'Conversely, if we consider only the languages with an average input subset size of more than 15 (Basque, Haida, Hindi, Khaling, Persian, and Quechua), the average accuracy of MED+PT for SET1 is 0.9564, compared to an overall average of 0.5808.', 'This observation shows clearly that paradigm transduction obtains strong results if many forms per paradigm are given.', 'Effect of SHIP.', 'Further, Table 1 shows that SIG17+SHIP is better than SIG17 by .0959 (.5971-.5012) on SET1, .0779 (.7355-.6576) on SET2, and .0301 (.8008-.7707) on SET3.', "Stronger effects for smaller amounts of training data indicate that SHIP's strategy of selecting a single reliable source is more important for weaker final models; in these cases, selecting the most deterministic source reduces errors due to noise.", 'In contrast, the performance of MED, the neural model, is relatively independent of the choice of source; this is in line with earlier findings (Cotterell et al., 2016).', 'However, even for MED+PT, adding SHIP (i.e., MED+PT+SHIP) slightly increases accuracy by .0061 (.7547-.7486) on SET2, and .0029 (.8483-.8454) on SET3.', 'Ablation.', 'MED does not perform well for either SET1 or SET2.', 'In contrast, on SET3 it even outperforms SIG17 for a few languages.', 'However, MED loses against MED+PT in all cases, highlighting the positive effect of paradigm transduction.', 'Looking at PT next, even though PT does not have a zero accuracy for any setting or language, it performs consistently worse than MED+PT.', 'For SET3, PT is even lower than MED on average, by .3436 (.4211-.0775).', 'Note that, in contrast to the other methods, PT’s performance is not dependent on the size of the training set.', 'The main determinant for PT’s performance is the size of the input subset during transductive inference.', 'If the input subset is large, PT can perform better than MED, e.g., for Hindi and Urdu.', 'For Khaling SET1, PT even outperforms both MED and SIG17.', 'However, in most cases, PT does not perform well on its own.', 'MED+PT outperforms both MED and PT.', 'This confirms our initial intuition: MED and PT learn complementary information for paradigm completion.', 'The base model learns the general structure of the language (i.e., correspondences between tags and inflections) while paradigm transduction teaches the model which character sequences are common in a specific test paradigm.']
[None, None, ['SET1', 'SET2', 'SET3', 'SIG17+SHIP', 'MED+PT+SHIP'], None, ['SIG17+SHIP', 'MED+PT', 'MED+PT+SHIP'], None, ['MED+PT', 'BL: SIG17', 'SET1', 'SET2', 'SET3'], ['MED+PT'], None, None, None, None, ['MED+PT', 'SET1'], None, None, ['SIG17+SHIP', 'BL: SIG17', 'SET1', 'SET2', 'SET3'], ['SIG17+SHIP'], None, ['MED+PT', 'MED+PT+SHIP', 'SET2', 'SET3'], None, ['BL: MED', 'SET1', 'SET2'], ['BL: MED', 'BL: SIG17', 'SET3'], ['BL: MED', 'MED+PT'], ['BL: PT', 'MED+PT'], ['BL: PT', 'BL: MED', 'SET3'], ['BL: PT'], ['BL: PT'], ['BL: PT', 'BL: MED'], ['SET1', 'BL: MED', 'BL: SIG17', 'BL: PT'], ['BL: MED', 'BL: SIG17', 'BL: PT'], ['MED+PT', 'BL: PT', 'BL: MED'], ['BL: PT', 'BL: MED'], None]
1
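The table_html_clean field of each record is plain HTML and can be parsed straight back into a tabular object. The sketch below is illustrative only: it uses a hand-shortened copy of the HTML above (two rows kept) and assumes pandas with an HTML parser such as lxml is available.

from io import StringIO
import pandas as pd

html = ("<table border='1' class='dataframe'><thead><tr><th></th><th>SET1</th><th>SET2</th><th>SET3</th></tr></thead>"
        "<tbody><tr><td>BL: SIG17</td><td>.5012</td><td>.6576</td><td>.7707</td></tr>"
        "<tr><td>MED+PT+SHIP</td><td>.5793</td><td>.7547</td><td>.8483</td></tr></tbody></table>")

tables = pd.read_html(StringIO(html))  # returns one DataFrame per <table> element
print(tables[0])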
D18-1366table_3
NER results for monolingual experiments. Metric F1 (out of 100%)
4
[['Model', 'Ours', 'subword units', 'Char-ngrams + Lemma + Morph'], ['Model', 'Ours', 'subword units', 'Char-ngrams + Lemma'], ['Model', 'Ours', 'subword units', 'Char-ngrams + Morph'], ['Model', 'prop2vec', 'subword units', 'Word + Lemma'], ['Model', 'prop2vec', 'subword units', 'Word + Morph'], ['Model', 'prop2vec', 'subword units', 'Word + Lemma + Morph'], ['Model', 'fastText', 'subword units', 'Char-ngrams'], ['Model', 'word2vec', 'subword units', 'Word'], ['Model', 'Random', 'subword units', 'No embedding']]
1
[['Turkish'], ['Uyghur'], ['Hindi'], ['Bengali']]
[['68.06', '52.50', '73.15', '52.77'], ['68.61', '52.40', '73.37', '52.09'], ['67.97', '47.80', '73.46', '52.06'], ['66.52', '46.00', '71.82', '50.03'], ['64.45', '46.00', '71.52', '49.27'], ['68.46', '47.70', '70.51', '48.16'], ['66.81', '50.80', '72.67', '52.10'], ['62.85', '46.80', '72.04', '49.83'], ['58.94', '31.30', '59.89', '21.25']]
column
['F1', 'F1', 'F1', 'F1']
['Char-ngrams + Lemma + Morph', 'Char-ngrams + Lemma', 'Char-ngrams + Morph']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Turkish</th> <th>Uyghur</th> <th>Hindi</th> <th>Bengali</th> </tr> </thead> <tbody> <tr> <td>Model || Ours || subword units || Char-ngrams + Lemma + Morph</td> <td>68.06</td> <td>52.50</td> <td>73.15</td> <td>52.77</td> </tr> <tr> <td>Model || Ours || subword units || Char-ngrams + Lemma</td> <td>68.61</td> <td>52.40</td> <td>73.37</td> <td>52.09</td> </tr> <tr> <td>Model || Ours || subword units || Char-ngrams + Morph</td> <td>67.97</td> <td>47.80</td> <td>73.46</td> <td>52.06</td> </tr> <tr> <td>Model || prop2vec || subword units || Word + Lemma</td> <td>66.52</td> <td>46.00</td> <td>71.82</td> <td>50.03</td> </tr> <tr> <td>Model || prop2vec || subword units || Word + Morph</td> <td>64.45</td> <td>46.00</td> <td>71.52</td> <td>49.27</td> </tr> <tr> <td>Model || prop2vec || subword units || Word + Lemma + Morph</td> <td>68.46</td> <td>47.70</td> <td>70.51</td> <td>48.16</td> </tr> <tr> <td>Model || fastText || subword units || Char-ngrams</td> <td>66.81</td> <td>50.80</td> <td>72.67</td> <td>52.10</td> </tr> <tr> <td>Model || word2vec || subword units || Word</td> <td>62.85</td> <td>46.80</td> <td>72.04</td> <td>49.83</td> </tr> <tr> <td>Model || Random || subword units || No embedding</td> <td>58.94</td> <td>31.30</td> <td>59.89</td> <td>21.25</td> </tr> </tbody></table>
Table 3
table_3
D18-1366
7
emnlp2018
For example, lemma + morph means lemma and morph embeddings are first pre-trained on the resource-rich language and then used to initialize the respective lemma and morph representations for the low-resource language. Monolingual Experiments:. Table 3 shows our results on all languages. We get +5.8 F1 points for Turkish, +4.8 F1 for Uyghur, +0.8 F1 for Hindi and +0.7 F1 for Bengali over the existing methods. We observe that a combination of character-ngrams, lemma and morphological properties gives the best performance for Uyghur and Bengali. Adding morph hurts in Turkish, in contrast to Hindi, where it helps. Section 5.3.3 discusses plausible reasons for this.
[2, 2, 1, 1, 1, 1, 2]
['For example, lemma + morph means lemma and morph embeddings are first pre-trained on the resource-rich language and then used to initialize the respective lemma and morph representations for the low-resource language.', 'Monolingual Experiments:.', 'Table 3 shows our results on all languages.', 'We get +5.8 F1 points for Turkish, +4.8 F1 for Uyghur, +0.8 F1 for Hindi and +0.7 F1 for Bengali over the existing methods.', 'We observe that a combination of character-ngrams, lemma and morphological properties gives the best performance for Uyghur and Bengali.', 'Adding morph hurts in Turkish, in contrast to Hindi, where it helps.', 'Section 5.3.3 discusses plausible reasons for this.']
[None, None, ['Turkish', 'Uyghur', 'Hindi', 'Bengali'], ['Char-ngrams + Lemma + Morph', 'Char-ngrams + Lemma', 'Char-ngrams + Morph', 'Turkish', 'Uyghur', 'Hindi', 'Bengali'], ['Char-ngrams + Lemma + Morph', 'Turkish', 'Uyghur', 'Bengali'], ['Char-ngrams + Morph', 'Turkish', 'Hindi'], None]
1
D18-1367table_1
Mean accuracy of 10-fold cross validation in the Hype-Par setting. Column QQ shows results for our handcrafted quality and quantity features; the last two columns are concatenations of QQ with the Skip-gram and GloVe baseline features.
1
[['LR'], ['KNN'], ['NB'], ['DT'], ['SVM'], ['LDA']]
1
[['Baseline1'], ['QQ'], ['Skip-gram'], ['GloVe'], ['Skip-gram+QQ'], ['GloVe+QQ']]
[['.50', '.64', '.68', '.66', '.72', '.69'], ['.50', '.63', '.47', '.43', '.52', '.48'], ['.50', '.66', '.69', '.66', '.69', '.68'], ['.50', '.60', '.54', '.53', '.55', '.54'], ['.50', '.64', '.15', '.62', '.63', '.64'], ['.50', '.61', '.67', '.65', '.68', '.67']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['QQ', 'Skip-gram+QQ', 'GloVe+QQ']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Baseline1</th> <th>QQ</th> <th>Skip-gram</th> <th>GloVe</th> <th>Skip-gram+QQ</th> <th>GloVe+QQ</th> </tr> </thead> <tbody> <tr> <td>LR</td> <td>.50</td> <td>.64</td> <td>.68</td> <td>.66</td> <td>.72</td> <td>.69</td> </tr> <tr> <td>KNN</td> <td>.50</td> <td>.63</td> <td>.47</td> <td>.43</td> <td>.52</td> <td>.48</td> </tr> <tr> <td>NB</td> <td>.50</td> <td>.66</td> <td>.69</td> <td>.66</td> <td>.69</td> <td>.68</td> </tr> <tr> <td>DT</td> <td>.50</td> <td>.60</td> <td>.54</td> <td>.53</td> <td>.55</td> <td>.54</td> </tr> <tr> <td>SVM</td> <td>.50</td> <td>.64</td> <td>.15</td> <td>.62</td> <td>.63</td> <td>.64</td> </tr> <tr> <td>LDA</td> <td>.50</td> <td>.61</td> <td>.67</td> <td>.65</td> <td>.68</td> <td>.67</td> </tr> </tbody></table>
Table 1
table_1
D18-1367
7
emnlp2018
Cross-validation is conducted in two settings. In one (Hype-Par), the non-hyperbolic sentences are paraphrases; in the other (Hype-Min), literal data come from the Minimal Units Corpus. The results are shown in Table 1 and Table 2. While in the Hype-Min setting the performance is not satisfying, estimators achieve above chance accuracy using paraphrases as literal inputs. In fact, in Table 1, the accuracy scores based on quantity-quality vectors (QQ column) suggest that our handcrafted features are actually useful for detecting hyperboles. Therefore, to gain further insight on their informativeness, we conduct a recurrent feature ablation and observe how different subsets affect predictions. Figure 1 illustrates that 5 features can maximize the accuracy of LR. SVM and LDA behave the same with a set of the same size, and they all assign high weights to imageability, unexpectedness and subjectivity. The three models become comparable to, and yet do not outperform, the second and third baselines. In the attempt to improve the models’ description of the data, we repeat the experiment with yet another set of features. We merge the QQ with the Skip-Gram and GloVe features, by separately concatenating the two types of vectors to our data representations (Skip-Gram+QQ and GloVe+QQ columns). An interesting trend appears both for Hype-Par and Hype-Min: with Skip-Gram+QQ, algorithms perform better than relying on Skip-Gram or QQ alone, and the same happens for GloVe+QQ. The new sets of features produce a consistent improvement over the baselines and over our own features. LR outperforms the other classifiers in the Skip-Gram+QQ combination, reaching .72 mean accuracy and .76 average F1-score (see Table 3).
[2, 2, 1, 2, 1, 2, 2, 2, 1, 2, 2, 1, 2, 1]
['Cross-validation is conducted in two settings.', 'In one (Hype-Par), the non-hyperbolic sentences are paraphrases; in the other (Hype-Min), literal data come from the Minimal Units Corpus.', 'The results are shown in Table 1 and Table 2.', 'While in the Hype-Min setting the performance is not satisfying, estimators achieve above chance accuracy using paraphrases as literal inputs.', 'In fact, in Table 1, the accuracy scores based on quantity-quality vectors (QQ column) suggest that our handcrafted features are actually useful for detecting hyperboles.', 'Therefore, to gain further insight on their informativeness, we conduct a recurrent feature ablation and observe how different subsets affect predictions.', 'Figure 1 illustrates that 5 features can maximize the accuracy of LR.', 'SVM and LDA behave the same with a set of the same size, and they all assign high weights to imageability, unexpectedness and subjectivity.', 'The three models become comparable to, and yet do not outperform, the second and third baselines.', 'In the attempt to improve the models’ description of the data, we repeat the experiment with yet another set of features.', 'We merge the QQ with the Skip-Gram and GloVe features, by separately concatenating the two types of vectors to our data representations (Skip-Gram+QQ and GloVe+QQ columns).', 'An interesting trend appears both for Hype-Par and Hype-Min: with Skip-Gram+QQ, algorithms perform better than relying on Skip-Gram or QQ alone, and the same happens for GloVe+QQ.', 'The new sets of features produce a consistent improvement over the baselines and over our own features.', 'LR outperforms the other classifiers in the Skip-Gram+QQ combination, reaching .72 mean accuracy and .76 average F1-score (see Table 3).']
[None, None, None, None, ['QQ'], None, ['LR'], ['SVM', 'LDA'], ['LR', 'SVM', 'LDA'], None, ['Skip-gram+QQ', 'GloVe+QQ'], ['QQ', 'Skip-gram', 'GloVe', 'Skip-gram+QQ', 'GloVe+QQ'], ['Skip-gram+QQ', 'GloVe+QQ'], ['Skip-gram+QQ', 'LR']]
1
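The accuracies above are means over 10-fold cross-validation. Purely as an illustration of that protocol (not the authors' code), the sketch below runs scikit-learn's cross_val_score with a logistic regression classifier on synthetic data that stands in for the QQ or embedding features, which are not reproduced here.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the hyperbole/literal feature vectors and labels.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10, scoring="accuracy")
print(scores.mean())  # analogue of a single cell such as LR under the QQ features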
D18-1368table_4
Cross-genre Classification Results on the Training Set of MASC+Wiki. We report accuracy (Acc), macro-average F1-score (Macro) and class-wise F1 scores.
2
[['Model', 'CRF (Friedrich et al. 2016)'], ['Model', 'Clause-level Bi-LSTM'], ['Model', 'Paragraph-level Model'], ['Model', 'Paragraph-level Model+CRF']]
1
[['Macro'], ['Acc'], ['F1 STA'], ['F1 EVE'], ['F1 REP'], ['F1 GENI'], ['F1 GENA'], ['F1 QUE'], ['F1 IMP']]
[['66.6', '71.8', '78.2', '77.0', '76.8', '44.8', '27.4', '81.8', '70.8'], ['69.3', '73.3', '79.5', '78.7', '82.8', '47.6', '31.9', '86.9', '77.7'], ['73.2', '77.2', '81.5', '80.1', '83.2', '64.7', '37.2', '88.1', '77.8'], ['73.5', '77.4', '81.5', '80.3', '83.7', '66.5', '37.4', '88.5', '76.7']]
column
['Macro', 'Acc', 'F1 STA', 'F1 EVE', 'F1 REP', 'F1 GENI', 'F1 GENA', 'F1 QUE', 'F1 IMP']
['Paragraph-level Model', 'Paragraph-level Model+CRF']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Macro</th> <th>Acc</th> <th>STA</th> <th>EVE</th> <th>REP</th> <th>GENI</th> <th>GENA</th> <th>QUE</th> <th>IMP</th> </tr> </thead> <tbody> <tr> <td>Model || CRF (Friedrich et al. 2016)</td> <td>66.6</td> <td>71.8</td> <td>78.2</td> <td>77.0</td> <td>76.8</td> <td>44.8</td> <td>27.4</td> <td>81.8</td> <td>70.8</td> </tr> <tr> <td>Model || Clause-level Bi-LSTM</td> <td>69.3</td> <td>73.3</td> <td>79.5</td> <td>78.7</td> <td>82.8</td> <td>47.6</td> <td>31.9</td> <td>86.9</td> <td>77.7</td> </tr> <tr> <td>Model || Paragraph-level Model</td> <td>73.2</td> <td>77.2</td> <td>81.5</td> <td>80.1</td> <td>83.2</td> <td>64.7</td> <td>37.2</td> <td>88.1</td> <td>77.8</td> </tr> <tr> <td>Model || Paragraph-level Model+CRF</td> <td>73.5</td> <td>77.4</td> <td>81.5</td> <td>80.3</td> <td>83.7</td> <td>66.5</td> <td>37.4</td> <td>88.5</td> <td>76.7</td> </tr> </tbody></table>
Table 4
table_4
D18-1368
7
emnlp2018
Table 4 shows cross-genre experimental results of our neural network models on the training set of MASC+Wiki by treating each genre as one cross-validation fold. As we expected, both the macro-average F1-score and class-wise F1 scores are lower compared with the results in Table 2, where in-genre data were used for model training as well. But the performance drop on the paragraph-level models is small, and they still clearly outperform the previous system (Friedrich et al., 2016) and the baseline model by a large margin.
[1, 1, 1]
['Table 4 shows cross-genre experimental results of our neural network models on the training set of MASC+Wiki by treating each genre as one cross-validation fold.', 'As we expected, both the macro-average F1-score and class-wise F1 scores are lower compared with the results in Table 2, where in-genre data were used for model training as well.', 'But the performance drop on the paragraph-level models is small, and they still clearly outperform the previous system (Friedrich et al., 2016) and the baseline model by a large margin.']
[None, ['Paragraph-level Model', 'Macro', 'F1 STA', 'F1 EVE', 'F1 REP', 'F1 GENI', 'F1 GENA', 'F1 QUE', 'F1 IMP'], ['Paragraph-level Model', 'CRF (Friedrich et al. 2016)', 'Clause-level Bi-LSTM']]
1
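A quick arithmetic check of how the Macro column above relates to the class-wise F1 scores (illustrative only): macro-average F1 is the unweighted mean of the per-class F1 values, shown here for the Paragraph-level Model row.

class_f1 = {"STA": 81.5, "EVE": 80.1, "REP": 83.2, "GENI": 64.7,
            "GENA": 37.2, "QUE": 88.1, "IMP": 77.8}  # Paragraph-level Model row
macro_f1 = sum(class_f1.values()) / len(class_f1)
print(round(macro_f1, 1))  # 73.2, matching the reported Macro value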
D18-1369table_4
Reply structure recovery results in Wikipedia conversation dataset.
1
[['Naive Baseline'], ['HDHP'], ['HD-GMHP']]
1
[['Pnode'], ['Rnode'], ['F1node']]
[['0.3223', '0.6501', '0.4310'], ['0.5598', '0.5834', '0.5714'], ['0.6433', '0.5468', '0.5911']]
column
['Pnode', 'Rnode', 'F1node']
['HD-GMHP']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pnode</th> <th>Rnode</th> <th>F1node</th> </tr> </thead> <tbody> <tr> <td>Naive Baseline</td> <td>0.3223</td> <td>0.6501</td> <td>0.4310</td> </tr> <tr> <td>HDHP</td> <td>0.5598</td> <td>0.5834</td> <td>0.5714</td> </tr> <tr> <td>HD-GMHP</td> <td>0.6433</td> <td>0.5468</td> <td>0.5911</td> </tr> </tbody></table>
Table 4
table_4
D18-1369
9
emnlp2018
Table 4 shows the thread reconstruction results of our model and the baseline models in the Wikipedia conversation dataset. Since the HDHP model does not infer the parent event, we reconstruct threads in the form of a chronologically ordered linked list of posts in each local cluster inferred from HDHP. From the F1node score of the results, we establish that our model performs better than the other baseline models.
[1, 2, 1]
['Table 4 shows the thread reconstruction results of our model and the baseline models in the Wikipedia conversation dataset.', 'Since the HDHP model does not infer the parent event, we reconstruct threads in the form of a chronologically ordered linked list of posts in each local cluster inferred from HDHP.', 'From the F1node score of the results, we establish that our model performs better than the other baseline models.']
[None, ['HDHP'], ['HD-GMHP']]
1
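For completeness, a one-line illustrative check that F1node above is the harmonic mean of Pnode and Rnode, using the HD-GMHP row of the table.

p, r = 0.6433, 0.5468  # Pnode and Rnode for HD-GMHP
f1 = 2 * p * r / (p + r)
print(round(f1, 4))  # 0.5911, matching the reported F1node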
D18-1371table_5
timex/event labels generated by stage 1 (bottom). Best performances are in bold.
3
[['temporal relation parsing with gold spans', 'model', 'Baseline-simple'], ['temporal relation parsing with gold spans', 'model', 'Baseline-logistic'], ['temporal relation parsing with gold spans', 'model', 'Neural-basic'], ['temporal relation parsing with gold spans', 'model', 'Neural-enriched'], ['temporal relation parsing with gold spans', 'model', 'Neural-attention'], ['end-to-end systems with automatic spans', 'model', 'Baseline-simple'], ['end-to-end systems with automatic spans', 'model', 'Baseline-logistic'], ['end-to-end systems with automatic spans', 'model', 'Neural-basic'], ['end-to-end systems with automatic spans', 'model', 'Neural-enriched'], ['end-to-end systems with automatic spans', 'model', 'Neural-attention']]
3
[['news', 'unlabeled f', 'dev'], ['news', 'unlabeled f', 'test'], ['news', 'labeled f', 'dev'], ['news', 'labeled f', 'test'], ['grimm', 'unlabeled f', 'dev'], ['grimm', 'unlabeled f', 'test'], ['grimm', 'labeled f', 'dev'], ['grimm', 'labeled f', 'test']]
[['.64', '.68', '.47', '.43', '.78', '.79', '.39', '.39'], ['.81', '.79', '.63', '.54', '.74', '.74', '.60', '.63'], ['.78', '.75', '.67', '.57', '.72', '.74', '.60', '.63'], ['.80', '.78', '.67', '.59', '.76', '.77', '.63', '.65'], ['.83', '.81', '.76', '.70', '.79', '.79', '.66', '.68'], ['.39', '.40', '.26', '.25', '.44', '.47', '.27', '.25'], ['.36', '.34', '.24', '.22', '.43', '.49', '.33', '.37'], ['.37', '.36', '.21', '.23', '.42', '.45', '.33', '.35'], ['.51', '.52', '.32', '.35', '.44', '.49', '.33', '.37'], ['.54', '.54', '.36', '.39', '.44', '.49', '.35', '.39']]
column
['f1', 'f1', 'f1', 'f1', 'f1', 'f1', 'f1', 'f1']
['Neural-basic', 'Neural-enriched', 'Neural-attention']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>news || unlabeled f || dev</th> <th>news || unlabeled f || test</th> <th>news || labeled f || dev</th> <th>news || labeled f || test</th> <th>grimm || unlabeled f || dev</th> <th>grimm || unlabeled f || test</th> <th>grimm || labeled f || dev</th> <th>grimm || labeled f || test</th> </tr> </thead> <tbody> <tr> <td>temporal relation parsing with gold spans || model || Baseline-simple</td> <td>.64</td> <td>.68</td> <td>.47</td> <td>.43</td> <td>.78</td> <td>.79</td> <td>.39</td> <td>.39</td> </tr> <tr> <td>temporal relation parsing with gold spans || model || Baseline-logistic</td> <td>.81</td> <td>.79</td> <td>.63</td> <td>.54</td> <td>.74</td> <td>.74</td> <td>.60</td> <td>.63</td> </tr> <tr> <td>temporal relation parsing with gold spans || model || Neural-basic</td> <td>.78</td> <td>.75</td> <td>.67</td> <td>.57</td> <td>.72</td> <td>.74</td> <td>.60</td> <td>.63</td> </tr> <tr> <td>temporal relation parsing with gold spans || model || Neural-enriched</td> <td>.80</td> <td>.78</td> <td>.67</td> <td>.59</td> <td>.76</td> <td>.77</td> <td>.63</td> <td>.65</td> </tr> <tr> <td>temporal relation parsing with gold spans || model || Neural-attention</td> <td>.83</td> <td>.81</td> <td>.76</td> <td>.70</td> <td>.79</td> <td>.79</td> <td>.66</td> <td>.68</td> </tr> <tr> <td>end-to-end systems with automatic spans || model || Baseline-simple</td> <td>.39</td> <td>.40</td> <td>.26</td> <td>.25</td> <td>.44</td> <td>.47</td> <td>.27</td> <td>.25</td> </tr> <tr> <td>end-to-end systems with automatic spans || model || Baseline-logistic</td> <td>.36</td> <td>.34</td> <td>.24</td> <td>.22</td> <td>.43</td> <td>.49</td> <td>.33</td> <td>.37</td> </tr> <tr> <td>end-to-end systems with automatic spans || model || Neural-basic</td> <td>.37</td> <td>.36</td> <td>.21</td> <td>.23</td> <td>.42</td> <td>.45</td> <td>.33</td> <td>.35</td> </tr> <tr> <td>end-to-end systems with automatic spans || model || Neural-enriched</td> <td>.51</td> <td>.52</td> <td>.32</td> <td>.35</td> <td>.44</td> <td>.49</td> <td>.33</td> <td>.37</td> </tr> <tr> <td>end-to-end systems with automatic spans || model || Neural-attention</td> <td>.54</td> <td>.54</td> <td>.36</td> <td>.39</td> <td>.44</td> <td>.49</td> <td>.35</td> <td>.39</td> </tr> </tbody></table>
Table 5
table_5
D18-1371
7
emnlp2018
Bottom rows in Table 5 report the end-to-end performance of our five systems on both domains. On both labeled and unlabeled parsing, our basic neural model with only lexical input performs comparably to the logistic regression model. And our enriched neural model with only three simple linguistic features outperforms both the logistic regression model and the basic neural model on news, improving the performance by more than 10%. However, our models only slightly improve the unlabeled parsing over the simple baseline on narrative Grimm data. This is probably due to (1) it is a very strong baseline to link every node to its immediate previous node, since in a narrative discourse linear temporal sequences are very common; and (2) most events breaking the temporal linearity in a narrative discourse are implicit stative descriptions which are harder to model with only lexical and distance features. Finally, the attention mechanism improves temporal relation labeling on both domains. 5.3.2 Temporal Relation Evaluation. To facilitate comparison with previous work where gold events are used as parser input, we report our results on temporal dependency parsing with gold time expression and event spans in Table 5 (top rows). These results are in the same ballpark as what is reported in previous work on temporal relation extraction. The best performances in Kolomiyets et al. (2012) are 0.84 and 0.65 f-scores for unlabeled and labeled parses, achieved by temporal structure parsers trained and evaluated on narrative children’s stories. Our best performing model (Neural-attention) reports 0.81 and 0.70 f-scores on unlabeled and labeled parses respectively, showing similar performance. It is important to note, however, that these two works use different data sets, and are not directly comparable. Finally, parsing accuracy with gold time/event spans as input is substantially higher than that with predicted spans, showing the effects of error propagation.
[1, 1, 1, 1, 2, 1, 2, 1, 1, 2, 1, 2, 2]
['Bottom rows in Table 5 report the end-to-end performance of our five systems on both domains.', 'On both labeled and unlabeled parsing, our basic neural model with only lexical input performs comparably to the logistic regression model.', 'And our enriched neural model with only three simple linguistic features outperforms both the logistic regression model and the basic neural model on news, improving the performance by more than 10%.', 'However, our models only slightly improve the unlabeled parsing over the simple baseline on narrative Grimm data.', 'This is probably due to (1) it is a very strong baseline to link every node to its immediate previous node, since in a narrative discourse linear temporal sequences are very common; and (2) most events breaking the temporal linearity in a narrative discourse are implicit stative descriptions which are harder to model with only lexical and distance features.', 'Finally, the attention mechanism improves temporal relation labeling on both domains.', '5.3.2 Temporal Relation Evaluation.', 'To facilitate comparison with previous work where gold events are used as parser input, we report our results on temporal dependency parsing with gold time expression and event spans in Table 5 (top rows).', 'These results are in the same ballpark as what is reported in previous work on temporal relation extraction.', 'The best performances in Kolomiyets et al. (2012) are 0.84 and 0.65 f-scores for unlabeled and labeled parses, achieved by temporal structure parsers trained and evaluated on narrative children’s stories.', 'Our best performing model (Neural-attention) reports 0.81 and 0.70 f-scores on unlabeled and labeled parses respectively, showing similar performance.', 'It is important to note, however, that these two works use different data sets, and are not directly comparable.', 'Finally, parsing accuracy with gold time/event spans as input is substantially higher than that with predicted spans, showing the effects of error propagation.']
[None, ['Neural-basic', 'Baseline-logistic', 'unlabeled f', 'labeled f', 'end-to-end systems with automatic spans'], ['Baseline-logistic', 'Neural-basic', 'Neural-enriched', 'news', 'end-to-end systems with automatic spans'], ['grimm', 'unlabeled f', 'Baseline-simple', 'Neural-basic', 'Neural-enriched', 'Neural-attention', 'end-to-end systems with automatic spans'], None, ['Neural-attention', 'news', 'grimm', 'end-to-end systems with automatic spans'], None, None, ['temporal relation parsing with gold spans'], ['temporal relation parsing with gold spans'], ['Neural-attention', 'news', 'unlabeled f', 'labeled f', 'test', 'temporal relation parsing with gold spans'], None, None]
1
D18-1373table_2
Comparison on datasets with the baselines. ‘+F’: tested with all modalities (U,O,M,V), ‘-X’: dropping one modality, ‘-U’ and ‘-O’: user and item cold-start scenarios.
2
[['Dataset Models', 'Offset'], ['Dataset Models', 'NMF'], ['Dataset Models', 'SVD++'], ['Dataset Models', 'URP'], ['Dataset Models', 'RMR'], ['Dataset Models', 'HFT'], ['Dataset Models', 'DeepCoNN'], ['Dataset Models', 'NRT'], ['Dataset Models', 'LRMM(+F)'], ['Dataset Models', 'LRMM(-U)'], ['Dataset Models', 'LRMM(-O)'], ['Dataset Models', 'LRMM(-M)'], ['Dataset Models', 'LRMM(-V)']]
2
[['S&O', 'RMSE'], ['S&O', 'MAE'], ['H&P', 'RMSE'], ['H&P', 'MAE'], ['Movie', 'RMSE'], ['Movie', 'MAE'], ['Electronics', 'RMSE'], ['Electronics', 'MAE']]
[['0.979', '0.769', '1.247', '0.882', '1.389', '0.933', '1.401', '0.928'], ['0.948', '0.671', '1.059', '0.761', '1.135', '0.794', '1.297', '0.904'], ['0.922', '0.669', '1.026', '0.760', '1.049', '0.745', '1.194', '0.847'], ['-', '-', '-', '-', '1.006', '0.764', '1.126', '0.860'], ['-', '-', '-', '-', '1.005', '0.741', '1.123', '0.822'], ['0.924', '0.659', '1.040', '0.757', '0.997', '0.735', '1.110', '0.807'], ['0.943', '0.667', '1.045', '0.746', '1.014', '0.743', '1.109', '0.797'], ['-', '-', '-', '-', '0.985', '0.702', '1.107', '0.806'], ['0.886', '0.624', '0.988', '0.708', '0.983', '0.716', '1.052', '0.766'], ['0.936', '0.719', '1.058', '0.782', '1.086', '0.821', '1.138', '0.900'], ['0.931', '0.680', '1.039', '0.805', '1.074', '0.855', '1.101', '0.864'], ['0.887', '0.625', '0.989', '0.710', '0.991', '0.725', '1.053', '0.766'], ['0.886', '0.624', '0.989', '0.708', '0.991', '0.725', '1.052', '0.766']]
column
['RMSE', 'MAE', 'RMSE', 'MAE', 'RMSE', 'MAE', 'RMSE', 'MAE']
['LRMM(+F)', 'LRMM(-U)', 'LRMM(-O)', 'LRMM(-M)', 'LRMM(-V)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>S&amp;O || RMSE</th> <th>S&amp;O || MAE</th> <th>H&amp;P || RMSE</th> <th>H&amp;P || MAE</th> <th>Movie || RMSE</th> <th>Movie || MAE</th> <th>Electronics || RMSE</th> <th>Electronics || MAE</th> </tr> </thead> <tbody> <tr> <td>Dataset Models || Offset</td> <td>0.979</td> <td>0.769</td> <td>1.247</td> <td>0.882</td> <td>1.389</td> <td>0.933</td> <td>1.401</td> <td>0.928</td> </tr> <tr> <td>Dataset Models || NMF</td> <td>0.948</td> <td>0.671</td> <td>1.059</td> <td>0.761</td> <td>1.135</td> <td>0.794</td> <td>1.297</td> <td>0.904</td> </tr> <tr> <td>Dataset Models || SVD++</td> <td>0.922</td> <td>0.669</td> <td>1.026</td> <td>0.760</td> <td>1.049</td> <td>0.745</td> <td>1.194</td> <td>0.847</td> </tr> <tr> <td>Dataset Models || URP</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>1.006</td> <td>0.764</td> <td>1.126</td> <td>0.860</td> </tr> <tr> <td>Dataset Models || RMR</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>1.005</td> <td>0.741</td> <td>1.123</td> <td>0.822</td> </tr> <tr> <td>Dataset Models || HFT</td> <td>0.924</td> <td>0.659</td> <td>1.040</td> <td>0.757</td> <td>0.997</td> <td>0.735</td> <td>1.110</td> <td>0.807</td> </tr> <tr> <td>Dataset Models || DeepCoNN</td> <td>0.943</td> <td>0.667</td> <td>1.045</td> <td>0.746</td> <td>1.014</td> <td>0.743</td> <td>1.109</td> <td>0.797</td> </tr> <tr> <td>Dataset Models || NRT</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>0.985</td> <td>0.702</td> <td>1.107</td> <td>0.806</td> </tr> <tr> <td>Dataset Models || LRMM(+F)</td> <td>0.886</td> <td>0.624</td> <td>0.988</td> <td>0.708</td> <td>0.983</td> <td>0.716</td> <td>1.052</td> <td>0.766</td> </tr> <tr> <td>Dataset Models || LRMM(-U)</td> <td>0.936</td> <td>0.719</td> <td>1.058</td> <td>0.782</td> <td>1.086</td> <td>0.821</td> <td>1.138</td> <td>0.900</td> </tr> <tr> <td>Dataset Models || LRMM(-O)</td> <td>0.931</td> <td>0.680</td> <td>1.039</td> <td>0.805</td> <td>1.074</td> <td>0.855</td> <td>1.101</td> <td>0.864</td> </tr> <tr> <td>Dataset Models || LRMM(-M)</td> <td>0.887</td> <td>0.625</td> <td>0.989</td> <td>0.710</td> <td>0.991</td> <td>0.725</td> <td>1.053</td> <td>0.766</td> </tr> <tr> <td>Dataset Models || LRMM(-V)</td> <td>0.886</td> <td>0.624</td> <td>0.989</td> <td>0.708</td> <td>0.991</td> <td>0.725</td> <td>1.052</td> <td>0.766</td> </tr> </tbody></table>
Table 2
table_2
D18-1373
6
emnlp2018
3.4 Compare with State-of-the-art. First, we compare LRMM with state-of-the-art methods listed in Sec. 3.2. In this setting, LRMM is trained with all data modalities and tested with different missing modality regimes. Table 2 lists the results on the four datasets. By leveraging multimodal correlations, LRMM significantly outperforms MF-based models (i.e. NMF, SVD++) and topic-based methods (i.e., URP, CTR, RMR, and HFT). LRMM also outperforms recent deep learning models (i.e., NRT, DeepCoNN) with respect to almost all metrics. LRMM is the only method with a robust performance for the cold-start recommendation problem where user review or item review texts are removed. While the cold-start recommendation is more challenging, LRMM(-U) and LRMM(-O) are still able to achieve a similar performance to the baselines in the standard recommendation setting. For example, RMSE 1.101 (LRMM(-O)) to 1.107 (NRT) on Electronics, MAE 0.680 (LRMM(-O)) to 0.667 (DeepCoNN) on S&O. We conjecture that the cross-modality dependencies (Srivastava and Salakhutdinov, 2012) make LRMM more robust when modalities are missing. Table 5 lists some randomly selected rating predictions. Similar to Table 2, missing user (-U) and item (-O) preference significantly deteriorates the performance.
[2, 2, 2, 1, 1, 1, 2, 1, 1, 2, 1, 2]
['3.4 Compare with State-of-the-art.', 'First, we compare LRMM with state-of-the-art methods listed in Sec. 3.2.', 'In this setting, LRMM is trained with all data modalities and tested with different missing modality regimes.', 'Table 2 lists the results on the four datasets.', 'By leveraging multimodal correlations, LRMM significantly outperforms MF-based models (i.e. NMF, SVD++) and topic-based methods (i.e., URP, CTR, RMR, and HFT).', 'LRMM also outperforms recent deep learning models (i.e., NRT, DeepCoNN) with respect to almost all metrics.', 'LRMM is the only method with a robust performance for the cold-start recommendation problem where user review or item review texts are removed.', 'While the cold-start recommendation is more challenging, LRMM(-U) and LRMM(-O) are still able to achieve a similar performance to the baselines in the standard recommendation setting.', 'For example, RMSE 1.101 (LRMM(-O)) to 1.107 (NRT) on Electronics, MAE 0.680 (LRMM(-O)) to 0.667 (DeepCoNN) on S&O.', 'We conjecture that the cross-modality dependencies (Srivastava and Salakhutdinov, 2012) make LRMM more robust when modalities are missing.', 'Table 5 lists some randomly selected rating predictions.', 'Similar to Table 2, missing user (-U) and item (-O) preference significantly deteriorates the performance.']
[None, None, None, None, ['LRMM(+F)', 'NMF', 'SVD++', 'URP', 'RMR', 'HFT'], ['LRMM(+F)', 'HFT', 'DeepCoNN'], ['LRMM(+F)'], ['LRMM(-U)', 'LRMM(-O)'], ['LRMM(-O)', 'NRT', 'DeepCoNN', 'Electronics', 'S&O', 'RMSE', 'MAE'], None, None, ['LRMM(-U)', 'LRMM(-O)']]
1
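As a reminder of the two error metrics reported above, here is an illustrative sketch of RMSE and MAE; the toy ratings and predictions are invented and are not taken from any of the compared recommenders.

import math

ratings     = [5, 3, 4, 1, 2]
predictions = [4.5, 3.2, 3.8, 1.9, 2.1]

mae = sum(abs(r - p) for r, p in zip(ratings, predictions)) / len(ratings)
rmse = math.sqrt(sum((r - p) ** 2 for r, p in zip(ratings, predictions)) / len(ratings))
print(round(mae, 2), round(rmse, 2))  # 0.38 0.48 for this toy example; lower is better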
D18-1373table_3
The performance of training with missing modality imputation.
2
[['Dataset Models', 'LRMM(+F)'], ['Dataset Models', 'LRMM(-U)'], ['Dataset Models', 'LRMM(-O)'], ['Dataset Models', 'LRMM(-M)'], ['Dataset Models', 'LRMM(-V)']]
2
[['S&O', 'RMSE'], ['S&O', 'MAE'], ['H&P', 'RMSE'], ['H&P', 'MAE']]
[['0.997', '0.790', '1.131', '0.912'], ['0.998', '0.795', '1.132', '0.914'], ['0.999', '0.796', '1.133', '0.917'], ['0.998', '0.797', '1.133', '0.913'], ['0.997', '0.791', '1.132', '0.913']]
column
['RMSE', 'MAE', 'RMSE', 'MAE']
['LRMM(+F)', 'LRMM(-U)', 'LRMM(-O)', 'LRMM(-M)', 'LRMM(-V)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>S&amp;O || RMSE</th> <th>S&amp;O || MAE</th> <th>H&amp;P || RMSE</th> <th>H&amp;P || MAE</th> </tr> </thead> <tbody> <tr> <td>Dataset Models || LRMM(+F)</td> <td>0.997</td> <td>0.790</td> <td>1.131</td> <td>0.912</td> </tr> <tr> <td>Dataset Models || LRMM(-U)</td> <td>0.998</td> <td>0.795</td> <td>1.132</td> <td>0.914</td> </tr> <tr> <td>Dataset Models || LRMM(-O)</td> <td>0.999</td> <td>0.796</td> <td>1.133</td> <td>0.917</td> </tr> <tr> <td>Dataset Models || LRMM(-M)</td> <td>0.998</td> <td>0.797</td> <td>1.133</td> <td>0.913</td> </tr> <tr> <td>Dataset Models || LRMM(-V)</td> <td>0.997</td> <td>0.791</td> <td>1.132</td> <td>0.913</td> </tr> </tbody></table>
Table 3
table_3
D18-1373
7
emnlp2018
3.6 Missing Modality Imputation. The proposed m-drop and m-auto methods allow LRMM to be more robust to missing data modalities. Table 3 lists the results of training LRMM with missing data modalities for the modality dropout ratio pm = 0.5 on the S&O and H&P datasets, respectively. Both RMSE and MAE of LRMM deteriorate but are still comparable to the MF-based approaches NMF and SVD++.
[2, 2, 1, 1]
['3.6 Missing Modality Imputation.', 'The proposed m-drop and m-auto methods allow LRMM to be more robust to missing data modalities.', 'Table 3 lists the results of training LRMM with missing data modalities for the modality dropout ratio pm = 0.5 on the S&O and H&P datasets, respectively.', 'Both RMSE and MAE of LRMM deteriorate but are still comparable to the MF-based approaches NMF and SVD++.']
[None, None, ['LRMM(+F)', 'S&O', 'H&P'], ['LRMM(+F)', 'RMSE', 'MAE']]
1
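The description mentions training with an m-drop mechanism at a modality dropout ratio p_m = 0.5. The paper's exact formulation is not given in this record, so the snippet below is only a generic sketch of modality-level dropout (zeroing one randomly chosen modality's features with probability p_m); the modality names and feature shapes are assumptions.

```python
import random

def modality_dropout(features, p_m=0.5, rng=random):
    """Generic modality-level dropout: with probability p_m, replace one
    randomly chosen modality's feature vector with zeros.
    `features` maps a modality name to a list of floats."""
    out = dict(features)
    if rng.random() < p_m:
        victim = rng.choice(sorted(out))
        out[victim] = [0.0] * len(out[victim])
    return out

# Illustrative feature vectors only, not the paper's actual representations.
sample = {"rating": [0.8], "user_review": [0.1, 0.4], "item_review": [0.3, 0.2]}
print(modality_dropout(sample, p_m=0.5))
```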
D18-1374table_1
Experimental results for NCG-IDF@k and prec@k scores for different methods.
2
[['p(F)', '-'], ['N(C F)', 'base'], ['N(C F)', 'SVD'], ['P(F|C)', 'base'], ['P(F|C)', 'SVD'], ['PPMI(C F)', 'base'], ['PPMI(C F)', 'SVD'], ['Youtube', 'base'], ['Youtube', 'IDF'], ['XML-CNN', 'base'], ['XML-CNN', 'IDF'], ['FastXML', '-'], ['PFastreXML', '-']]
1
[['NCG-IDF @1'], ['NCG-IDF @3'], ['NCG-IDF @5'], ['NCG-IDF @7'], ['prec @1'], ['prec @3'], ['prec @5'], ['prec @7'], ['model size']]
[['14.86', '14.50', '14.61', '14.56', '36.76', '32.69', '31.12', '30.23', '28B'], ['20.12', '19.95', '19.43', '18.91', '46.92', '42.89', '40.29', '38.15', '4.7GB'], ['19.99', '19.79', '19.20', '18.72', '46.68', '42.60', '39.88', '37.81', '0.1GB'], ['22.92', '22.77', '22.17', '21.86', '51.50', '47.81', '44.97', '43.19', '4.7GB'], ['21.56', '21.00', '20.65', '20.31', '48.79', '44.44', '42.22', '40.54', '0.1GB'], ['22.92', '20.86', '19.45', '18.42', '23.95', '21.17', '19.33', '18.01', '4.7GB'], ['18.22', '18.22', '18.03', '17.89', '26.49', '25.36', '24.31', '23.49', '0.1GB'], ['31.30', '31.12', '31.07', '31.02', '54.85', '52.62', '51.09', '49.79', '1.8GB'], ['32.95', '32.24', '32.00', '31.62', '50.95', '49.07', '47.63', '46.57', '1.8GB'], ['33.13', '32.20', '31.93', '31.75', '58.72', '53.60', '52.05', '50.35', '1.8GB'], ['33.89', '33.42', '33.02', '32.76', '51.99', '49.54', '48.40', '47.24', '1.8GB'], ['35.31', '34.39', '33.74', '33.15', '69.98', '63.78', '60.16', '57.37', '150GB'], ['36.19', '34.94', '34.12', '33.31', '55.09', '51.51', '49.50', '47.85', '150GB']]
column
['NCG-IDF @1', 'NCG-IDF @3', 'NCG-IDF @5', 'NCG-IDF @7', 'prec @1', 'prec @3', 'prec @5', 'prec @7', 'model size']
['FastXML', 'PFastreXML']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NCG-IDF @1</th> <th>NCG-IDF @3</th> <th>NCG-IDF @5</th> <th>NCG-IDF @7</th> <th>prec @1</th> <th>prec @3</th> <th>prec @5</th> <th>prec @7</th> <th>model size</th> </tr> </thead> <tbody> <tr> <td>p(F) || -</td> <td>14.86</td> <td>14.50</td> <td>14.61</td> <td>14.56</td> <td>36.76</td> <td>32.69</td> <td>31.12</td> <td>30.23</td> <td>28B</td> </tr> <tr> <td>N(C F) || base</td> <td>20.12</td> <td>19.95</td> <td>19.43</td> <td>18.91</td> <td>46.92</td> <td>42.89</td> <td>40.29</td> <td>38.15</td> <td>4.7GB</td> </tr> <tr> <td>N(C F) || SVD</td> <td>19.99</td> <td>19.79</td> <td>19.20</td> <td>18.72</td> <td>46.68</td> <td>42.60</td> <td>39.88</td> <td>37.81</td> <td>0.1GB</td> </tr> <tr> <td>P(F|C) || base</td> <td>22.92</td> <td>22.77</td> <td>22.17</td> <td>21.86</td> <td>51.50</td> <td>47.81</td> <td>44.97</td> <td>43.19</td> <td>4.7GB</td> </tr> <tr> <td>P(F|C) || SVD</td> <td>21.56</td> <td>21.00</td> <td>20.65</td> <td>20.31</td> <td>48.79</td> <td>44.44</td> <td>42.22</td> <td>40.54</td> <td>0.1GB</td> </tr> <tr> <td>PPMI(C F) || base</td> <td>22.92</td> <td>20.86</td> <td>19.45</td> <td>18.42</td> <td>23.95</td> <td>21.17</td> <td>19.33</td> <td>18.01</td> <td>4.7GB</td> </tr> <tr> <td>PPMI(C F) || SVD</td> <td>18.22</td> <td>18.22</td> <td>18.03</td> <td>17.89</td> <td>26.49</td> <td>25.36</td> <td>24.31</td> <td>23.49</td> <td>0.1GB</td> </tr> <tr> <td>Youtube || base</td> <td>31.30</td> <td>31.12</td> <td>31.07</td> <td>31.02</td> <td>54.85</td> <td>52.62</td> <td>51.09</td> <td>49.79</td> <td>1.8GB</td> </tr> <tr> <td>Youtube || IDF</td> <td>32.95</td> <td>32.24</td> <td>32.00</td> <td>31.62</td> <td>50.95</td> <td>49.07</td> <td>47.63</td> <td>46.57</td> <td>1.8GB</td> </tr> <tr> <td>XML-CNN || base</td> <td>33.13</td> <td>32.20</td> <td>31.93</td> <td>31.75</td> <td>58.72</td> <td>53.60</td> <td>52.05</td> <td>50.35</td> <td>1.8GB</td> </tr> <tr> <td>XML-CNN || IDF</td> <td>33.89</td> <td>33.42</td> <td>33.02</td> <td>32.76</td> <td>51.99</td> <td>49.54</td> <td>48.40</td> <td>47.24</td> <td>1.8GB</td> </tr> <tr> <td>FastXML || -</td> <td>35.31</td> <td>34.39</td> <td>33.74</td> <td>33.15</td> <td>69.98</td> <td>63.78</td> <td>60.16</td> <td>57.37</td> <td>150GB</td> </tr> <tr> <td>PFastreXML || -</td> <td>36.19</td> <td>34.94</td> <td>34.12</td> <td>33.31</td> <td>55.09</td> <td>51.51</td> <td>49.50</td> <td>47.85</td> <td>150GB</td> </tr> </tbody></table>
Table 1
table_1
D18-1374
7
emnlp2018
6 Experiments. In Table 1 we report results from the experiments on the 10M web documents dataset for prec and NDCG-IDF metrics for k = 1, 3, 5, 7, limiting k to small values as is common in the recommendation problems from large sets of items (Jain et al., 2016). The p(F ) baseline always predicts entities according to their frequency over the training set. It can be viewed as maximum likelihood estimate (MLE) for the model which is only composed of a bias vector (i.e., input entities are ignored). Notice the relatively high performance when the most popular entities are taken. For example, in 36.76% of cases entity Research (the most popular future entity from the corpus) is in the future of the document (as can be also seen from Figure 1, where the entity Research corresponds to the leftmost point of the graph). This constitutes a high value, as the vocabulary consists of 100K entities. Among the linear models, P(F|C) yields the highest scores, significantly outperforming the baselines. PPMI(F|C) model yields relatively high NCG-IDF scores (although in most cases lower than P(F|C)), and very low precision scores. Notice that the SVD methods are consistently worse than the linear models. This shows that no additional generalization is gained when lowering the number of parameters of the linear models. When experimenting with higher ranks for SVD decomposition we found the performance increases, but does not improve over the linear models. NNs improve over linear models according to both NCG-IDF and prec scores. This is especially apparent for NCG-IDF, where the relative improvement is very significant. XML-CNN is in all cases better than the Youtube model, which shows how utilizing more linguistic structure than simply bag of entities is helpful in the NN framework. Both Youtube and XML-CNN models with a modified loss function improve over the basic NN models in terms of the NCG-IDF metrics, showing that a simple adjustment of a loss function in the NN framework can lead to more rare entities being recommended. This comes at the cost of lowering prec@k scores, which however correlates with user judgments to a lesser extent, as we showed in Section 3.2. The Random Forest models turn out to be the most competitive. Notice that no linguistic structure is captured in FastXML models, only the bags of entities. This is in contrast with XML-CNN approach which looks at local contexts of feature entities. FastXML performs particularly well on the precision scores, which however is not necessarily useful, as demonstrated in the examples discussed later. Last, we inspect the sizes of the different models reported in the rightmost column of Table 1. Model size is an important factor to consider in practical applications, e.g. when deploying a system on the device. PFastreXML model takes 150GB, by far the most of all methods, resulting in its capability in recommending tail entities. The linear models take 4.7GB related to the fact that in the full 100K × 100K co-occurrence matrix approximately 11% of entries are non-zero. Applying SVD matrix decomposition helps reduce this size significantly. The NN models take around 2GB, significantly less than the random forests.
[2, 1, 2, 2, 2, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 2, 2, 1, 1, 2, 1, 1, 2, 1]
['6 Experiments.', 'In Table 1 we report results from the experiments on the 10M web documents dataset for prec and NDCG-IDF metrics for k = 1, 3, 5, 7, limiting k to small values as is common in the recommendation problems from large sets of items (Jain et al., 2016).', 'The p(F ) baseline always predicts entities according to their frequency over the training set.', 'It can be viewed as maximum likelihood estimate (MLE) for the model which is only composed of a bias vector (i.e., input entities are ignored).', 'Notice the relatively high performance when the most popular entities are taken.', 'For example, in 36.76% of cases entity Research (the most popular future entity from the corpus) is in the future of the document (as can be also seen from Figure 1, where the entity Research corresponds to the leftmost point of the graph).', 'This constitutes a high value, as the vocabulary consists of 100K entities.', 'Among the linear models, P(F|C) yields the highest scores, significantly outperforming the baselines.', 'PPMI(F|C) model yields relatively high NCG-IDF scores (although in most cases lower than P(F|C)), and very low precision scores.', 'Notice that the SVD methods are consistently worse than the linear models.', 'This shows that no additional generalization is gained when lowering the number of parameters of the linear models.', 'When experimenting with higher ranks for SVD decomposition we found the performance increases, but does not improve over the linear models.', 'NNs improve over linear models according to both NCG-IDF and prec scores.', 'This is especially apparent for NCG-IDF, where the relative improvement is very significant.', 'XML-CNN is in all cases better than the Youtube model, which shows how utilizing more linguistic structure than simply bag of entities is helpful in the NN framework.', 'Both Youtube and XML-CNN models with a modified loss function improve over the basic NN models in terms of the NCG-IDF metrics, showing that a simple adjustment of a loss function in the NN framework can lead to more rare entities being recommended.', 'This comes at the cost of lowering prec@k scores, which however correlates with user judgments to a lesser extent, as we showed in Section 3.2.', 'The Random Forest models turn out to be the most competitive.', 'Notice that no linguistic structure is captured in FastXML models, only the bags of entities.', 'This is in contrast with XML-CNN approach which looks at local contexts of feature entities.', 'FastXML performs particularly well on the precision scores, which however is not necessarily useful, as demonstrated in the examples discussed later.', 'Last, we inspect the sizes of the different models reported in the rightmost column of Table 1.', 'Model size is an important factor to consider in practical applications, e.g. when deploying a system on the device.', 'PFastreXML model takes 150GB, by far the most of all methods, resulting in its capability in recommending tail entities.', 'The linear models take 4.7GB related to the fact that in the full 100K × 100K co-occurrence matrix approximately 11% of entries are non-zero.', 'Applying SVD matrix decomposition helps reduce this size significantly.', 'The NN models take around 2GB, significantly less than the random forests.']
[None, ['NCG-IDF @1', 'NCG-IDF @3', 'NCG-IDF @5', 'NCG-IDF @7', 'prec @1', 'prec @3', 'prec @5', 'prec @7'], ['p(F)'], ['p(F)'], ['p(F)'], ['p(F)', 'prec @1'], None, ['P(F|C)'], ['PPMI(C F)', 'NCG-IDF @1', 'NCG-IDF @3', 'NCG-IDF @5', 'NCG-IDF @7'], ['SVD'], None, ['SVD'], ['Youtube', 'XML-CNN', 'base', 'NCG-IDF @1', 'NCG-IDF @3', 'NCG-IDF @5', 'NCG-IDF @7', 'prec @1', 'prec @3', 'prec @5', 'prec @7'], ['Youtube', 'XML-CNN', 'NCG-IDF @1', 'NCG-IDF @3', 'NCG-IDF @5', 'NCG-IDF @7'], ['Youtube', 'XML-CNN'], ['Youtube', 'XML-CNN', 'base', 'IDF', 'NCG-IDF @1', 'NCG-IDF @3', 'NCG-IDF @5', 'NCG-IDF @7'], ['prec @1', 'prec @3', 'prec @5', 'prec @7', 'IDF'], ['FastXML', 'PFastreXML'], ['FastXML'], ['FastXML', 'XML-CNN'], ['FastXML', 'prec @1', 'prec @3', 'prec @5', 'prec @7'], ['model size'], None, ['PFastreXML', 'model size'], ['N(C F)', 'P(F|C)', 'PPMI(C F)', 'base', 'model size'], ['SVD'], ['Youtube', 'XML-CNN', 'FastXML', 'PFastreXML', 'model size']]
1
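The table reports prec@k and NCG-IDF@k for entity recommendation. The record does not spell out the paper's exact NCG-IDF formula, so the sketch below only illustrates the general idea: precision at k, plus a cumulative gain at k in which each relevant hit is weighted by the entity's IDF and normalized by the best achievable score. Treat the weighting and normalization details as assumptions; the entities and IDF values are made up.

```python
def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k recommended entities that are relevant."""
    return sum(1 for e in ranked[:k] if e in relevant) / k

def ncg_idf_at_k(ranked, relevant, idf, k):
    """IDF-weighted cumulative gain at k, normalized by an ideal ranking.
    This is an illustrative definition, not necessarily the paper's."""
    gain = sum(idf.get(e, 0.0) for e in ranked[:k] if e in relevant)
    ideal = sum(sorted((idf.get(e, 0.0) for e in relevant), reverse=True)[:k])
    return gain / ideal if ideal > 0 else 0.0

idf = {"Research": 0.1, "Graphene": 2.3, "CRISPR": 2.0}   # hypothetical IDF weights
ranked = ["Research", "Graphene", "Football"]
relevant = {"Graphene", "CRISPR"}
print(precision_at_k(ranked, relevant, 3), round(ncg_idf_at_k(ranked, relevant, idf, 3), 3))
```

Weighting hits by IDF rewards recommending rarer (tail) entities, which fits the description's observation that frequency-heavy predictions can score well on plain precision while scoring lower on NCG-IDF.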
D18-1378table_8
SEU sentiment classification accuracy.
1
[['tSEU'], ['tSEU(D)']]
2
[['Accuracy', 'LA'], ['Accuracy', 'LAD'], ['Accuracy', 'LAW'], ['Accuracy', 'L']]
[['0.702', '0.723', '0.733', '0.750'], ['0.692', '0.715', '0.716', '0.735']]
column
['Accuracy', 'Accuracy', 'Accuracy', 'Accuracy']
['LA', 'LAD', 'LAW']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy || LA</th> <th>Accuracy || LAD</th> <th>Accuracy || LAW</th> <th>Accuracy || L</th> </tr> </thead> <tbody> <tr> <td>tSEU</td> <td>0.702</td> <td>0.723</td> <td>0.733</td> <td>0.750</td> </tr> <tr> <td>tSEU(D)</td> <td>0.692</td> <td>0.715</td> <td>0.716</td> <td>0.735</td> </tr> </tbody></table>
Table 8
table_8
D18-1378
8
emnlp2018
To understand the contributions of incorporating authors, discourse relations, and word embeddings, we evaluate variants of Limbic for SEU-level sentiment classification on two datasets: tSEU and tSEU(D). We create tSEU by randomly selecting 200 hotel reviews by seven authors. We manually annotate the sentiments of each SEU, obtaining 2,692 SEUs. We create tSEU(D) by selecting reviews in tSEU containing at least one Comparison or Expansion. We define three variants of Limbic (L): LA with just authors, no discourse relations or word embeddings; LAD with authors and discourse relations but no word embeddings; LAW with authors and word embeddings but no discourse relations. Table 8 compares Limbic with LA, LAD, and LAW. We observe that for both datasets, incorporating discourse relations improves accuracy. By incorporating word embeddings, LAW yields better accuracy than LAD, showing that word embeddings add more value to Limbic than discourse relations do.
[2, 2, 2, 2, 2, 1, 1, 1]
['To understand the contributions of incorporating authors, discourse relations, and word embeddings, we evaluate variants of Limbic for SEU-level sentiment classification on two datasets: tSEU and tSEU(D).', 'We create tSEU by randomly selecting 200 hotel reviews by seven authors.', 'We manually annotate the sentiments of each SEU, obtaining 2,692 SEUs.', 'We create tSEU(D) by selecting reviews in tSEU containing at least one Comparison or Expansion.', 'We define three variants of Limbic (L): LA with just authors, no discourse relations or word embeddings; LAD with authors and discourse relations but no word embeddings; LAW with authors and word embeddings but no discourse relations.', 'Table 8 compares Limbic with LA, LAD, and LAW.', 'We observe that for both datasets, incorporating discourse relations improves accuracy.', 'By incorporating word embeddings, LAW yields better accuracy than LAD, showing that word embeddings add more value to Limbic than discourse relations do.']
[['tSEU', 'tSEU(D)'], ['tSEU'], None, ['tSEU(D)'], ['LA', 'LAD', 'LAW'], ['LA', 'LAD', 'LAW'], ['tSEU', 'tSEU(D)', 'LAD'], ['LAD', 'LAW']]
1
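Each record in this file stores the same table in two forms: flattened row_headers/column_headers/contents lists and a rendered HTML string. As a small sketch of how the flattened fields can be rebuilt into a table, the snippet below reuses the Limbic record above; it assumes pandas is available and that the fields have already been parsed from their string form into Python lists.

```python
import pandas as pd

# Values copied from the record above.
row_headers = [['tSEU'], ['tSEU(D)']]
column_headers = [['Accuracy', 'LA'], ['Accuracy', 'LAD'],
                  ['Accuracy', 'LAW'], ['Accuracy', 'L']]
contents = [['0.702', '0.723', '0.733', '0.750'],
            ['0.692', '0.715', '0.716', '0.735']]

# Join multi-level headers with ' || ', mirroring the cleaned HTML rendering.
index = [' || '.join(r) for r in row_headers]
columns = [' || '.join(c) for c in column_headers]
df = pd.DataFrame(contents, index=index, columns=columns).astype(float)
print(df)
```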
D18-1380table_2
The performance comparisons of different methods on the three datasets, where the results of baseline methods are retrieved from published papers. The best performances are marked in bold.
2
[['Method', 'Majority'], ['Method', 'Feature-SVM'], ['Method', 'ATAE-LSTM'], ['Method', 'TD-LSTM'], ['Method', 'IAN'], ['Method', 'MemNet'], ['Method', 'BILSTM-ATT-G'], ['Method', 'RAM'], ['Method', 'MGAN']]
2
[['Laptop', 'Acc'], ['Laptop', 'Macro-F1'], ['Restaurant', 'Acc'], ['Restaurant', 'Macro-F1'], ['Twitter', 'Acc'], ['Twitter', 'Macro-F1']]
[['0.5350', '0.3333', '0.6500', '0.3333', '0.5000', '0.3333'], ['0.7049', '-', '0.8016', '-', '0.6340', '0.6330'], ['0.6870', '-', '0.7720', '-', '-', '-'], ['0.7183', '0.6843', '0.7800', '0.6673', '0.6662', '0.6401'], ['0.7210', '-', '0.7860', '-', '-', '-'], ['0.7237', '-', '0.8032', '-', '0.6850', '0.6691'], ['0.7312', '0.6980', '0.7973', '0.6925', '0.7038', '0.6837'], ['0.7449', '0.7135', '0.8023', '0.7080', '0.6936', '0.6730'], ['0.7539', '0.7247', '0.8125', '0.7194', '0.7254', '0.7081']]
column
['Acc', 'Macro-F1', 'Acc', 'Macro-F1', 'Acc', 'Macro-F1']
['MGAN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Laptop || Acc</th> <th>Laptop || Macro-F1</th> <th>Restaurant || Acc</th> <th>Restaurant || Macro-F1</th> <th>Twitter || Acc</th> <th>Twitter || Macro-F1</th> </tr> </thead> <tbody> <tr> <td>Method || Majority</td> <td>0.5350</td> <td>0.3333</td> <td>0.6500</td> <td>0.3333</td> <td>0.5000</td> <td>0.3333</td> </tr> <tr> <td>Method || Feature-SVM</td> <td>0.7049</td> <td>-</td> <td>0.8016</td> <td>-</td> <td>0.6340</td> <td>0.6330</td> </tr> <tr> <td>Method || ATAE-LSTM</td> <td>0.6870</td> <td>-</td> <td>0.7720</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || TD-LSTM</td> <td>0.7183</td> <td>0.6843</td> <td>0.7800</td> <td>0.6673</td> <td>0.6662</td> <td>0.6401</td> </tr> <tr> <td>Method || IAN</td> <td>0.7210</td> <td>-</td> <td>0.7860</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Method || MemNet</td> <td>0.7237</td> <td>-</td> <td>0.8032</td> <td>-</td> <td>0.6850</td> <td>0.6691</td> </tr> <tr> <td>Method || BILSTM-ATT-G</td> <td>0.7312</td> <td>0.6980</td> <td>0.7973</td> <td>0.6925</td> <td>0.7038</td> <td>0.6837</td> </tr> <tr> <td>Method || RAM</td> <td>0.7449</td> <td>0.7135</td> <td>0.8023</td> <td>0.7080</td> <td>0.6936</td> <td>0.6730</td> </tr> <tr> <td>Method || MGAN</td> <td>0.7539</td> <td>0.7247</td> <td>0.8125</td> <td>0.7194</td> <td>0.7254</td> <td>0.7081</td> </tr> </tbody></table>
Table 2
table_2
D18-1380
8
emnlp2018
4.3 Overall Performance Comparison. Table 2 shows the performance comparison results of MGAN with other baseline methods. We can have the following observations. (1) Majority performs worst since it only utilizes the data distribution information. Feature+SVM can achieve much better performance on all the datasets, with the well-designed feature engineering. Our method MGAN outperforms Majority and Feature+SVM since MGAN could learn the high quality representation for prediction. (2) ATAE-LSTM is better than LSTM since it employs attention mechanism on the hidden states and combines with attention embedding to generate the final representation. TD-LSTM performs slightly better than ATAE-LSTM, and it employs two LSTM networks to capture the left and right context of the aspect. TD-LSTM performs worse than our method MGAN since it could not properly pay more attentions on the important parts of the context. (3) IAN achieves slightly better results with the previous LSTM-based methods, which interactively learns the attended aspect and context vector as final representation. Our method consistently performs better than IAN since we utilize the finegrained attention vectors to relieve the information loss in IAN. MemNet continuously learns the attended vector on the context word embedding memory, and updates the query vector at each hop. BILSTM-ATT-G models left context and right context using attention-based LSTMs, which achieves better performance than MemNet. RAM performs better than other baselines. It employs bidirectional LSTM network to generate contextual memory, and learns the multiple attended vector on the memory. Similar with MemNet, it utilizes the averaged aspect vector to learn the attention weights on context words. Our proposed MGAN consistently performs better than MemNet, BILSTM-ATT-G and RAM on all three datasets. On one hand, they only consider to learn the attention weights on context towards the aspect, and do not consider to learn the weights on aspect words towards the context. On the other hand, they just use the averaged aspect vector to guide the attention, which will lose some information, especially on the aspects with multiple words. From another perspective, our method employs the aspect alignment loss, which can bring extra useful information from the aspectlevel interactions.
[2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 2, 2, 1, 2, 2, 2]
['4.3 Overall Performance Comparison.', 'Table 2 shows the performance comparison results of MGAN with other baseline methods.', 'We can have the following observations.', '(1) Majority performs worst since it only utilizes the data distribution information.', 'Feature+SVM can achieve much better performance on all the datasets, with the well-designed feature engineering.', 'Our method MGAN outperforms Majority and Feature+SVM since MGAN could learn the high quality representation for prediction.', '(2) ATAE-LSTM is better than LSTM since it employs attention mechanism on the hidden states and combines with attention embedding to generate the final representation.', 'TD-LSTM performs slightly better than ATAE-LSTM, and it employs two LSTM networks to capture the left and right context of the aspect.', 'TD-LSTM performs worse than our method MGAN since it could not properly pay more attentions on the important parts of the context.', '(3) IAN achieves slightly better results with the previous LSTM-based methods, which interactively learns the attended aspect and context vector as final representation.', 'Our method consistently performs better than IAN since we utilize the finegrained attention vectors to relieve the information loss in IAN.', 'MemNet continuously learns the attended vector on the context word embedding memory, and updates the query vector at each hop.', 'BILSTM-ATT-G models left context and right context using attention-based LSTMs, which achieves better performance than MemNet.', 'RAM performs better than other baselines.', 'It employs bidirectional LSTM network to generate contextual memory, and learns the multiple attended vector on the memory.', 'Similar with MemNet, it utilizes the averaged aspect vector to learn the attention weights on context words.', 'Our proposed MGAN consistently performs better than MemNet, BILSTM-ATT-G and RAM on all three datasets.', 'On one hand, they only consider to learn the attention weights on context towards the aspect, and do not consider to learn the weights on aspect words towards the context.', 'On the other hand, they just use the averaged aspect vector to guide the attention, which will lose some information, especially on the aspects with multiple words.', 'From another perspective, our method employs the aspect alignment loss, which can bring extra useful information from the aspectlevel interactions.']
[None, ['MGAN'], None, ['Majority'], ['Feature-SVM'], ['MGAN', 'Majority', 'Feature-SVM'], ['ATAE-LSTM'], ['TD-LSTM', 'ATAE-LSTM'], ['MGAN', 'TD-LSTM'], ['ATAE-LSTM', 'TD-LSTM', 'IAN'], ['MGAN', 'IAN'], ['MemNet'], ['MemNet', 'BILSTM-ATT-G'], ['RAM'], ['RAM'], ['RAM', 'MemNet'], ['MGAN', 'MemNet', 'BILSTM-ATT-G', 'RAM'], ['MemNet', 'BILSTM-ATT-G', 'RAM'], ['MemNet', 'BILSTM-ATT-G', 'RAM'], ['MGAN']]
1
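The MGAN record reports accuracy and Macro-F1 for three-way (positive/negative/neutral) aspect sentiment classification. A minimal, self-contained sketch of Macro-F1 follows; the toy labels are invented and not drawn from the SemEval or Twitter datasets.

```python
def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

gold = ["pos", "neg", "neu", "pos", "neu", "neg"]
pred = ["pos", "neu", "neu", "pos", "neg", "neg"]
print(round(macro_f1(gold, pred, ["pos", "neg", "neu"]), 3))
```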
D18-1380table_3
The performance comparisons of MGAN variants. ∗ means MGAN-CF and MGAN can be regarded as the same method on the Twitter dataset.
2
[['Method', 'MGAN-C'], ['Method', 'MGAN-F'], ['Method', 'MGAN-CF'], ['Method', 'MGAN']]
2
[['Laptop', 'Acc'], ['Laptop', 'Macro-F1'], ['Restaurant', 'Acc'], ['Restaurant', 'Macro-F1'], ['Twitter', 'Acc'], ['Twitter', 'Macro-F1']]
[['0.7273', '0.6933', '0.8054', '0.7099', '0.7153', '0.6952'], ['0.7398', '0.7082', '0.8000', '0.7092', '0.7110', '0.6918'], ['0.7445', '0.7121', '0.8089', '0.7135', '0.7254*', '0.7081*'], ['0.7539', '0.7247', '0.8125', '0.7194', '0.7254', '0.7081']]
column
['Acc', 'Macro-F1', 'Acc', 'Macro-F1', 'Acc', 'Macro-F1']
['MGAN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Laptop || Acc</th> <th>Laptop || Macro-F1</th> <th>Restaurant || Acc</th> <th>Restaurant || Macro-F1</th> <th>Twitter || Acc</th> <th>Twitter || Macro-F1</th> </tr> </thead> <tbody> <tr> <td>Method || MGAN-C</td> <td>0.7273</td> <td>0.6933</td> <td>0.8054</td> <td>0.7099</td> <td>0.7153</td> <td>0.6952</td> </tr> <tr> <td>Method || MGAN-F</td> <td>0.7398</td> <td>0.7082</td> <td>0.8000</td> <td>0.7092</td> <td>0.7110</td> <td>0.6918</td> </tr> <tr> <td>Method || MGAN-CF</td> <td>0.7445</td> <td>0.7121</td> <td>0.8089</td> <td>0.7135</td> <td>0.7254*</td> <td>0.7081*</td> </tr> <tr> <td>Method || MGAN</td> <td>0.7539</td> <td>0.7247</td> <td>0.8125</td> <td>0.7194</td> <td>0.7254</td> <td>0.7081</td> </tr> </tbody></table>
Table 3
table_3
D18-1380
8
emnlp2018
4.4 Analysis of MGAN model. Table 3 shows the performance comparison among the variants of MGAN model. We can have the following observations. (1) the proposed fine-grained attention mechanism MGAN-F, which is responsible for linking and fusing the information between the context and aspect word, achieves competitive performance compared with MGAN-C, especially on laptop dataset. To investigate this case, we collect the percentage of aspects with different word lengths in Table 4. We can find that laptop dataset has the highest percentage on the aspects with more than two words, and the second-highest percentage on two words. It demonstrates MGAN-F has better performance on aspects with more words, and make use of the word-level interactions to relieve the information loss occurred in coarsegrained attention mechanism. (2) MGAN-CF is better than both MGAN-C and MGAN-F, which demonstrates the coarsegrained attentions and fine-grained attentions could improve the performance from different perspectives. Compared with MGAN-CF, the complete MGAN model gains further improvement by bringing the aspect alignment loss, which is designed to capture the aspect level interactions.
[2, 1, 1, 1, 2, 2, 1, 1, 1]
['4.4 Analysis of MGAN model.', 'Table 3 shows the performance comparison among the variants of MGAN model.', 'We can have the following observations.', '(1) the proposed fine-grained attention mechanism MGAN-F, which is responsible for linking and fusing the information between the context and aspect word, achieves competitive performance compared with MGAN-C, especially on laptop dataset.', 'To investigate this case, we collect the percentage of aspects with different word lengths in Table 4.', 'We can find that laptop dataset has the highest percentage on the aspects with more than two words, and the second-highest percentage on two words.', 'It demonstrates MGAN-F has better performance on aspects with more words, and make use of the word-level interactions to relieve the information loss occurred in coarsegrained attention mechanism.', '(2) MGAN-CF is better than both MGAN-C and MGAN-F, which demonstrates the coarsegrained attentions and fine-grained attentions could improve the performance from different perspectives.', 'Compared with MGAN-CF, the complete MGAN model gains further improvement by bringing the aspect alignment loss, which is designed to capture the aspect level interactions.']
[None, ['MGAN'], None, ['MGAN-F', 'MGAN-C', 'Laptop'], None, ['Laptop'], ['MGAN-F'], ['MGAN-CF', 'MGAN-C', 'MGAN-F'], ['MGAN-CF', 'MGAN']]
1
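The analysis above contrasts coarse-grained attention, where context words are weighted against an averaged aspect vector, with fine-grained word-level interactions. The snippet below is only a generic dot-product attention sketch of the coarse-grained case; the dimensions, vectors, and scoring function are assumptions and do not reproduce MGAN's actual layers.

```python
import numpy as np

def coarse_attention(context, aspect):
    """Weight context word vectors by similarity to the averaged aspect vector
    (a generic sketch, not the paper's exact attention)."""
    aspect_avg = aspect.mean(axis=0)            # average the aspect word vectors
    scores = context @ aspect_avg               # one score per context word
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over context positions
    return weights, weights @ context           # attended context representation

rng = np.random.default_rng(0)
context_vecs = rng.normal(size=(5, 4))   # 5 hypothetical context words, dim 4
aspect_vecs = rng.normal(size=(2, 4))    # a two-word aspect, dim 4
w, rep = coarse_attention(context_vecs, aspect_vecs)
print(w.round(3), rep.round(3))
```

Averaging the aspect vectors is exactly the step the description identifies as lossy for multi-word aspects, which is what the fine-grained attention and aspect alignment loss are meant to compensate for.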
D18-1381table_1
Experimental results on test datasets SemEval2013 and SemEval2014.
1
[['NBOW-MLP'], ['CNN'], ['BiLSTM'], ['AT-BiLSTM'], ['Lexicon RNN'], ['AGLR']]
3
[['SemEval13', '3-way', 'Acc'], ['SemEval13', '3-way', 'F1'], ['SemEval13', 'Binary', 'Acc'], ['SemEval13', 'Binary', 'F1'], ['SemEval14', '3-way', 'Acc'], ['SemEval14', '3-way', 'F1'], ['SemEval14', 'Binary', 'Acc'], ['SemEval14', 'Binary', 'F1'], ['-', 'AVG', 'Acc'], ['-', 'AVG', 'F1']]
[['65.18', '60.94', '85.44', '82.30', '65.68', '60.35', '89.44', '81.60', '76.44', '71.30'], ['71.41', '68.23', '85.74', '82.60', '70.05', '66.22', '89.86', '82.09', '79.27', '74.79'], ['72.06', '70.00', '85.89', '82.79', '71.62', '68.34', '90.20', '83.09', '79.94', '76.06'], ['72.21', '69.89', '86.13', '83.22', '71.83', '68.01', '90.20', '83.46', '80.09', '76.15'], ['69.97', '68.69', '86.43', '83.54', '70.75', '67.06', '91.13', '84.60', '79.57', '75.97'], ['73.27', '71.79', '86.72', '84.18', '73.29', '70.48', '90.37', '84.15', '80.91', '77.65']]
column
['Acc', 'F1', 'Acc', 'F1', 'Acc', 'F1', 'Acc', 'F1', 'Acc', 'F1']
['AGLR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SemEval13 || 3-way || Acc</th> <th>SemEval13 || 3-way || F1</th> <th>SemEval13 || Binary || Acc</th> <th>SemEval13 || Binary || F1</th> <th>SemEval14 || 3-way || Acc</th> <th>SemEval14 || 3-way || F1</th> <th>SemEval14 || Binary || Acc</th> <th>SemEval14 || Binary || F1</th> <th>- || AVG || Acc</th> <th>- || AVG || F1</th> </tr> </thead> <tbody> <tr> <td>NBOW-MLP</td> <td>65.18</td> <td>60.94</td> <td>85.44</td> <td>82.30</td> <td>65.68</td> <td>60.35</td> <td>89.44</td> <td>81.60</td> <td>76.44</td> <td>71.30</td> </tr> <tr> <td>CNN</td> <td>71.41</td> <td>68.23</td> <td>85.74</td> <td>82.60</td> <td>70.05</td> <td>66.22</td> <td>89.86</td> <td>82.09</td> <td>79.27</td> <td>74.79</td> </tr> <tr> <td>BiLSTM</td> <td>72.06</td> <td>70.00</td> <td>85.89</td> <td>82.79</td> <td>71.62</td> <td>68.34</td> <td>90.20</td> <td>83.09</td> <td>79.94</td> <td>76.06</td> </tr> <tr> <td>AT-BiLSTM</td> <td>72.21</td> <td>69.89</td> <td>86.13</td> <td>83.22</td> <td>71.83</td> <td>68.01</td> <td>90.20</td> <td>83.46</td> <td>80.09</td> <td>76.15</td> </tr> <tr> <td>Lexicon RNN</td> <td>69.97</td> <td>68.69</td> <td>86.43</td> <td>83.54</td> <td>70.75</td> <td>67.06</td> <td>91.13</td> <td>84.60</td> <td>79.57</td> <td>75.97</td> </tr> <tr> <td>AGLR</td> <td>73.27</td> <td>71.79</td> <td>86.72</td> <td>84.18</td> <td>73.29</td> <td>70.48</td> <td>90.37</td> <td>84.15</td> <td>80.91</td> <td>77.65</td> </tr> </tbody></table>
Table 1
table_1
D18-1381
7
emnlp2018
4.2 Experimental Results. Table 1 and Table 2 report the results of our experiments. The results on TRAIN-ALL are higher than TRAIN for SemEval16 in lieu of the larger dataset. Firstly, we observe that our proposed AGLR outperforms all neural baselines on 3-way classification. The overall performance of AGLR achieves state-of-the-art performance. On average, AGLR outperforms Lexicon RNN and AT-BiLSTM by 1% − 3% in terms of F1 score. We also observe that AGLR always improves AT-BiLSTM which ascertains the effectiveness of learning auxiliary lexicon embeddings. The key idea here is that the auxiliary lexicon embeddings provide a different view of the sentence which supports the network in making predictions. We also observe that Lexicon RNN does not handle 3-way classification well. Even though it has achieved good performance on binary classification, the performance on 3-way classification is lackluster (the performance of AGLR outperforms Lexicon RNN by up to 8% on SemEval16 TRAIN). This could also be attributed to the MSE based loss function. Conversely, by learning an auxiliary embedding (instead of a scalar score), our model becomes more flexible at the final layer and can be adapted to using a k softmax function. Finally, we observe that BiLSTM and AT-BiLSTM outperform Lexicon RNN on average with Lexicon RNN being slightly better on binary classification.
[2, 1, 0, 1, 1, 1, 1, 2, 1, 1, 2, 2, 1]
['4.2 Experimental Results.', 'Table 1 and Table 2 report the results of our experiments.', 'The results on TRAIN-ALL are higher than TRAIN for SemEval16 in lieu of the larger dataset.', 'Firstly, we observe that our proposed AGLR outperforms all neural baselines on 3-way classification.', 'The overall performance of AGLR achieves state-of-the-art performance.', 'On average, AGLR outperforms Lexicon RNN and AT-BiLSTM by 1% − 3% in terms of F1 score.', 'We also observe that AGLR always improves AT-BiLSTM which ascertains the effectiveness of learning auxiliary lexicon embeddings.', 'The key idea here is that the auxiliary lexicon embeddings provide a different view of the sentence which supports the network in making predictions.', 'We also observe that Lexicon RNN does not handle 3-way classification well.', 'Even though it has achieved good performance on binary classification, the performance on 3-way classification is lackluster (the performance of AGLR outperforms Lexicon RNN by up to 8% on SemEval16 TRAIN).', 'This could also be attributed to the MSE based loss function.', 'Conversely, by learning an auxiliary embedding (instead of a scalar score), our model becomes more flexible at the final layer and can be adapted to using a k softmax function.', 'Finally, we observe that BiLSTM and AT-BiLSTM outperform Lexicon RNN on average with Lexicon RNN being slightly better on binary classification.']
[None, None, None, ['AGLR', '3-way'], ['AGLR'], ['AGLR', 'AT-BiLSTM', 'Lexicon RNN', 'AVG', 'F1'], ['AGLR', 'AT-BiLSTM'], None, ['Lexicon RNN', '3-way'], ['Lexicon RNN', 'Binary'], None, ['AGLR'], ['BiLSTM', 'AT-BiLSTM', 'Lexicon RNN', 'Binary', 'AVG']]
1
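The description notes that AGLR learns an auxiliary lexicon embedding rather than a scalar lexicon score, which lets the final layer use a k-way softmax. The sketch below shows only the generic late-fusion idea (concatenate a sentence vector with an auxiliary embedding and apply a softmax output layer); it is not the AGLR architecture, and all shapes and parameters are invented.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(sentence_vec, lexicon_vec, W, b):
    """Generic late fusion: concatenate the sentence representation with an
    auxiliary lexicon embedding, then apply a k-way softmax output layer."""
    fused = np.concatenate([sentence_vec, lexicon_vec])
    return softmax(W @ fused + b)

rng = np.random.default_rng(1)
sent = rng.normal(size=8)      # hypothetical sentence encoder output
lex = rng.normal(size=4)       # hypothetical auxiliary lexicon embedding
W = rng.normal(size=(3, 12))   # 3-way (positive / neutral / negative) layer
b = np.zeros(3)
print(classify(sent, lex, W, b).round(3))
```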
D18-1381table_3
Comparisons against top SemEval systems. Results reported are the FPN metric scores used in the SemEval tasks.
2
[['SemEval13', 'Tweets'], ['SemEval14', 'Tweets'], ['SemEval14', 'Sarcasm'], ['SemEval14', 'LiveJournal'], ['SemEval16', 'Tweets'], ['SemEval16', 'Tweets (Acc)']]
1
[['Top System'], ['Ours']]
[['69.02', '70.10'], ['70.96', '71.11'], ['56.50', '58.87'], ['69.44', '72.52'], ['63.30', '61.90'], ['64.60', '66.60']]
column
['FPN', 'FPN']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Top System</th> <th>Ours</th> </tr> </thead> <tbody> <tr> <td>SemEval13 || Tweets</td> <td>69.02</td> <td>70.10</td> </tr> <tr> <td>SemEval14 || Tweets</td> <td>70.96</td> <td>71.11</td> </tr> <tr> <td>SemEval14 || Sarcasm</td> <td>56.50</td> <td>58.87</td> </tr> <tr> <td>SemEval14 || LiveJournal</td> <td>69.44</td> <td>72.52</td> </tr> <tr> <td>SemEval16 || Tweets</td> <td>63.30</td> <td>61.90</td> </tr> <tr> <td>SemEval16 || Tweets (Acc)</td> <td>64.60</td> <td>66.60</td> </tr> </tbody></table>
Table 3
table_3
D18-1381
8
emnlp2018
Comparisons against Top SemEval Systems. Table 3 reports the results of our proposed approach against the top team of each SemEval run, i.e., NRC-Canada (Mohammad et al., 2013) for 2013 Task 2, Team-X (Miura et al., 2014) for 2014 Task 9, SwissCheese (Deriu et al., 2016) for 2016 Task 4. We follow the exact training datasets allowed for each SemEval run. Following the competition setting, with the exception of accuracy for SemEval 2016, all metrics reported are the macro averaged F1 score of positive and negative classes. We observe that AGLR achieves competitive performance relative to the top runs in SemEval 2013, 2014 and 2016. It is good to note that SemEval approaches are often heavily engineered containing ensembles and many handcrafted features which include extensive use of sentiment lexicons, POS tags and negation detectors. Recent SemEval runs gravitate towards neural ensembles. For instance, the winning approach for SwissCheese (SemEval 2016) uses an ensemble of 6 CNN models along with a meta-classifier (random forest classifier). On the other hand, our proposed model is a single neural model. In addition, SwissCheese also uses emoticon-based distant supervision which exploits a huge corpus of sentences (millions) for training. Conversely, our approach only uses the 2013 and 2016 training sets which are significantly smaller. Given these conditions, we find it remarkable that our single model is able to achieve competitive performance relative to the extensively engineered approach of SwissCheese. Moreover, we actually outperform significantly in terms of pure accuracy. AGLR performs competitively on SemEval 2013 and 2014 as well. The good performance on the sarcasm dataset could be attributed to our contrastive attention mechanism.
[2, 1, 2, 1, 1, 2, 0, 2, 1, 2, 2, 1, 1, 1, 2]
['Comparisons against Top SemEval Systems.', 'Table 3 reports the results of our proposed approach against the top team of each SemEval run, i.e., NRC-Canada (Mohammad et al., 2013) for 2013 Task 2, Team-X (Miura et al., 2014) for 2014 Task 9, SwissCheese (Deriu et al., 2016) for 2016 Task 4.', 'We follow the exact training datasets allowed for each SemEval run.', 'Following the competition setting, with the exception of accuracy for SemEval 2016, all metrics reported are the macro averaged F1 score of positive and negative classes.', 'We observe that AGLR achieves competitive performance relative to the top runs in SemEval 2013, 2014 and 2016.', 'It is good to note that SemEval approaches are often heavily engineered containing ensembles and many handcrafted features which include extensive use of sentiment lexicons, POS tags and negation detectors.', 'Recent SemEval runs gravitate towards neural ensembles.', 'For instance, the winning approach for SwissCheese (SemEval 2016) uses an ensemble of 6 CNN models along with a meta-classifier (random forest classifier).', 'On the other hand, our proposed model is a single neural model.', 'In addition, SwissCheese also uses emoticon-based distant supervision which exploits a huge corpus of sentences (millions) for training.', 'Conversely, our approach only uses the 2013 and 2016 training sets which are significantly smaller.', 'Given these conditions, we find it remarkable that our single model is able to achieve competitive performance relative to the extensively engineered approach of SwissCheese.', 'Moreover, we actually outperform significantly in terms of pure accuracy.', 'AGLR performs competitively on SemEval 2013 and 2014 as well.', 'The good performance on the sarcasm dataset could be attributed to our contrastive attention mechanism.']
[None, ['Ours', 'Top System'], None, ['SemEval16'], ['Ours', 'Top System', 'SemEval13', 'SemEval14', 'SemEval16'], None, None, ['SemEval16', 'Top System'], ['Ours'], ['SemEval16', 'Top System'], ['Ours'], ['Ours', 'SemEval16', 'Top System'], ['Ours', 'SemEval16', 'Top System', 'Tweets (Acc)'], ['SemEval14', 'Top System', 'Ours'], None]
1
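The caption's FPN metric is, per the record's description, the macro-averaged F1 of the positive and negative classes only (neutral still affects the counts but is left out of the average). A short sketch with scikit-learn, assuming it is installed; the labels below are toy values, not SemEval data.

```python
from sklearn.metrics import f1_score

gold = ["pos", "neg", "neu", "pos", "neg", "neu"]
pred = ["pos", "neg", "pos", "neu", "neg", "neu"]

# Restrict the macro average to the positive and negative classes.
fpn = f1_score(gold, pred, labels=["pos", "neg"], average="macro")
print(round(fpn, 3))
```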
D18-1385table_2
The task and event settings performance.
2
[['Method', 'UoS-ITI'], ['Method', 'MCG-ICT'], ['Method', 'CERTH-UNITN'], ['Method', 'TFG'], ['Method', 'BFG'], ['Method', 'Combo']]
1
[['F1-Task'], ['F1-Event']]
[['0.830', '0.224'], ['0.942', '0.756'], ['0.911', '0.693'], ['0.908', '0.822'], ['0.810', '0.739'], ['0.899', '0.816']]
column
['F1-Task', 'F1-Event']
['TFG', 'BFG', 'Combo']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1-Task</th> <th>F1-Event</th> </tr> </thead> <tbody> <tr> <td>Method || UoS-ITI</td> <td>0.830</td> <td>0.224</td> </tr> <tr> <td>Method || MCG-ICT</td> <td>0.942</td> <td>0.756</td> </tr> <tr> <td>Method || CERTH-UNITN</td> <td>0.911</td> <td>0.693</td> </tr> <tr> <td>Method || TFG</td> <td>0.908</td> <td>0.822</td> </tr> <tr> <td>Method || BFG</td> <td>0.810</td> <td>0.739</td> </tr> <tr> <td>Method || Combo</td> <td>0.899</td> <td>0.816</td> </tr> </tbody></table>
Table 2
table_2
D18-1385
6
emnlp2018
6.2 Results. We describe the task setting results in Table 2, and detailed per-event results in Table 3. Although TFG does not achieve the highest F1-score in the task setting, it is mainly due to the split of the dataset. More than half of the tweets in the test set do not have images. Thus we can only leverage cross-platform information by searching videos’ URLs, which results in less accurate crosslingual cross-platform features. In the event setting, which has a more fair comparison, TFG outperformed other methods with a big margin (p<0.001). It is surprising to see that only 10 features extracted from external resources indexed by search engines leveraged by a simple classifier can bring such a big performance boost. 7.1 Results. Experiment results using the task setting are shown in Table 2 and the detailed per-event results are listed in Table 3. Similar to the problem in TFG, we can not obtain any Chinese webpages related to events such as Syrian boy, Varoufakis and zdf, which cover most tweets in the test set. Those missing features make TFB perform poorly in the task setting. However, TFB performs better than two of the baselines in the event setting. If we exclude events without Baidu webpages (event 10, 16 and 17), the average F1-score of UoS, MCG and CER are 0.130, 0.732 and 0.660, which are all lower than TFB's. The performance of TFB proves that our method can be generalized across languages or platforms. To further test the robustness of our crosslingual cross-platform features, we also examined if it would still work when leveraging external information that contains different languages. We extracted the cross-lingual cross-platform features for tweets leveraging Google and Baidu webpages together (Combo) and accessed the performance of Combo using MLP similarly. The performance of Combo is also listed in Table 2. Since Combo would contain noise introduced from combining webpages indexed by different search engines, it is not surprising that Combo performs slightly worse than TFG extracted from Google webpages which already cover a wide range of information solely. However, Combo performs much better than TFB which only leverages Baidu webpages. It proves that our cross-lingual cross-platform features are robust enough to utilize combined external information from different languages and platforms.
[2, 1, 1, 2, 2, 1, 2, 2, 1, 2, 0, 0, 0, 0, 2, 2, 1, 1, 1, 2]
['6.2 Results.', 'We describe the task setting results in Table 2, and detailed per-event results in Table 3.', 'Although TFG does not achieve the highest F1-score in the task setting, it is mainly due to the split of the dataset.', 'More than half of the tweets in the test set do not have images.', 'Thus we can only leverage cross-platform information by searching videos’ URLs, which results in less accurate crosslingual cross-platform features.', 'In the event setting, which has a more fair comparison, TFG outperformed other methods with a big margin (p<0.001).', 'It is surprising to see that only 10 features extracted from external resources indexed by search engines leveraged by a simple classifier can bring such a big performance boost.', '7.1 Results.', 'Experiment results using the task setting are shown in Table 2 and the detailed per-event results are listed in Table 3.', 'Similar to the problem in TFG, we can not obtain any Chinese webpages related to events such as Syrian boy, Varoufakis and zdf, which cover most tweets in the test set.', 'Those missing features make TFB perform poorly in the task setting.', 'However, TFB performs better than two of the baselines in the event setting.', "If we exclude events without Baidu webpages (event 10, 16 and 17), the average F1-score of UoS, MCG and CER are 0.130, 0.732 and 0.660, which are all lower than TFB's.", 'The performance of TFB proves that our method can be generalized across languages or platforms.', 'To further test the robustness of our crosslingual cross-platform features, we also examined if it would still work when leveraging external information that contains different languages.', 'We extracted the cross-lingual cross-platform features for tweets leveraging Google and Baidu webpages together (Combo) and accessed the performance of Combo using MLP similarly.', 'The performance of Combo is also listed in Table 2.', 'Since Combo would contain noise introduced from combining webpages indexed by different search engines, it is not surprising that Combo performs slightly worse than TFG extracted from Google webpages which already cover a wide range of information solely.', 'However, Combo performs much better than TFB which only leverages Baidu webpages.', 'It proves that our cross-lingual cross-platform features are robust enough to utilize combined external information from different languages and platforms.']
[None, None, ['TFG', 'F1-Task'], None, None, ['TFG', 'F1-Event'], None, None, None, ['TFG'], None, None, None, None, None, ['Combo'], ['Combo'], ['Combo', 'TFG'], ['Combo'], None]
1
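The table separates F1-Task from F1-Event, and the description compares methods by their average per-event F1. The sketch below illustrates the difference between pooling all test tweets into one F1 and macro-averaging F1 over events; the per-event confusion counts are invented, and whether F1-Task is computed exactly this way is an assumption based on the record.

```python
def f1(tp, fp, fn):
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# Invented (tp, fp, fn) counts per event for fake-tweet detection.
events = {"event_a": (40, 10, 5), "event_b": (3, 1, 6), "event_c": (12, 2, 2)}

# Event setting: average the per-event F1 scores.
f1_event = sum(f1(*c) for c in events.values()) / len(events)

# Task setting: pool the counts over all events, then compute a single F1.
tp = sum(c[0] for c in events.values())
fp = sum(c[1] for c in events.values())
fn = sum(c[2] for c in events.values())
f1_task = f1(tp, fp, fn)
print(round(f1_event, 3), round(f1_task, 3))
```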
D18-1385table_5
Rumor verification performance on the CCMR Baidu, where '-' indicates there is no webpage in that event.
4
[['ID', '01', 'Event', 'Hurricane Sandy'], ['ID', '02', 'Event', 'Boston Marathon bombing'], ['ID', '03', 'Event', 'Sochi Olympics'], ['ID', '04', 'Event', 'MH flight 370'], ['ID', '05', 'Event', 'Bring Back Our Girls'], ['ID', '06', 'Event', 'Columbian Chemicals'], ['ID', '07', 'Event', 'Passport hoax'], ['ID', '08', 'Event', 'Rock Elephant'], ['ID', '09', 'Event', 'Underwater bedroom'], ['ID', '10', 'Event', 'Livr mobile app'], ['ID', '11', 'Event', 'Pig fish'], ['ID', '12', 'Event', 'Solar Eclipse'], ['ID', '13', 'Event', 'Girl with Samurai boots'], ['ID', '14', 'Event', 'Nepal Earthquake'], ['ID', '15', 'Event', 'Garissa Attack'], ['ID', '16', 'Event', 'Syrian boy'], ['ID', '17', 'Event', 'Varoufakis and zdf'], ['ID', '-', 'Event', 'Avg']]
1
[['Random'], ['Transfer']]
[['0.247', '0.287'], ['0.230', '0.284'], ['0.555', '0.752'], ['0.407', '0.536'], ['0.500', '0.923'], ['0.000', '0.100'], ['0.000', '0.000'], ['0.000', '0.500'], ['0.577', '0.972'], ['-', '-'], ['0.375', '1.00'], ['0.571', '0.889'], ['0.559', '0.925'], ['0.227', '0.211'], ['0.125', '0.059'], ['-', '-'], ['-', '-'], ['0.312', '0.531']]
column
['accuracy', 'accuracy']
['Transfer']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Random</th> <th>Transfer</th> </tr> </thead> <tbody> <tr> <td>ID || 01 || Event || Hurricane Sandy</td> <td>0.247</td> <td>0.287</td> </tr> <tr> <td>ID || 02 || Event || Boston Marathon bombing</td> <td>0.230</td> <td>0.284</td> </tr> <tr> <td>ID || 03 || Event || Sochi Olympics</td> <td>0.555</td> <td>0.752</td> </tr> <tr> <td>ID || 04 || Event || MH flight 370</td> <td>0.407</td> <td>0.536</td> </tr> <tr> <td>ID || 05 || Event || Bring Back Our Girls</td> <td>0.500</td> <td>0.923</td> </tr> <tr> <td>ID || 06 || Event || Columbian Chemicals</td> <td>0.000</td> <td>0.100</td> </tr> <tr> <td>ID || 07 || Event || Passport hoax</td> <td>0.000</td> <td>0.000</td> </tr> <tr> <td>ID || 08 || Event || Rock Elephant</td> <td>0.000</td> <td>0.500</td> </tr> <tr> <td>ID || 09 || Event || Underwater bedroom</td> <td>0.577</td> <td>0.972</td> </tr> <tr> <td>ID || 10 || Event || Livr mobile app</td> <td>-</td> <td>-</td> </tr> <tr> <td>ID || 11 || Event || Pig fish</td> <td>0.375</td> <td>1.00</td> </tr> <tr> <td>ID || 12 || Event || Solar Eclipse</td> <td>0.571</td> <td>0.889</td> </tr> <tr> <td>ID || 13 || Event || Girl with Samurai boots</td> <td>0.559</td> <td>0.925</td> </tr> <tr> <td>ID || 14 || Event || Nepal Earthquake</td> <td>0.227</td> <td>0.211</td> </tr> <tr> <td>ID || 15 || Event || Garissa Attack</td> <td>0.125</td> <td>0.059</td> </tr> <tr> <td>ID || 16 || Event || Syrian boy</td> <td>-</td> <td>-</td> </tr> <tr> <td>ID || 17 || Event || Varoufakis and zdf</td> <td>-</td> <td>-</td> </tr> <tr> <td>ID || - || Event || Avg</td> <td>0.312</td> <td>0.531</td> </tr> </tbody></table>
Table 5
table_5
D18-1385
8
emnlp2018
8.1 Results. Table 5 lists the detailed results of our transfer learning experiment. We achieved much better performance compared to the baseline with statistical significance (p<0.001), which indicates that our cross-lingual cross-platform feature set can be generalized to rumors in different languages. It enables the trained classifier to leverage the information learned from one language to another. 8.2 Analysis. In event 11 (Pig fish), Transfer achieves much higher performance than the random baseline. Generally, Baidu webpages’ titles are semantically different from tweets. However, in this particular event, the textual information of those titles and tweets are semantically close. As a result, models learned from English rumors can easily work on Chinese rumors, which is helpful for our transfer learning. Figure 4 shows three Twitter-Baidu rumor pairs with similar meaning in this event. Transfer obtains pretty low F1-scores in event 07 (Passport hoax). The annotation conflict caused its weak performance. This event is about a Child drew all over his dads passport and made his dad stuck in South Korea. During the manual annotation process, we found out that it is a real event confirmed by official accounts according to one news article from Chinese social media, while CCMR Twitter labeled such tweets as fake. Since Transfer is pre-trained using Twitter dataset, it is not surprising that Transfer achieves 0 in F1-score on this event. The annotation conflict also brings out that rumor verification will benefit from utilizing cross-lingual and cross-platform information.
[2, 1, 1, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 1, 2]
['8.1 Results.', 'Table 5 lists the detailed results of our transfer learning experiment.', 'We achieved much better performance compared to the baseline with statistical significance (p<0.001), which indicates that our cross-lingual cross-platform feature set can be generalized to rumors in different languages.', 'It enables the trained classifier to leverage the information learned from one language to another.', '8.2 Analysis.', 'In event 11 (Pig fish), Transfer achieves much higher performance than the random baseline.', 'Generally, Baidu webpages’ titles are semantically different from tweets.', 'However, in this particular event, the textual information of those titles and tweets are semantically close.', 'As a result, models learned from English rumors can easily work on Chinese rumors, which is helpful for our transfer learning.', 'Figure 4 shows three Twitter-Baidu rumor pairs with similar meaning in this event.', 'Transfer obtains pretty low F1-scores in event 07 (Passport hoax).', 'The annotation conflict caused its weak performance.', 'This event is about a Child drew all over his dads passport and made his dad stuck in South Korea.', 'During the manual annotation process, we found out that it is a real event confirmed by official accounts according to one news article from Chinese social media, while CCMR Twitter labeled such tweets as fake.', 'Since Transfer is pre-trained using Twitter dataset, it is not surprising that Transfer achieves 0 in F1-score on this event.', 'The annotation conflict also brings out that rumor verification will benefit from utilizing cross-lingual and cross-platform information.']
[None, ['Transfer'], ['Transfer', 'Random'], None, None, ['Random', 'Transfer', 'ID', '11', 'Pig fish'], None, ['Pig fish'], None, None, ['Transfer', 'ID', '07', 'Passport hoax'], ['Passport hoax'], ['Passport hoax'], ['Passport hoax'], ['Transfer', 'Passport hoax'], None]
1
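The caption marks events without Baidu webpages with a missing value, and the Avg row averages only the events that have scores. The snippet below rebuilds that average directly from the record's contents as a small sanity check; skipping the '-' cells reproduces the reported 0.312 and 0.531 to three decimals.

```python
# Per-event scores copied from the record above; '-' marks events with no webpages.
random_col = ['0.247', '0.230', '0.555', '0.407', '0.500', '0.000', '0.000', '0.000',
              '0.577', '-', '0.375', '0.571', '0.559', '0.227', '0.125', '-', '-']
transfer_col = ['0.287', '0.284', '0.752', '0.536', '0.923', '0.100', '0.000', '0.500',
                '0.972', '-', '1.00', '0.889', '0.925', '0.211', '0.059', '-', '-']

def avg_skipping_missing(col):
    vals = [float(v) for v in col if v != '-']
    return sum(vals) / len(vals)

print(round(avg_skipping_missing(random_col), 3),
      round(avg_skipping_missing(transfer_col), 3))   # -> 0.312 0.531
```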
D18-1386table_1
Rationale performance relative to human annotations. Prediction accuracy is based on a binary threshold of 0.5. Performance of both Lei2016 model variants is significantly different from the baseline model (McNemar’s test, p < 0.05)
2
[['Model', 'Sigmoid predictor'], ['Model', 'RNN predictor'], ['Model', 'Mean human performance'], ['Model', 'Sigmoid predictor + feature importance'], ['Model', 'RNN predictor + sigmoid generator'], ['Model', 'RNN predictor + LIME'], ['Model', 'Lei2016'], ['Model', 'Lei2016 + bias'], ['Model', 'Lei2016 + bias + inverse (EAN)']]
3
[['Rationale', 'Tokenwise', 'F1'], ['Rationale', 'Tokenwise', 'Pr.'], ['Rationale', 'Tokenwise', 'Rec.'], ['Rationale', 'Phrasewise', 'F1'], ['Rationale', 'Phrasewise', 'Pr.'], ['Rationale', 'Phrasewise', 'Rec.'], ['Prediction', '-', 'MSE'], ['Prediction', '-', 'Acc.'], ['Prediction', '-', 'F1']]
[['-', '-', '-', '-', '-', '-', '0.029', '0.94', '0.74'], ['-', '-', '-', '-', '-', '-', '0.018', '0.95', '0.78'], ['0.55', '0.62', '0.57', '0.72', '0.78', '0.69', '-', '--', '-'], ['0.20', '0.62', '0.12', '0.64', '0.59', '0.70', '0.029', '0.94', '0.74'], ['0.29', '0.22', '0.45', '0.31', '0.19', '0.92', '0.038', '0.91', '0.70'], ['0.33', '0.29', '0.39', '0.4', '0.25', '0.96', '0.018', '0.95', '0.78'], ['0.44', '0.38', '0.52', '0.51', '0.38', '0.83', '0.021', '0.95', '0.77'], ['0.49', '0.48', '0.49', '0.60', '0.46', '0.86', '0.02', '0.95', '0.77'], ['0.53', '0.48', '0.58', '0.61', '0.47', '0.87', '0.021', '0.95', '0.77']]
column
['F1', 'Pr.', 'Rec.', 'F1', 'Pr.', 'Rec.', 'MSE', 'Acc.', 'F1']
['Lei2016', 'Lei2016 + bias', 'Lei2016 + bias + inverse (EAN)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rationale || Tokenwise || F1</th> <th>Rationale || Tokenwise || Pr.</th> <th>Rationale || Tokenwise || Rec.</th> <th>Rationale || Phrasewise || F1</th> <th>Rationale || Phrasewise || Pr.</th> <th>Rationale || Phrasewise || Rec.</th> <th>Prediction || - || MSE</th> <th>Prediction || - || Acc.</th> <th>Prediction || - || F1</th> </tr> </thead> <tbody> <tr> <td>Model || Sigmoid predictor</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>0.029</td> <td>0.94</td> <td>0.74</td> </tr> <tr> <td>Model || RNN predictor</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>0.018</td> <td>0.95</td> <td>0.78</td> </tr> <tr> <td>Model || Mean human performance</td> <td>0.55</td> <td>0.62</td> <td>0.57</td> <td>0.72</td> <td>0.78</td> <td>0.69</td> <td>-</td> <td>--</td> <td>-</td> </tr> <tr> <td>Model || Sigmoid predictor + feature importance</td> <td>0.20</td> <td>0.62</td> <td>0.12</td> <td>0.64</td> <td>0.59</td> <td>0.70</td> <td>0.029</td> <td>0.94</td> <td>0.74</td> </tr> <tr> <td>Model || RNN predictor + sigmoid generator</td> <td>0.29</td> <td>0.22</td> <td>0.45</td> <td>0.31</td> <td>0.19</td> <td>0.92</td> <td>0.038</td> <td>0.91</td> <td>0.70</td> </tr> <tr> <td>Model || RNN predictor + LIME</td> <td>0.33</td> <td>0.29</td> <td>0.39</td> <td>0.4</td> <td>0.25</td> <td>0.96</td> <td>0.018</td> <td>0.95</td> <td>0.78</td> </tr> <tr> <td>Model || Lei2016</td> <td>0.44</td> <td>0.38</td> <td>0.52</td> <td>0.51</td> <td>0.38</td> <td>0.83</td> <td>0.021</td> <td>0.95</td> <td>0.77</td> </tr> <tr> <td>Model || Lei2016 + bias</td> <td>0.49</td> <td>0.48</td> <td>0.49</td> <td>0.60</td> <td>0.46</td> <td>0.86</td> <td>0.02</td> <td>0.95</td> <td>0.77</td> </tr> <tr> <td>Model || Lei2016 + bias + inverse (EAN)</td> <td>0.53</td> <td>0.48</td> <td>0.58</td> <td>0.61</td> <td>0.47</td> <td>0.87</td> <td>0.021</td> <td>0.95</td> <td>0.77</td> </tr> </tbody></table>
Table 1
table_1
D18-1386
7
emnlp2018
Table 1 displays the results. The difference in performance between the three baselines that don't use a RNN generator and the three model variants that do demonstrates the importance of context in recognizing personal attacks within text. The relative performance of the three variants of the Lei et al. model show that both modifications, setting the bias term and the addition of the adversarial predictor, lead to marginal improvements in tokenwise F1. The best-performing model approaches average human performance on this metric. The phrasewise metric is relaxed. It allows a contiguous personal attack sequence to be considered captured if even a single token from the sequence is captured. The results on this metric show that in an absolute sense, 87% of personal attacks are at least partially captured by the algorithm. The simplest baseline, which produces rationales by thresholding the coefficients of a logistic regression model, does deceptively well on this metric by only identifying attacking words like ”jerk” and ”a**hole”, but its poor tokenwise performance shows that it doesn’t mimic human highlighting very well.
[1, 1, 1, 1, 1, 2, 1, 2]
['Table 1 displays the results.', "The difference in performance between the three baselines that don't use a RNN generator and the three model variants that do demonstrates the importance of context in recognizing personal attacks within text.", 'The relative performance of the three variants of the Lei et al. model show that both modifications, setting the bias term and the addition of the adversarial predictor, lead to marginal improvements in tokenwise F1.', 'The best-performing model approaches average human performance on this metric.', 'The phrasewise metric is relaxed.', 'It allows a contiguous personal attack sequence to be considered captured if even a single token from the sequence is captured.', 'The results on this metric show that in an absolute sense, 87% of personal attacks are at least partially captured by the algorithm.', 'The simplest baseline, which produces rationales by thresholding the coefficients of a logistic regression model, does deceptively well on this metric by only identifying attacking words like ”jerk” and ”a**hole”, but its poor tokenwise performance shows that it doesn’t mimic human highlighting very well.']
[None, ['RNN predictor', 'RNN predictor + sigmoid generator', 'RNN predictor + LIME'], ['Lei2016', 'Lei2016 + bias', 'Lei2016 + bias + inverse (EAN)'], ['Mean human performance'], ['Phrasewise'], ['Phrasewise'], ['Lei2016 + bias + inverse (EAN)', 'Phrasewise', 'Rec.'], None]
1
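For readers working with these records programmatically, the sketch below shows one way a single record could be inspected: the flat HTML stored in table_html_clean is parsed back into a DataFrame and the '||'-joined header paths are split into their levels. This is a minimal illustration only, assuming pandas (with lxml) is installed and that `record` is a dict keyed by the field names used in this corpus; the function name is made up and not part of any corpus tooling.

```python
# Minimal sketch (not corpus tooling): inspect one record of this dataset.
# Assumes `record` is a dict keyed by the field names shown above
# ("table_html_clean", "contents", ...); pandas + lxml must be installed.
from io import StringIO
import ast

import pandas as pd


def inspect_record(record):
    # Recover a DataFrame from the flat HTML rendering stored in the record.
    df = pd.read_html(StringIO(record["table_html_clean"]))[0]

    # Column headers are "||"-joined paths (e.g. "Sentence-Level || P (%)");
    # split them back into their hierarchy levels.
    column_paths = [[p.strip() for p in str(c).split("||")] for c in df.columns]

    # List-valued fields may arrive either as Python objects or as their
    # string serialisation; handle both cases.
    contents = record["contents"]
    if isinstance(contents, str):
        contents = ast.literal_eval(contents)

    return df, column_paths, contents
```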
D18-1387table_7
Results on classifying vague sentences.
2
[['System', 'Baseline (Majority)'], ['System', 'LSTM'], ['System', 'CNN'], ['System', 'AC-GAN (Full Model)'], ['System', 'AC-GAN (Vagueness Only)']]
2
[['Sentence-Level', 'P (%)'], ['Sentence-Level', 'R (%)'], ['Sentence-Level', 'F (%)']]
[['25.77', '50.77', '34.19'], ['47.79', '50.06', '47.88'], ['49.66', '52.51', '50.18'], ['51.00', '53.50', '50.42'], ['52.90', '54.64', '52.34']]
column
['P (%)', 'R (%)', 'F (%)']
['AC-GAN (Full Model)', 'AC-GAN (Vagueness Only)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sentence-Level || P (%)</th> <th>Sentence-Level || R (%)</th> <th>Sentence-Level || F (%)</th> </tr> </thead> <tbody> <tr> <td>System || Baseline (Majority)</td> <td>25.77</td> <td>50.77</td> <td>34.19</td> </tr> <tr> <td>System || LSTM</td> <td>47.79</td> <td>50.06</td> <td>47.88</td> </tr> <tr> <td>System || CNN</td> <td>49.66</td> <td>52.51</td> <td>50.18</td> </tr> <tr> <td>System || AC-GAN (Full Model)</td> <td>51.00</td> <td>53.50</td> <td>50.42</td> </tr> <tr> <td>System || AC-GAN (Vagueness Only)</td> <td>52.90</td> <td>54.64</td> <td>52.34</td> </tr> </tbody></table>
Table 7
table_7
D18-1387
8
emnlp2018
6.3 Predicting Vague Sentences. In Table 7 we present results on classifying privacy policy sentences into four categories: clear, somewhat clear, vague, and extremely vague. We compare AC-GAN with three baselines: CNN and LSTM trained on human-annotated sentences, and a majority baseline that assigns the most frequent label to all test sentences. We observe that the AC-GAN models (using CNN discriminator) perform strongly, surpassing all baseline approaches. CNN shows strong performance, yielding an F-score of 50.92%. A similar effect has been demonstrated on other sentence classification tasks, where CNN outperforms LSTM and logistic regression classifiers (Kim, 2014; Zhang and Wallace, 2015). We report results of AC-GAN using the CNN discriminator. Comparing "Full Model" with "Vagueness Only", we found that allowing the AC-GAN to only discriminate sentences of different levels of vagueness, but not real/fake sentences, yields better results. We conjecture this is because training GAN models, especially with a multitask learning objective, can be unstable and more effort is required to balance the two objectives (LS and LC).
[2, 1, 1, 1, 1, 2, 2, 1, 2]
['6.3 Predicting Vague Sentences.', 'In Table 7 we present results on classifying privacy policy sentences into four categories: clear, somewhat clear, vague, and extremely vague.', 'We compare AC-GAN with three baselines: CNN and LSTM trained on human-annotated sentences, and a majority baseline that assigns the most frequent label to all test sentences.', 'We observe that the AC-GAN models (using CNN discriminator) perform strongly, surpassing all baseline approaches.', 'CNN shows strong performance, yielding an F-score of 50.92%.', 'A similar effect has been demonstrated on other sentence classification tasks, where CNN outperforms LSTM and logistic regression classifiers (Kim, 2014; Zhang and Wallace, 2015).', 'We report results of AC-GAN using the CNN discriminator.', 'Comparing "Full Model" with "Vagueness Only", we found that allowing the AC-GAN to only discriminate sentences of different levels of vagueness, but not real/fake sentences, yields better results.', 'We conjecture this is because training GAN models, especially with a multitask learning objective, can be unstable and more effort is required to balance the two objectives (LS and LC).']
[None, None, ['AC-GAN (Full Model)', 'AC-GAN (Vagueness Only)', 'Baseline (Majority)', 'LSTM', 'CNN'], ['AC-GAN (Full Model)', 'AC-GAN (Vagueness Only)'], ['CNN'], ['CNN'], ['AC-GAN (Full Model)', 'AC-GAN (Vagueness Only)'], ['AC-GAN (Full Model)', 'AC-GAN (Vagueness Only)'], None]
1
D18-1388table_1
Precision, Recall, and F1 scores of our model MVDAM on the test set compared with several baselines. All flavors of our model significantly outperform baselines and yield state of the art performance.
4
[['Model', 'CHANCE', 'Views', '-'], ['Model', 'LR', 'Views', 'Title'], ['Model', 'CNN', 'Views', 'Title'], ['Model', 'FNN', 'Views', 'Network'], ['Model', 'HDAM', 'Views', 'Content'], ['Model', 'MVDAM', 'Views', 'Title Network'], ['Model', 'MVDAM', 'Views', 'Title Content'], ['Model', 'MVDAM', 'Views', 'Title Network Content']]
1
[['P'], ['R'], ['F1']]
[['34.53', '34.59', '34.53'], ['59.53', '59.42', '59.12'], ['59.26', '59.40', '59.24'], ['68.28', '56.54', '55.10'], ['69.85', '68.72', '68.92'], ['69.87', '69.71', '69.66'], ['70.84', '70.19', '69.54'], ['80.10', '79.56', '79.67']]
column
['P', 'R', 'F1']
['MVDAM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || CHANCE || Views || -</td> <td>34.53</td> <td>34.59</td> <td>34.53</td> </tr> <tr> <td>Model || LR || Views || Title</td> <td>59.53</td> <td>59.42</td> <td>59.12</td> </tr> <tr> <td>Model || CNN || Views || Title</td> <td>59.26</td> <td>59.40</td> <td>59.24</td> </tr> <tr> <td>Model || FNN || Views || Network</td> <td>68.28</td> <td>56.54</td> <td>55.10</td> </tr> <tr> <td>Model || HDAM || Views || Content</td> <td>69.85</td> <td>68.72</td> <td>68.92</td> </tr> <tr> <td>Model || MVDAM || Views || Title Network</td> <td>69.87</td> <td>69.71</td> <td>69.66</td> </tr> <tr> <td>Model || MVDAM || Views || Title Content</td> <td>70.84</td> <td>70.19</td> <td>69.54</td> </tr> <tr> <td>Model || MVDAM || Views || Title Network Content</td> <td>80.10</td> <td>79.56</td> <td>79.67</td> </tr> </tbody></table>
Table 1
table_1
D18-1388
6
emnlp2018
5.1 Results and Analysis. Quantitative Results. Table 1 shows the results of the evaluation. First note that the logistic regression classifier and the CNN model using the Title outperforms the CHANCE classifier significantly (F1: 59.12,59.24 vs 34.53). Second, only modeling the network structure yields a F1 of 55.10 but still significantly better than the chance baseline. This confirms our intuition that modeling the network structure can be useful in prediction of ideology. Third, note that modeling the content (HDAM) significantly outperforms all previous baselines (F1:68.92). This suggests that content cues can be very strong indicators of ideology. Finally, all flavors of our model outperform the baselines. Specifically, observe that incorporating the network cues outperforms all uni-modal models that only model either the title, the network, or the content. It is also worth noting that without the network, only the title and the content show only a small improvement over the best performing baseline (69.54 vs 68.92) suggesting that the network yields distinctive cues from both the title, and the content. Finally, the best performing model effectively uses all three modalities to yield a F1 score of 79.67 outperforming the state of the art baseline by 10 percentage points. Altogether our results suggest the superiority of our model over competitive baselines. In order to obtain deeper insights into our model, we also perform a qualitative analysis of our model’s predictions.
[2, 2, 1, 1, 1, 2, 1, 2, 1, 2, 1, 1, 2, 2]
['5.1 Results and Analysis.', 'Quantitative Results.', 'Table 1 shows the results of the evaluation.', 'First note that the logistic regression classifier and the CNN model using the Title outperforms the CHANCE classifier significantly (F1: 59.12,59.24 vs 34.53).', 'Second, only modeling the network structure yields a F1 of 55.10 but still significantly better than the chance baseline.', 'This confirms our intuition that modeling the network structure can be useful in prediction of ideology.', 'Third, note that modeling the content (HDAM) significantly outperforms all previous baselines (F1:68.92).', 'This suggests that content cues can be very strong indicators of ideology.', 'Finally, all flavors of our model outperform the baselines.', 'Specifically, observe that incorporating the network cues outperforms all uni-modal models that only model either the title, the network, or the content.', 'It is also worth noting that without the network, only the title and the content show only a small improvement over the best performing baseline (69.54 vs 68.92) suggesting that the network yields distinctive cues from both the title, and the content.', 'Finally, the best performing model effectively uses all three modalities to yield a F1 score of 79.67 outperforming the state of the art baseline by 10 percentage points.', 'Altogether our results suggest the superiority of our model over competitive baselines.', 'In order to obtain deeper insights into our model, we also perform a qualitative analysis of our model’s predictions.']
[None, None, None, ['LR', 'CNN', 'CHANCE', 'F1'], ['FNN', 'CHANCE', 'F1'], ['FNN'], ['HDAM', 'F1'], ['HDAM'], ['MVDAM', 'Title Network', 'Title Content', 'Title Network Content'], ['Title Network Content'], ['MVDAM', 'Title Content', 'HDAM', 'F1', 'Network'], ['MVDAM', 'Title Network Content', 'HDAM', 'F1'], ['MVDAM'], None]
1
D18-1389table_4
Results for factuality and bias prediction. Bold values indicate the best-performing feature type in its family of features, while underlined values indicate the best-performing feature type overall.
5
[['Source', 'Majority Baseline', '-', 'Dim.', '-'], ['Source', 'Traffic', 'Alexa rank', 'Dim.', '1'], ['Source', 'URL', 'URL URL structure', 'Dim.', '12'], ['Source', 'Twitter', 'created at.', 'Dim.', '1'], ['Source', 'Twitter', 'has account', 'Dim.', '1'], ['Source', 'Twitter', 'verified', 'Dim.', '1'], ['Source', 'Twitter', 'has location', 'Dim.', '1'], ['Source', 'Twitter', 'URL match', 'Dim.', '2'], ['Source', 'Twitter', 'description', 'Dim.', '300'], ['Source', 'Twitter', 'counts', 'Dim.', '5'], ['Source', 'Twitter', 'Twitter-All', 'Dim.', '308'], ['Source', 'Wikipedia', 'has page', 'Dim.', '1'], ['Source', 'Wikipedia', 'table of content', 'Dim.', '300'], ['Source', 'Wikipedia', 'categories', 'Dim.', '300'], ['Source', 'Wikipedia', 'information box', 'Dim.', '300'], ['Source', 'Wikipedia', 'summary', 'Dim.', '300'], ['Source', 'Wikipedia', 'content', 'Dim.', '300'], ['Source', 'Wikipedia', 'Wikipedia-All', 'Dim.', '301'], ['Source', 'Articles', 'title', 'Dim.', '141'], ['Source', 'Articles', 'body', 'Dim.', '141']]
2
[['Factuality', 'Macro-F1'], ['Factuality', 'Acc.'], ['Factuality', 'MAE'], ['Factuality', 'MAEM'], ['Bias', 'Macro-F1'], ['Bias', 'Acc.'], ['Bias', 'MAE'], ['Bias', 'MAEM']]
[['22.47', '50.84', '0.73', '1.00', '5.65', '24.67', '1.39', '1.71'], ['22.46', '50.75', '0.73', '1.00', '7.76', '25.70', '1.38', '1.71'], ['39.30', '53.28', '0.68', '0.81', '13.50', '23.64', '1.65', '2.06'], ['30.72', '52.91', '0.69', '0.92', '5.65', '24.67', '1.39', '1.71'], ['30.72', '52.91', '0.69', '0.92', '5.65', '24.67', '1.39', '1.71'], ['30.72', '52.91', '0.69', '0.92', '5.65', '24.67', '1.39', '1.71'], ['36.73', '52.72', '0.69', '0.82', '9.44', '24.86', '1.54', '1.85'], ['39.98', '54.60', '0.66', '0.72', '10.16', '25.61', '1.51', '1.97'], ['44.79', '51.41', '0.65', '0.70', '19.08', '25.33', '1.73', '2.04'], ['46.88', '57.22', '0.57', '0.66', '18.34', '24.86', '1.62', '2.01'], ['48.23', '54.78', '0.59', '0.64', '21.38', '27.77', '1.58', '1.83'], ['43.53', '59.10', '0.57', '0.63', '14.33', '26.83', '1.63', '2.14'], ['43.95', '51.04', '0.60', '0.65', '15.10', '22.96', '1.86', '2.25'], ['46.36', '53.70', '0.65', '0.61', '25.64', '32.16', '1.70', '2.10'], ['46.39', '51.14', '0.71', '0.65', '19.79', '26.85', '1.68', '1.99'], ['51.88', '58.91', '0.54', '0.52', '30.02', '37.43', '1.47', '1.98'], ['55.29', '62.10', '0.51', '0.50', '30.92', '38.61', '1.51', '2.01'], ['55.52', '62.29', '0.50', '0.49', '28.66', '35.93', '1.51', '2.00'], ['53.20', '59.57', '0.51', '0.58', '30.91', '37.52', '1.29', '1.53'], ['58.02', '64.35', '0.43', '0.51', '36.63', '41.74', '1.15', '1.43']]
column
['Macro-F1', 'Acc.', 'MAE', 'MAEM', 'Macro-F1', 'Acc.', 'MAE', 'MAEM']
['Traffic', 'URL', 'Twitter', 'Wikipedia', 'Articles']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Factuality || Macro-F1</th> <th>Factuality || Acc.</th> <th>Factuality || MAE</th> <th>Factuality || MAEM</th> <th>Bias || Macro-F1</th> <th>Bias || Acc.</th> <th>Bias || MAE</th> <th>Bias || MAEM</th> </tr> </thead> <tbody> <tr> <td>Source || Majority Baseline || - || Dim. || -</td> <td>22.47</td> <td>50.84</td> <td>0.73</td> <td>1.00</td> <td>5.65</td> <td>24.67</td> <td>1.39</td> <td>1.71</td> </tr> <tr> <td>Source || Traffic || Alexa rank || Dim. || 1</td> <td>22.46</td> <td>50.75</td> <td>0.73</td> <td>1.00</td> <td>7.76</td> <td>25.70</td> <td>1.38</td> <td>1.71</td> </tr> <tr> <td>Source || URL || URL URL structure || Dim. || 12</td> <td>39.30</td> <td>53.28</td> <td>0.68</td> <td>0.81</td> <td>13.50</td> <td>23.64</td> <td>1.65</td> <td>2.06</td> </tr> <tr> <td>Source || Twitter || created at. || Dim. || 1</td> <td>30.72</td> <td>52.91</td> <td>0.69</td> <td>0.92</td> <td>5.65</td> <td>24.67</td> <td>1.39</td> <td>1.71</td> </tr> <tr> <td>Source || Twitter || has account || Dim. || 1</td> <td>30.72</td> <td>52.91</td> <td>0.69</td> <td>0.92</td> <td>5.65</td> <td>24.67</td> <td>1.39</td> <td>1.71</td> </tr> <tr> <td>Source || Twitter || verified || Dim. || 1</td> <td>30.72</td> <td>52.91</td> <td>0.69</td> <td>0.92</td> <td>5.65</td> <td>24.67</td> <td>1.39</td> <td>1.71</td> </tr> <tr> <td>Source || Twitter || has location || Dim. || 1</td> <td>36.73</td> <td>52.72</td> <td>0.69</td> <td>0.82</td> <td>9.44</td> <td>24.86</td> <td>1.54</td> <td>1.85</td> </tr> <tr> <td>Source || Twitter || URL match || Dim. || 2</td> <td>39.98</td> <td>54.60</td> <td>0.66</td> <td>0.72</td> <td>10.16</td> <td>25.61</td> <td>1.51</td> <td>1.97</td> </tr> <tr> <td>Source || Twitter || description || Dim. || 300</td> <td>44.79</td> <td>51.41</td> <td>0.65</td> <td>0.70</td> <td>19.08</td> <td>25.33</td> <td>1.73</td> <td>2.04</td> </tr> <tr> <td>Source || Twitter || counts || Dim. || 5</td> <td>46.88</td> <td>57.22</td> <td>0.57</td> <td>0.66</td> <td>18.34</td> <td>24.86</td> <td>1.62</td> <td>2.01</td> </tr> <tr> <td>Source || Twitter || Twitter-All || Dim. || 308</td> <td>48.23</td> <td>54.78</td> <td>0.59</td> <td>0.64</td> <td>21.38</td> <td>27.77</td> <td>1.58</td> <td>1.83</td> </tr> <tr> <td>Source || Wikipedia || has page || Dim. || 1</td> <td>43.53</td> <td>59.10</td> <td>0.57</td> <td>0.63</td> <td>14.33</td> <td>26.83</td> <td>1.63</td> <td>2.14</td> </tr> <tr> <td>Source || Wikipedia || table of content || Dim. || 300</td> <td>43.95</td> <td>51.04</td> <td>0.60</td> <td>0.65</td> <td>15.10</td> <td>22.96</td> <td>1.86</td> <td>2.25</td> </tr> <tr> <td>Source || Wikipedia || categories || Dim. || 300</td> <td>46.36</td> <td>53.70</td> <td>0.65</td> <td>0.61</td> <td>25.64</td> <td>32.16</td> <td>1.70</td> <td>2.10</td> </tr> <tr> <td>Source || Wikipedia || information box || Dim. || 300</td> <td>46.39</td> <td>51.14</td> <td>0.71</td> <td>0.65</td> <td>19.79</td> <td>26.85</td> <td>1.68</td> <td>1.99</td> </tr> <tr> <td>Source || Wikipedia || summary || Dim. || 300</td> <td>51.88</td> <td>58.91</td> <td>0.54</td> <td>0.52</td> <td>30.02</td> <td>37.43</td> <td>1.47</td> <td>1.98</td> </tr> <tr> <td>Source || Wikipedia || content || Dim. || 300</td> <td>55.29</td> <td>62.10</td> <td>0.51</td> <td>0.50</td> <td>30.92</td> <td>38.61</td> <td>1.51</td> <td>2.01</td> </tr> <tr> <td>Source || Wikipedia || Wikipedia-All || Dim. || 301</td> <td>55.52</td> <td>62.29</td> <td>0.50</td> <td>0.49</td> <td>28.66</td> <td>35.93</td> <td>1.51</td> <td>2.00</td> </tr> <tr> <td>Source || Articles || title || Dim. || 141</td> <td>53.20</td> <td>59.57</td> <td>0.51</td> <td>0.58</td> <td>30.91</td> <td>37.52</td> <td>1.29</td> <td>1.53</td> </tr> <tr> <td>Source || Articles || body || Dim. || 141</td> <td>58.02</td> <td>64.35</td> <td>0.43</td> <td>0.51</td> <td>36.63</td> <td>41.74</td> <td>1.15</td> <td>1.43</td> </tr> </tbody></table>
Table 4
table_4
D18-1389
7
emnlp2018
4.3 Results and Discussion. We present in Table 4 the results of using features from the different sources proposed in Section 3. We start by describing the contribution of each feature type towards factuality and bias. We can see that the textual features extracted from the ARTICLES yielded the best performance on factuality. They also perform well on bias, being the only type that beats the baseline on MAE. These results indicate the importance of analyzing the contents of the target website. They also show that using the titles only is not enough, and that the article bodies contain important information that should not be ignored. Overall, the WIKIPEDIA features are less useful for factuality, and perform reasonably well for bias. The best features from this family are those about the page content, which includes a general description of the medium, its history, ideology and other information that can be potentially helpful. Interestingly, the has page feature alone yields sizable improvement over the baseline, especially for factuality. This makes sense given that trustworthy websites are more likely to have Wikipedia pages; yet, this feature does not help much for predicting political bias. The TWITTER features perform moderately for factuality and poorly for bias. This is not surprising, as we normally may not be able to tell much about the political ideology of a website just by looking at its Twitter profile (not its tweets!) unless something is mentioned in its description, which turns out to perform better than the rest of the features from this family. We can see that the has twitter feature is less effective than has wiki for factuality, which makes sense given that Twitter is less regulated than Wikipedia. Note that the counts features yield reasonable performance, indicating that information about activity (e.g., number of statuses) and social connectivity (e.g., number of followers) is useful. Overall, the TWITTER features seem to complement each other, as their union yields the best performance on factuality. The URL features are better used for factuality rather than bias prediction. This is mainly due to the nature of these features, which are aimed at detecting phishing websites, as we mentioned in Section 3. Overall, this feature family yields slight improvements, suggesting that it can be useful when used together with other features. Finally, the Alexa rank does not improve over the baseline, which suggests that more sophisticated TRAFFIC-related features might be needed.
[2, 1, 2, 1, 1, 2, 1, 1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1]
['4.3 Results and Discussion.', 'We present in Table 4 the results of using features from the different sources proposed in Section 3.', 'We start by describing the contribution of each feature type towards factuality and bias.', 'We can see that the textual features extracted from the ARTICLES yielded the best performance on factuality.', 'They also perform well on bias, being the only type that beats the baseline on MAE.', 'These results indicate the importance of analyzing the contents of the target website.', 'They also show that using the titles only is not enough, and that the article bodies contain important information that should not be ignored.', 'Overall, the WIKIPEDIA features are less useful for factuality, and perform reasonably well for bias.', 'The best features from this family are those about the page content, which includes a general description of the medium, its history, ideology and other information that can be potentially helpful.', 'Interestingly, the has page feature alone yields sizable improvement over the baseline, especially for factuality.', 'This makes sense given that trustworthy websites are more likely to have Wikipedia pages; yet, this feature does not help much for predicting political bias.', 'The TWITTER features perform moderately for factuality and poorly for bias.', 'This is not surprising, as we normally may not be able to tell much about the political ideology of a website just by looking at its Twitter profile (not its tweets!) unless something is mentioned in its description, which turns out to perform better than the rest of the features from this family.', 'We can see that the has twitter feature is less effective than has wiki for factuality, which makes sense given that Twitter is less regulated than Wikipedia.', 'Note that the counts features yield reasonable performance, indicating that information about activity (e.g., number of statuses) and social connectivity (e.g., number of followers) is useful.', 'Overall, the TWITTER features seem to complement each other, as their union yields the best performance on factuality.', 'The URL features are better used for factuality rather than bias prediction.', 'This is mainly due to the nature of these features, which are aimed at detecting phishing websites, as we mentioned in Section 3.', 'Overall, this feature family yields slight improvements, suggesting that it can be useful when used together with other features.', 'Finally, the Alexa rank does not improve over the baseline, which suggests that more sophisticated TRAFFIC-related features might be needed.']
[None, ['Traffic', 'URL', 'Twitter', 'Wikipedia', 'Articles'], ['Factuality', 'Bias'], ['Articles', 'Factuality'], ['Articles', 'Majority Baseline', 'Bias', 'MAE'], None, ['Articles', 'title', 'body'], ['Wikipedia', 'Factuality', 'Bias'], ['Wikipedia', 'content'], ['Wikipedia', 'has page', 'Factuality', 'Majority Baseline'], ['Wikipedia'], ['Twitter', 'Factuality', 'Bias'], ['Twitter', 'description'], ['Twitter', 'Wikipedia'], ['counts'], ['Twitter', 'Factuality'], ['URL', 'Factuality', 'Bias'], ['URL'], ['URL'], ['Traffic', 'Majority Baseline', 'Alexa rank']]
1
D18-1392table_1
R2 (variance explained) of residualized factor adaptation (RFA) versus baseline models. Results are shown for 3 hand-picked factors (age, race, education) as well as all factors. RC is residualized control and FA is factor adaptation. Each row is color-coded separately, from red (lowest value) to green (highest values). Bold and * indicate a significant (p < .05) reduction in error over the next best model (bold) and FA (*), respectively, according to paired t-tests.
3
[['Domain', 'Health', 'HD'], ['Domain', 'Health', 'FP'], ['Domain', 'Psych.', 'LS'], ['Domain', 'Econ.', 'IP'], ['Domain', 'Econ.', 'FC'], ['Domain', '-', 'Avg.']]
2
[['Lang.', '-'], ['3 Socio-Demographic Factors', 'Controls Only'], ['3 Socio-Demographic Factors', 'Added-Controls'], ['3 Socio-Demographic Factors', 'RC'], ['3 Socio-Demographic Factors', 'FA'], ['3 Socio-Demographic Factors', 'RFA'], ['All Factors', 'Controls Only'], ['All Factors', 'Added-Controls'], ['All Factors', 'RC'], ['All Factors', 'FA'], ['All Factors', 'RFA']]
[['0.585', '0.423', '0.590', '0.620', '0.628', '0.638', '0.515', '0.597', '0.630', '0.636', '0.657*'], ['0.602', '0.434', '0.606', '0.619', '0.647', '0.647', '0.609', '0.632', '0.657', '0.685', '0.680'], ['0.214', '0.148', '0.219', '0.292', '0.308', '0.338', '0.326', '0.352', '0.376', '0.353', '0.396*'], ['0.245', '0.072', '0.243', '0.266', '0.274', '0.307', '0.240', '0.226', '0.330', '0.344', '0.402*'], ['0.153', '0.128', '0.156', '0.197', '0.218', '0.238', '0.160', '0.161', '0.209', '0.240', '0.276*'], ['0.360', '0.241', '0.362', '0.398', '0.415', '0.434', '0.370', '0.394', '0.440', '0.452', '0.482*']]
column
['R2', 'R2', 'R2', 'R2', 'R2', 'R2', 'R2', 'R2', 'R2', 'R2', 'R2']
['RC', 'FA', 'RFA']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Lang. || -</th> <th>3 Socio-Demographic Factors || Controls Only</th> <th>3 Socio-Demographic Factors || Added-Controls</th> <th>3 Socio-Demographic Factors || RC</th> <th>3 Socio-Demographic Factors || FA</th> <th>3 Socio-Demographic Factors || RFA</th> <th>All Factors || Controls Only</th> <th>All Factors || Added-Controls</th> <th>All Factors || RC</th> <th>All Factors || FA</th> <th>All Factors || RFA</th> </tr> </thead> <tbody> <tr> <td>Domain || Health || HD</td> <td>0.585</td> <td>0.423</td> <td>0.590</td> <td>0.620</td> <td>0.628</td> <td>0.638</td> <td>0.515</td> <td>0.597</td> <td>0.630</td> <td>0.636</td> <td>0.657*</td> </tr> <tr> <td>Domain || Health || FP</td> <td>0.602</td> <td>0.434</td> <td>0.606</td> <td>0.619</td> <td>0.647</td> <td>0.647</td> <td>0.609</td> <td>0.632</td> <td>0.657</td> <td>0.685</td> <td>0.680</td> </tr> <tr> <td>Domain || Psych. || LS</td> <td>0.214</td> <td>0.148</td> <td>0.219</td> <td>0.292</td> <td>0.308</td> <td>0.338</td> <td>0.326</td> <td>0.352</td> <td>0.376</td> <td>0.353</td> <td>0.396*</td> </tr> <tr> <td>Domain || Econ. || IP</td> <td>0.245</td> <td>0.072</td> <td>0.243</td> <td>0.266</td> <td>0.274</td> <td>0.307</td> <td>0.240</td> <td>0.226</td> <td>0.330</td> <td>0.344</td> <td>0.402*</td> </tr> <tr> <td>Domain || Econ. || FC</td> <td>0.153</td> <td>0.128</td> <td>0.156</td> <td>0.197</td> <td>0.218</td> <td>0.238</td> <td>0.160</td> <td>0.161</td> <td>0.209</td> <td>0.240</td> <td>0.276*</td> </tr> <tr> <td>Domain || - || Avg.</td> <td>0.360</td> <td>0.241</td> <td>0.362</td> <td>0.398</td> <td>0.415</td> <td>0.434</td> <td>0.370</td> <td>0.394</td> <td>0.440</td> <td>0.452</td> <td>0.482*</td> </tr> </tbody></table>
Table 1
table_1
D18-1392
6
emnlp2018
Table 1 compares results in terms of variance explained, when using the three hand-picked factors vs. using all 11 extra-linguistic factors (Since past work has also used the Pearson-r metric, Table 2 shows the same results for all factors in terms of Pearson-r). As the table shows, FA outperforms controls only, added-controls, and residualized control. RFA does even better and outperforms FA on both the hand-picked factors and when using the entire set of factors. These results demonstrate the complementary nature of the residualized control and factor adaptation approaches and the benefits of combining them. Even though adding controls directly, as in the “added-controls” column, works better than language-only and controls-only models, it is worse than any other model that exploits both language and extra-linguistic data. This motivates the need for combining different types of features in both an additive (residualized control) and multiplicative (factor adaptation) style. Overall, these results show the power of RFA over the other models. RFA’s improvement over FA was statistically significant for 4 out of 5 outcomes, and 3 out of 5 for residualized control. Recall that added-controls, residualized control, FA, and RFA all have access to the same set of information. The gains of RFA over FA show that RFA’s structure utilizing residualized control is better suited for combining extra-linguistic and language-only features.
[1, 1, 1, 2, 1, 2, 1, 2, 2, 2]
['Table 1 compares results in terms of variance explained, when using the three hand-picked factors vs. using all 11 extra-linguistic factors (Since past work has also used the Pearson-r metric, Table 2 shows the same results for all factors in terms of Pearson-r).', 'As the table shows, FA outperforms controls only, added-controls, and residualized control.', 'RFA does even better and outperforms FA on both the hand-picked factors and when using the entire set of factors.', 'These results demonstrate the complementary nature of the residualized control and factor adaptation approaches and the benefits of combining them.', 'Even though adding controls directly, as in the “added-controls” column, works better than language-only and controls-only models, it is worse than any other model that exploits both language and extra-linguistic data.', 'This motivates the need for combining different types of features in both an additive (residualized control) and multiplicative (factor adaptation) style.', 'Overall, these results show the power of RFA over the other models.', 'RFA’s improvement over FA was statistically significant for 4 out of 5 outcomes, and 3 out of 5 for residualized control.', 'Recall that added-controls, residualized control, FA, and RFA all have access to the same set of information.', 'The gains of RFA over FA show that RFA’s structure utilizing residualized control is better suited for combining extra-linguistic and language-only features.']
[None, ['Controls Only', 'Added-Controls', 'RC', 'FA'], ['RFA', 'FA', '3 Socio-Demographic Factors', 'All Factors'], None, ['Lang.', 'Controls Only', 'Added-Controls'], ['FA', 'RFA'], ['RFA'], ['RC', 'FA', 'RFA'], ['Added-Controls', 'RC', 'FA', 'RFA'], ['RFA']]
1
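Each record keeps `sentences`, the per-sentence class list, and `header_mention` aligned index by index, with None marking a sentence that grounds no header. The sketch below is one rough way such a record could be validated, assuming the list-valued fields have already been parsed into Python objects as displayed; the helper name is illustrative and not part of any corpus tooling.

```python
# Rough consistency check (illustrative only): every header string mentioned
# for a sentence should occur somewhere in the record's row or column headers.
from itertools import chain


def check_header_mentions(row_headers, column_headers, sentences, header_mention):
    """Return (sentence_index, missing_headers) pairs for unmatched mentions."""
    known = set(chain.from_iterable(row_headers)) | set(
        chain.from_iterable(column_headers)
    )
    assert len(sentences) == len(header_mention), "fields must stay aligned"
    problems = []
    for i, mentions in enumerate(header_mention):
        if mentions is None:
            continue  # this sentence grounds no header, nothing to verify
        missing = [m for m in mentions if m not in known]
        if missing:
            problems.append((i, missing))
    return problems
```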
D18-1395table_2
Results: content features
1
[['In-domain'], ['Out-of-domain']]
1
[['Binary'], ['Families'], ['NLI']]
[['91.07', '83.51', '70.26'], ['81.49', '65.37', '35.99']]
column
['accuracy', 'accuracy', 'accuracy']
['In-domain', 'Out-of-domain']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Binary</th> <th>Families</th> <th>NLI</th> </tr> </thead> <tbody> <tr> <td>In-domain</td> <td>91.07</td> <td>83.51</td> <td>70.26</td> </tr> <tr> <td>Out-of-domain</td> <td>81.49</td> <td>65.37</td> <td>35.99</td> </tr> </tbody></table>
Table 2
table_2
D18-1395
6
emnlp2018
Table 2 depicts the results obtained by combining character trigrams, tokens, and spelling features (Sections 3.5.1, 3.5.2). As expected, these content features yield excellent results in-domain, but the accuracy deteriorates out-of-domain, especially in the most challenging task of NLI.
[1, 1]
['Table 2 depicts the results obtained by combining character trigrams, tokens, and spelling features (Sections 3.5.1, 3.5.2).', 'As expected, these content features yield excellent results in-domain, but the accuracy deteriorates out-of-domain, especially in the most challenging task of NLI.']
[None, ['In-domain', 'Out-of-domain', 'NLI']]
1
D18-1395table_4
Results: grammar and spelling features
1
[['In-domain'], ['Out-of-domain']]
1
[['Binary'], ['Families'], ['NLI']]
[['72.93', '55.59', '26.74'], ['70.24', '47.23', '14.15']]
column
['accuracy', 'accuracy', 'accuracy']
['In-domain', 'Out-of-domain']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Binary</th> <th>Families</th> <th>NLI</th> </tr> </thead> <tbody> <tr> <td>In-domain</td> <td>72.93</td> <td>55.59</td> <td>26.74</td> </tr> <tr> <td>Out-of-domain</td> <td>70.24</td> <td>47.23</td> <td>14.15</td> </tr> </tbody></table>
Table 4
table_4
D18-1395
7
emnlp2018
Table 4 shows the results obtained by combining the spelling features with the grammar features (Section 3.5.2). Clearly, these two feature types reflect somewhat different phenomena, as the results are better than using any of the two alone.
[1, 1]
['Table 4 shows the results obtained by combining the spelling features with the grammar features (Section 3.5.2).', 'Clearly, these two feature types reflect somewhat different phenomena, as the results are better than using any of the two alone.']
[None, None]
1
D18-1395table_5
Results: centrality features
1
[['In-domain'], ['Out-of-domain']]
1
[['Binary'], ['Families'], ['NLI']]
[['57.92', '32.39', '5.75'], ['56.29', '30.70', '5.60']]
column
['accuracy', 'accuracy', 'accuracy']
['In-domain', 'Out-of-domain']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Binary</th> <th>Families</th> <th>NLI</th> </tr> </thead> <tbody> <tr> <td>In-domain</td> <td>57.92</td> <td>32.39</td> <td>5.75</td> </tr> <tr> <td>Out-of-domain</td> <td>56.29</td> <td>30.70</td> <td>5.60</td> </tr> </tbody></table>
Table 5
table_5
D18-1395
7
emnlp2018
Table 5 shows the accuracy obtained by all the centrality features (Section 3.5.4), excluding the most popular subreddits. As expected, the contribution of these features is small, and is most evident on the binary task. The signal of the native language reflected by these features is very subtle, but is nonetheless present, as the results are consistently higher than the baseline.
[1, 1, 1]
['Table 5 shows the accuracy obtained by all the centrality features (Section 3.5.4), excluding the most popular subreddits.', 'As expected, the contribution of these features is small, and is most evident on the binary task.', 'The signal of the native language reflected by these features is very subtle, but is nonetheless present, as the results are consistently higher than the baseline.']
[None, ['Binary'], None]
1
D18-1396table_3
BLEU scores. ”0” represents the translation results without teacher forcing during inference, and ”1” represents the translation results with teacher forcing during inference. ∆ represents the BLEU score improvement of teacher forcing over normal translation.
2
[['De-En', 'Left'], ['De-En', 'Right'], ['En-De', 'Left'], ['En-De', 'Right'], ['En-Zh', 'Left'], ['En-Zh', 'Right']]
2
[['left-to-right', '0'], ['left-to-right', '1'], ['left-to-right', 'Δ'], ['right-to-left', '0'], ['right-to-left', '1'], ['right-to-left', 'Δ']]
[['10.17', '10.71', '0.54', '9.41', '10.41', '1.00'], ['8.39', '9.25', '0.86', '7.83', '8.45', '0.62'], ['7.90', '9.43', '1.53', '7.11', '10.71', '3.60'], ['6.60', '8.36', '1.76', '6.45', '8.37', '1.92'], ['7.41', '9.11', '1.70', '7.01', '9.83', '2.82'], ['5.91', '8.55', '2.64', '5.77', '7.54', '1.77']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['1']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>left-to-right || 0</th> <th>left-to-right || 1</th> <th>left-to-right || Δ</th> <th>right-to-left || 0</th> <th>right-to-left || 1</th> <th>right-to-left || Δ</th> </tr> </thead> <tbody> <tr> <td>De-En || Left</td> <td>10.17</td> <td>10.71</td> <td>0.54</td> <td>9.41</td> <td>10.41</td> <td>1.00</td> </tr> <tr> <td>De-En || Right</td> <td>8.39</td> <td>9.25</td> <td>0.86</td> <td>7.83</td> <td>8.45</td> <td>0.62</td> </tr> <tr> <td>En-De || Left</td> <td>7.90</td> <td>9.43</td> <td>1.53</td> <td>7.11</td> <td>10.71</td> <td>3.60</td> </tr> <tr> <td>En-De || Right</td> <td>6.60</td> <td>8.36</td> <td>1.76</td> <td>6.45</td> <td>8.37</td> <td>1.92</td> </tr> <tr> <td>En-Zh || Left</td> <td>7.41</td> <td>9.11</td> <td>1.70</td> <td>7.01</td> <td>9.83</td> <td>2.82</td> </tr> <tr> <td>En-Zh || Right</td> <td>5.91</td> <td>8.55</td> <td>2.64</td> <td>5.77</td> <td>7.54</td> <td>1.77</td> </tr> </tbody></table>
Table 3
table_3
D18-1396
4
emnlp2018
Same as last section, we evaluate the quality of the left and right half of the translation results generated by both the left-to-right and right-to-left models. The results are summarized in Table 3. For comparison, we also include the BLEU scores of normal translation (without teacher forcing). We have several findings from Table 3 as follows:. • Exposure bias exists. The accuracy of both left and right half tokens in the normal translation is lower than that in teacher forcing, which feeds the ground-truth tokens as inputs. This demonstrates that feeding the previously generated tokens (which might be incorrect) in inference indeed hurts translation accuracy. • Error propagation does exist. We find the error is accumulated along the sequential generation of the sentence. Taking En-Zh and the left-to-right NMT model as an example, the BLEU score improvement of the right half (the second half of the generation) of teacher forcing over normal translation is 2.64, which is much larger than the accuracy improvement of the left half (the first half of the generation): 1.70. Similarly, for En-Zh with the right-to-left NMT model, the BLEU score improvement of the left half (the second half of the generation) of teacher forcing over normal translation is 2.82, which is much larger than the accuracy improvement of the right half (the first half of the generation): 1.77. • Other causes exist. Taking En-De translation with the left-to-right model as an example, the accuracy of the left half (9.43) is higher than that of the right half (8.36) when there is no error propagation with teacher forcing. Similar results can be found in other language pairs and models. This suggests that there must be some other causes leading to accuracy drop, which will be studied in the next section.
[2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2, 1, 1, 2]
['Same as last section, we evaluate the quality of the left and right half of the translation results generated by both the left-to-right and right-to-left models.', 'The results are summarized in Table 3.', 'For comparison, we also include the BLEU scores of normal translation (without teacher forcing).', 'We have several findings from Table 3 as follows:.', '• Exposure bias exists.', 'The accuracy of both left and right half tokens in the normal translation is lower than that in teacher forcing, which feeds the ground-truth tokens as inputs.', 'This demonstrates that feeding the previously generated tokens (which might be incorrect) in inference indeed hurts translation accuracy.', '• Error propagation does exist.', 'We find the error is accumulated along the sequential generation of the sentence.', 'Taking En-Zh and the left-to-right NMT model as an example, the BLEU score improvement of the right half (the second half of the generation) of teacher forcing over normal translation is 2.64, which is much larger than the accuracy improvement of the left half (the first half of the generation): 1.70.', 'Similarly, for En-Zh with the right-to-left NMT model, the BLEU score improvement of the left half (the second half of the generation) of teacher forcing over normal translation is 2.82, which is much larger than the accuracy improvement of the right half (the first half of the generation): 1.77.', '• Other causes exist.', 'Taking En-De translation with the left-to-right model as an example, the accuracy of the left half (9.43) is higher than that of the right half (8.36) when there is no error propagation with teacher forcing.', 'Similar results can be found in other language pairs and models.', 'This suggests that there must be some other causes leading to accuracy drop, which will be studied in the next section.']
[['left-to-right', 'right-to-left'], None, ['0'], None, None, ['Left', 'Right', '0', '1'], None, None, None, ['En-Zh', 'Left', 'Right', 'left-to-right', 'Δ'], ['En-Zh', 'Left', 'Right', 'right-to-left', 'Δ'], None, ['En-De', 'Left', 'Right', 'left-to-right', '1'], None, None]
1
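The Δ columns in the record above are derived values (teacher-forcing BLEU minus normal-decoding BLEU), and the accompanying description compares those deltas across the left and right halves. A small sanity check of that arithmetic is sketched below; the `contents` values are copied verbatim from the record, while the variable names and tolerance are illustrative.

```python
# Sanity-check the Δ columns of the D18-1396 table above: each Δ should equal
# the teacher-forced BLEU ("1") minus the normal-decoding BLEU ("0").
contents = [
    ["10.17", "10.71", "0.54", "9.41", "10.41", "1.00"],
    ["8.39", "9.25", "0.86", "7.83", "8.45", "0.62"],
    ["7.90", "9.43", "1.53", "7.11", "10.71", "3.60"],
    ["6.60", "8.36", "1.76", "6.45", "8.37", "1.92"],
    ["7.41", "9.11", "1.70", "7.01", "9.83", "2.82"],
    ["5.91", "8.55", "2.64", "5.77", "7.54", "1.77"],
]

for row in contents:
    vals = [float(v) for v in row]
    for base in (0, 3):  # left-to-right block, then right-to-left block
        delta = vals[base + 1] - vals[base]
        assert abs(delta - vals[base + 2]) < 1e-6, (row, base)
print("all Δ columns match the '1' minus '0' differences")
```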
D18-1397table_2
Results of variance reduction of gradient estimation.
2
[['Training Strategy', 'RL'], ['Training Strategy', 'RL (baseline function)']]
1
[[' En-De'], ['En-Zh'], ['Zh-En']]
[['27.23', '34.47', '24.72'], ['27.25', '34.43', '24.73']]
column
['BLEU', 'BLEU', 'BLEU']
['RL (baseline function)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>En-De</th> <th>En-Zh</th> <th>Zh-En</th> </tr> </thead> <tbody> <tr> <td>Training Strategy || RL</td> <td>27.23</td> <td>34.47</td> <td>24.72</td> </tr> <tr> <td>Training Strategy || RL (baseline function)</td> <td>27.25</td> <td>34.43</td> <td>24.73</td> </tr> </tbody></table>
Table 2
table_2
D18-1397
6
emnlp2018
Table 2 shows that the learning of baseline reward does not help RL training. This contradicts with previous observations (Ranzato et al., 2016), and seems to suggest that the variance of gradient estimation in NMT is not as large as we expected. The reason might be that the probability mass on the target-side language space induced by the NMT model is highly concentrated, making the sampled y representative enough in terms of estimating the expectation. Therefore, for the economic perspective, it is not necessary to add the additional steps of using baseline reward on RL training for NMT.
[1, 2, 2, 2]
['Table 2 shows that the learning of baseline reward does not help RL training.', 'This contradicts with previous observations (Ranzato et al., 2016), and seems to suggest that the variance of gradient estimation in NMT is not as large as we expected.', 'The reason might be that the probability mass on the target-side language space induced by the NMT model is highly concentrated, making the sampled y representative enough in terms of estimating the expectation.', 'Therefore, for the economic perspective, it is not necessary to add the additional steps of using baseline reward on RL training for NMT.']
[['RL (baseline function)'], None, None, ['RL (baseline function)']]
1
D18-1397table_3
Results with source monolingual data. “B” denotes bilingual data, “Ms” denotes source-side monolingual data, “&” denotes data combination.
2
[['[Data] (Objective)', '[B] (MLE)'], ['[Data] (Objective)', '[B] (MLE) + [B] (RL)'], ['[Data] (Objective)', '[B] (MLE) + [Ms] (RL)'], ['[Data] (Objective)', '[B & Ms] (MLE)'], ['[Data] (Objective)', '[B & Ms] (MLE) + [B & Ms] (RL)']]
1
[['Valid'], ['Test']]
[['22.32', '24.29'], ['22.87', '25.04'], ['23.03', '25.22'], ['24.31', '25.31'], ['24.58', '25.60']]
column
['BLEU', 'BLEU']
['[B] (MLE)', '[B] (MLE) + [B] (RL)', '[B] (MLE) + [Ms] (RL)', '[B & Ms] (MLE)', '[B & Ms] (MLE) + [B & Ms] (RL)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Valid</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>[Data] (Objective) || [B] (MLE)</td> <td>22.32</td> <td>24.29</td> </tr> <tr> <td>[Data] (Objective) || [B] (MLE) + [B] (RL)</td> <td>22.87</td> <td>25.04</td> </tr> <tr> <td>[Data] (Objective) || [B] (MLE) + [Ms] (RL)</td> <td>23.03</td> <td>25.22</td> </tr> <tr> <td>[Data] (Objective) || [B &amp; Ms] (MLE)</td> <td>24.31</td> <td>25.31</td> </tr> <tr> <td>[Data] (Objective) || [B &amp; Ms] (MLE) + [B &amp; Ms] (RL)</td> <td>24.58</td> <td>25.60</td> </tr> </tbody></table>
Table 3
table_3
D18-1397
7
emnlp2018
From Table 3 and 4, we have several observations. First, monolingual data helps RL training, improving BLEU score from 25.04 to 25.22 (ρ < 0.05) in Table 3. Second, when we only add monolingual data for RL training, the model achieves similar performance compared to MLE training with bilingual and monolingual data (e.g., 25.15 vs. 25.24 (ρ < 0.05) in Table 4).
[1, 1, 1]
['From Table 3 and 4, we have several observations.', 'First, monolingual data helps RL training, improving BLEU score from 25.04 to 25.22 (ρ < 0.05) in Table 3.', 'Second, when we only add monolingual data for RL training, the model achieves similar performance compared to MLE training with bilingual and monolingual data (e.g., 25.15 vs. 25.24 (ρ < 0.05) in Table 4).']
[None, ['[B] (MLE) + [Ms] (RL)', '[B] (MLE) + [B] (RL)', 'Test'], ['[B] (MLE) + [Ms] (RL)', '[B & Ms] (MLE)']]
1
D18-1397table_5
Results of sequential approach for monolingual data. “B” denotes bilingual data, “Ms” denotes source-side monolingual data and “Mt” denotes targetside monolingual data, “&” denotes data combination.
2
[['[Data] (Objective)', '[B & Ms] (MLE)'], ['[Data] (Objective)', '[B & Ms] (MLE) + [B & Ms] (RL)'], ['[Data] (Objective)', '[B & Ms] (MLE) + [Mt] (RL)'], ['[Data] (Objective)', '[B & Mt] (MLE)'], ['[Data] (Objective)', '[B & Mt] (MLE) + [B & Mt] (RL)'], ['[Data] (Objective)', '[B & Mt] (MLE) + [Ms] (RL)']]
1
[['Valid'], ['Test']]
[['24.31', '25.31'], ['24.58', '25.60'], ['24.61', '25.72'], ['24.14', '25.24'], ['24.41', '25.58'], ['24.75', '25.92']]
column
['BLEU', 'BLEU']
['[B & Mt] (MLE)', '[B & Mt] (MLE) + [Ms] (RL)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Valid</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>[Data] (Objective) || [B &amp; Ms] (MLE)</td> <td>24.31</td> <td>25.31</td> </tr> <tr> <td>[Data] (Objective) || [B &amp; Ms] (MLE) + [B &amp; Ms] (RL)</td> <td>24.58</td> <td>25.60</td> </tr> <tr> <td>[Data] (Objective) || [B &amp; Ms] (MLE) + [Mt] (RL)</td> <td>24.61</td> <td>25.72</td> </tr> <tr> <td>[Data] (Objective) || [B &amp; Mt] (MLE)</td> <td>24.14</td> <td>25.24</td> </tr> <tr> <td>[Data] (Objective) || [B &amp; Mt] (MLE) + [B &amp; Mt] (RL)</td> <td>24.41</td> <td>25.58</td> </tr> <tr> <td>[Data] (Objective) || [B &amp; Mt] (MLE) + [Ms] (RL)</td> <td>24.75</td> <td>25.92</td> </tr> </tbody></table>
Table 5
table_5
D18-1397
7
emnlp2018
With both Source-Side and Target-Side Monolingual Data. We have two approaches to use both source-side and target-side monolingual data, as described in subsection 4.3. The results are reported in Table 5. From Table 5, we can observe that the sequential training of monolingual data can benefit the model performance. Taking the last three rows as an example, the BLEU score of the MLE model trained on the combination of bilingual data and target-side monolingual data is 25.24; based on this model, RL training using the source-side monolingual data further improves the model performance by 0.7 (ρ < 0.01) BLEU points.
[2, 2, 1, 1, 1]
['With both Source-Side and Target-Side Monolingual Data.', 'We have two approaches to use both source-side and target-side monolingual data, as described in subsection 4.3.', 'The results are reported in Table 5.', 'From Table 5, we can observe that the sequential training of monolingual data can benefit the model performance.', 'Taking the last three rows as an example, the BLEU score of the MLE model trained on the combination of bilingual data and target-side monolingual data is 25.24; based on this model, RL training using the source-side monolingual data further improves the model performance by 0.7 (ρ < 0.01) BLEU points.']
[None, None, None, None, ['[B & Mt] (MLE)', '[B & Mt] (MLE) + [Ms] (RL)', 'Test']]
1
D18-1399table_3
Results of the proposed method in comparison to supervised systems (BLEU). Transformer results reported by Vaswani et al. (2017). SMT variants are incremental (e.g. 2nd includes 1st). Refer to the text for more details.
2
[['Supervised', 'NMT (transformer)'], ['Supervised', 'WMT best'], ['Supervised', 'Supervised SMT (europarl)'], ['Supervised', '+ w/o lexical reord.'], ['Supervised', '+ constrained vocab.'], ['Supervised', '+ unsup. tuning'], ['Unsup.', 'Proposed system']]
2
[['WMT-14', 'FR-EN'], ['WMT-14', 'EN-FR'], ['WMT-14', 'DE-EN'], ['WMT-14', 'EN-DE'], ['WMT-16', 'DE-EN'], ['WMT-16', 'EN-DE']]
[['-', '41.8', '-', '28.4', '-', '-'], ['35.0', '35.8', '29.0', '20.6', '40.2', '34.2'], ['30.61', '30.82', '20.83', '16.60', '26.38', '22.12'], ['30.54', '30.33', '20.37', '16.34', '25.99', '22.20'], ['30.04', '30.10', '19.91', '16.32', '25.66', '21.53'], ['29.32', '29.46', '17.75', '15.45', '23.35', '19.86'], ['25.87', '26.22', '17.43', '14.08', '23.05', '18.23']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['Proposed system']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WMT-14 || FR-EN</th> <th>WMT-14 || EN-FR</th> <th>WMT-14 || DE-EN</th> <th>WMT-14 || EN-DE</th> <th>WMT-16 || DE-EN</th> <th>WMT-16 || EN-DE</th> </tr> </thead> <tbody> <tr> <td>Supervised || NMT (transformer)</td> <td>-</td> <td>41.8</td> <td>-</td> <td>28.4</td> <td>-</td> <td>-</td> </tr> <tr> <td>Supervised || WMT best</td> <td>35.0</td> <td>35.8</td> <td>29.0</td> <td>20.6</td> <td>40.2</td> <td>34.2</td> </tr> <tr> <td>Supervised || Supervised SMT (europarl)</td> <td>30.61</td> <td>30.82</td> <td>20.83</td> <td>16.60</td> <td>26.38</td> <td>22.12</td> </tr> <tr> <td>Supervised || + w/o lexical reord.</td> <td>30.54</td> <td>30.33</td> <td>20.37</td> <td>16.34</td> <td>25.99</td> <td>22.20</td> </tr> <tr> <td>Supervised || + constrained vocab.</td> <td>30.04</td> <td>30.10</td> <td>19.91</td> <td>16.32</td> <td>25.66</td> <td>21.53</td> </tr> <tr> <td>Supervised || + unsup. tuning</td> <td>29.32</td> <td>29.46</td> <td>17.75</td> <td>15.45</td> <td>23.35</td> <td>19.86</td> </tr> <tr> <td>Unsup. || Proposed system</td> <td>25.87</td> <td>26.22</td> <td>17.43</td> <td>14.08</td> <td>23.05</td> <td>18.23</td> </tr> </tbody></table>
Table 3
table_3
D18-1399
7
emnlp2018
6.3 Comparison with supervised systems. So as to put our results into perspective, Table 3 comprises the results of different supervised methods in the same test sets. More concretely, we report the results of the Transformer (Vaswani et al., 2017), an NMT system based on self-attention that is the current state-of-the-art in machine translation, along with the scores obtained by the best performing system in each WMT shared task at the time, and those of a standard phrase-based SMT system trained on Europarl and tuned on newstest2013 using Moses. We also report the effect of removing lexical reordering from the latter as we do in our initial system (Section 4), restricting the vocabulary to the most frequent unigram, bigram and trigrams as we do when training our embeddings (Section 3), and using our unsupervised tuning procedure over a subset of the monolingual corpus (Section 4.2) instead of using standard MERT tuning over newstest2013. Quite surprisingly, our proposed system, trained exclusively on monolingual corpora, is relatively close to a comparable phrase-based SMT system trained on Europarl, with differences below 5 BLEU points in all cases and as little as 2.5 in some. Note that both systems use the exact same language model trained on News Crawl, making them fully comparable in terms of the monolingual corpora they have access to. While more of a baseline than the state-of-the-art, note that Moses+Europarl is widely used as a reference system in machine translation. As such, we think that our results are very encouraging, as they show that our fully unsupervised system is already quite close to this competitive baseline. In addition to that, the results for the constrained variants of this SMT system justify some of the simplifications required by our approach. In particular, removing lexical reordering and constraining the phrase table to the most frequent n-grams, as we do for our initial system, has a relatively small effect, with a drop of less than 1 BLEU point in all cases, and as little as 0.28 in some. Replacing standard MERT tuning with our unsupervised variant does cause a considerable drop in performance, although it is below 2.5 BLEU points even in the worst case, and our unsupervised tuning method is still better than using default weights as reported in Table 2. This shows the importance of tuning in SMT, suggesting that these results could be further improved if one had access to a small parallel corpus for tuning.
[2, 1, 2, 2, 1, 2, 2, 2, 1, 1, 1, 2]
['6.3 Comparison with supervised systems.', 'So as to put our results into perspective, Table 3 comprises the results of different supervised methods in the same test sets.', 'More concretely, we report the results of the Transformer (Vaswani et al., 2017), an NMT system based on self-attention that is the current state-of-the-art in machine translation, along with the scores obtained by the best performing system in each WMT shared task at the time, and those of a standard phrase-based SMT system trained on Europarl and tuned on newstest2013 using Moses.', 'We also report the effect of removing lexical reordering from the latter as we do in our initial system (Section 4), restricting the vocabulary to the most frequent unigram, bigram and trigrams as we do when training our embeddings (Section 3), and using our unsupervised tuning procedure over a subset of the monolingual corpus (Section 4.2) instead of using standard MERT tuning over newstest2013.', 'Quite surprisingly, our proposed system, trained exclusively on monolingual corpora, is relatively close to a comparable phrase-based SMT system trained on Europarl, with differences below 5 BLEU points in all cases and as little as 2.5 in some.', 'Note that both systems use the exact same language model trained on News Crawl, making them fully comparable in terms of the monolingual corpora they have access to.', 'While more of a baseline than the state-of-the-art, note that Moses+Europarl is widely used as a reference system in machine translation.', 'As such, we think that our results are very encouraging, as they show that our fully unsupervised system is already quite close to this competitive baseline.', 'In addition to that, the results for the constrained variants of this SMT system justify some of the simplifications required by our approach.', 'In particular, removing lexical reordering and constraining the phrase table to the most frequent n-grams, as we do for our initial system, has a relatively small effect, with a drop of less than 1 BLEU point in all cases, and as little as 0.28 in some.', 'Replacing standard MERT tuning with our unsupervised variant does cause a considerable drop in performance, although it is below 2.5 BLEU points even in the worst case, and our unsupervised tuning method is still better than using default weights as reported in Table 2.', 'This shows the importance of tuning in SMT, suggesting that these results could be further improved if one had access to a small parallel corpus for tuning.']
[None, None, ['NMT (transformer)'], ['+ w/o lexical reord.', '+ constrained vocab.', '+ unsup. tuning'], ['Proposed system', 'Supervised SMT (europarl)'], ['Proposed system', 'Supervised SMT (europarl)'], None, ['Proposed system'], ['Supervised SMT (europarl)', '+ w/o lexical reord.', '+ constrained vocab.', '+ unsup. tuning'], ['Supervised SMT (europarl)', '+ w/o lexical reord.', '+ constrained vocab.'], ['Supervised SMT (europarl)', '+ unsup. tuning', 'Proposed system'], None]
1
D18-1403table_4
Experimental results for the identification of aspect segments (top) and the retrieval of salient segments (bottom) on OPOSUM’s six product domains and overall (AVG).
2
[['Aspect Extraction (F1)', 'Majority'], ['Aspect Extraction (F1)', 'ABAE'], ['Aspect Extraction (F1)', 'ABAEinit'], ['Aspect Extraction (F1)', 'MATE'], ['Aspect Extraction (F1)', 'MATE+MT'], ['Salience (MAP/P@5)', 'MILNET'], ['Salience (MAP/P@5)', 'ABAEinit'], ['Salience (MAP/P@5)', 'MATE'], ['Salience (MAP/P@5)', 'MATE+MT'], ['Salience (MAP/P@5)', 'MILNET+ABAEinit'], ['Salience (MAP/P@5)', 'MILNET+MATE'], ['Salience (MAP/P@5)', 'MILNET+MATE+MT']]
1
[['L. Bags'], ['B/T H/S'], ['Boots'], ['Keyb/s'], ['TVs'], ['Vac/s'], ['AVG']]
[['37.9', '39.8', '37.1', '43.2', '41.7', '41.6', '40.2'], ['38.1', '37.6', '35.2', '38.6', '39.5', '38.1', '37.9'], ['41.6', '48.5', '41.2', '41.3', '45.7', '40.6', '43.2'], ['46.2', '52.2', '45.6', '43.5', '48.8', '42.3', '46.4'], ['48.6', '54.5', '46.4', '45.3', '51.8', '47.7', '49.1'], ['21.8 / 40.0', '19.8 / 36.7', '17.0 / 39.3', '14.1 / 28.0', '14.3 / 36.0', '14.6 / 31.3', '16.9 / 35.2'], ['19.9 / 48.5', '27.5 / 49.7', '13.8 / 28.1', '19.0 / 44.9', '16.8 / 42.4', '16.1 / 34.0', '18.8 / 41.3'], ['23.0 / 57.1', '30.9 / 50.7', '15.4 / 31.9', '21.0 / 43.1', '18.7 / 44.7', '19.9 / 44.0', '21.5 / 45.2'], ['26.3 / 60.8', '37.5 / 66.7', '17.3 / 33.6', '20.9 / 44.9', '23.6 / 48.0', '22.4 / 43.9', '24.7 / 49.6'], ['27.1 / 56.0', '33.5 / 66.5', '19.3 / 34.8', '22.4 / 51.7', '19.0 / 43.7', '20.8 / 43.5', '23.7 / 49.4'], ['28.2 / 54.7', '36.0 / 66.5', '21.7 / 39.3', '24.0 / 52.0', '20.8 / 46.1', '23.5 / 49.3', '25.7 / 51.3'], ['32.1 / 69.2', '40.0 / 74.7', '23.3 / 40.4', '24.8 / 56.4', '23.8 / 52.8', '26.0 / 53.1', '28.3 / 57.8']]
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['MATE', 'MATE+MT', 'MILNET+ABAEinit', 'MILNET+MATE', 'MILNET+MATE+MT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>L. Bags</th> <th>B/T H/S</th> <th>Boots</th> <th>Keyb/s</th> <th>TVs</th> <th>Vac/s</th> <th>AVG</th> </tr> </thead> <tbody> <tr> <td>Aspect Extraction (F1) || Majority</td> <td>37.9</td> <td>39.8</td> <td>37.1</td> <td>43.2</td> <td>41.7</td> <td>41.6</td> <td>40.2</td> </tr> <tr> <td>Aspect Extraction (F1) || ABAE</td> <td>38.1</td> <td>37.6</td> <td>35.2</td> <td>38.6</td> <td>39.5</td> <td>38.1</td> <td>37.9</td> </tr> <tr> <td>Aspect Extraction (F1) || ABAEinit</td> <td>41.6</td> <td>48.5</td> <td>41.2</td> <td>41.3</td> <td>45.7</td> <td>40.6</td> <td>43.2</td> </tr> <tr> <td>Aspect Extraction (F1) || MATE</td> <td>46.2</td> <td>52.2</td> <td>45.6</td> <td>43.5</td> <td>48.8</td> <td>42.3</td> <td>46.4</td> </tr> <tr> <td>Aspect Extraction (F1) || MATE+MT</td> <td>48.6</td> <td>54.5</td> <td>46.4</td> <td>45.3</td> <td>51.8</td> <td>47.7</td> <td>49.1</td> </tr> <tr> <td>Salience (MAP/P@5) || MILNET</td> <td>21.8 / 40.0</td> <td>19.8 / 36.7</td> <td>17.0 / 39.3</td> <td>14.1 / 28.0</td> <td>14.3 / 36.0</td> <td>14.6 / 31.3</td> <td>16.9 / 35.2</td> </tr> <tr> <td>Salience (MAP/P@5) || ABAEinit</td> <td>19.9 / 48.5</td> <td>27.5 / 49.7</td> <td>13.8 / 28.1</td> <td>19.0 / 44.9</td> <td>16.8 / 42.4</td> <td>16.1 / 34.0</td> <td>18.8 / 41.3</td> </tr> <tr> <td>Salience (MAP/P@5) || MATE</td> <td>23.0 / 57.1</td> <td>30.9 / 50.7</td> <td>15.4 / 31.9</td> <td>21.0 / 43.1</td> <td>18.7 / 44.7</td> <td>19.9 / 44.0</td> <td>21.5 / 45.2</td> </tr> <tr> <td>Salience (MAP/P@5) || MATE+MT</td> <td>26.3 / 60.8</td> <td>37.5 / 66.7</td> <td>17.3 / 33.6</td> <td>20.9 / 44.9</td> <td>23.6 / 48.0</td> <td>22.4 / 43.9</td> <td>24.7 / 49.6</td> </tr> <tr> <td>Salience (MAP/P@5) || MILNET+ABAEinit</td> <td>27.1 / 56.0</td> <td>33.5 / 66.5</td> <td>19.3 / 34.8</td> <td>22.4 / 51.7</td> <td>19.0 / 43.7</td> <td>20.8 / 43.5</td> <td>23.7 / 49.4</td> </tr> <tr> <td>Salience (MAP/P@5) || MILNET+MATE</td> <td>28.2 / 54.7</td> <td>36.0 / 66.5</td> <td>21.7 / 39.3</td> <td>24.0 / 52.0</td> <td>20.8 / 46.1</td> <td>23.5 / 49.3</td> <td>25.7 / 51.3</td> </tr> <tr> <td>Salience (MAP/P@5) || MILNET+MATE+MT</td> <td>32.1 / 69.2</td> <td>40.0 / 74.7</td> <td>23.3 / 40.4</td> <td>24.8 / 56.4</td> <td>23.8 / 52.8</td> <td>26.0 / 53.1</td> <td>28.3 / 57.8</td> </tr> </tbody></table>
Table 4
table_4
D18-1403
7
emnlp2018
Table 4 (top) reports the results using micro-averaged F1. Our models outperform both variants of ABAE across domains. ABAEinit improves upon the vanilla model, affirming that informed aspect initialization can facilitate the task. The richer multi-seed representation of MATE, however, helps our model achieve a 3.2% increase in F1. Further improvements are gained by the multi-task model, which boosts performance by 2.7%. Results are shown in Table 4 (bottom). The combined use of polarity and aspect information improves the retrieval of salient opinions across domains, as all model variants that use our salience formula of Equation (12) outperform the MILNET and aspect-only baselines. When comparing between aspect-based alternatives, we observe that the extraction accuracy correlates with the quality of aspect prediction. In particular, ranking using MILNET+MATE+MT gives best results, with a 2.6% increase in MAP against MILNET+MATE and 4.6% against MILNET+ABAEinit. The trend persists even when MILNET polarities are ignored, although the quality of rankings is worse in this case. Opinion Summaries: We now turn to the summarization task itself, where we compare our best performing model (MILNET+MATE+MT), with and without a redundancy filter (RD), against the following methods: a baseline that selects segments randomly; a Lead baseline that only selects the leading segments from each review; SumBasic, a generic frequency-based extractive summarizer (Nenkova and Vanderwende, 2005); LexRank, a generic graph-based extractive summarizer (Erkan and Radev, 2004); Opinosis, a graph-based abstractive summarizer that is designed for opinion summarization (Ganesan et al., 2010).
[1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 2]
['Table 4 (top) reports the results using micro-averaged F1.', 'Our models outperform both variants of ABAE across domains.', 'ABAEinit improves upon the vanilla model, affirming that informed aspect initialization can facilitate the task.', 'The richer multi-seed representation of MATE, however, helps our model achieve a 3.2% increase in F1.', 'Further improvements are gained by the multi-task model, which boosts performance by 2.7%.', 'Results are shown in Table 4 (bottom).', 'The combined use of polarity and aspect information improves the retrieval of salient opinions across domains, as all model variants that use our salience formula of Equation (12) outperform the MILNET and aspect-only baselines.', 'When comparing between aspect-based alternatives, we observe that the extraction accuracy correlates with the quality of aspect prediction.', 'In particular, ranking using MILNET+MATE+MT gives best results, with a 2.6% increase in MAP against MILNET+MATE and 4.6% against MILNET+ABAEinit.', 'The trend persists even when MILNET polarities are ignored, although the quality of rankings is worse in this case.', 'Opinion Summaries: We now turn to the summarization task itself, where we compare our best performing model (MILNET+MATE+MT), with and without a redundancy filter (RD), against the following methods: a baseline that selects segments randomly; a Lead baseline that only selects the leading segments from each review; SumBasic, a generic frequency-based extractive summarizer (Nenkova and Vanderwende, 2005); LexRank, a generic graph-based extractive summarizer (Erkan and Radev, 2004); Opinosis, a graph-based abstractive summarizer that is designed for opinion summarization (Ganesan et al., 2010).']
[['Aspect Extraction (F1)'], ['ABAE', 'ABAEinit', 'MATE'], ['ABAE', 'ABAEinit'], ['ABAEinit', 'MATE', 'AVG'], ['MATE', 'MATE+MT', 'AVG'], ['Salience (MAP/P@5)'], ['MILNET+ABAEinit', 'MILNET+MATE', 'MILNET+MATE+MT'], None, ['MILNET+MATE+MT', 'MILNET+ABAEinit', 'MILNET+MATE', 'AVG'], ['MILNET'], ['MILNET+MATE+MT']]
1
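The D18-1403 table_4 record above evaluates aspect extraction with micro-averaged F1 and opinion salience with MAP/P@5. As a reading aid only, here is a minimal sketch of micro-averaged F1 over per-segment aspect predictions; the labels and the `micro_f1` helper are hypothetical and are not taken from the paper or from the dataset.

```python
def micro_f1(gold, pred, classes):
    """Micro-averaged F1: pool TP/FP/FN over all aspect classes, then compute P, R, F1."""
    tp = fp = fn = 0
    for c in classes:
        for g, p in zip(gold, pred):
            if p == c and g == c:
                tp += 1
            elif p == c and g != c:
                fp += 1
            elif p != c and g == c:
                fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical per-segment gold and predicted aspect labels for one product domain.
# With exactly one label per segment, micro-F1 reduces to plain accuracy.
gold = ["looks", "price", "general", "quality", "price"]
pred = ["looks", "general", "general", "quality", "price"]
print(round(micro_f1(gold, pred, classes={"looks", "price", "quality", "general"}), 3))  # 0.8
```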
D18-1403table_5
Summarization results on OPOSUM.
2
[['Summarization', 'Random'], ['Summarization', 'Lead'], ['Summarization', 'SumBasic'], ['Summarization', 'LexRank'], ['Summarization', 'Opinosis'], ['Summarization', 'Opinosis+MATE+MT'], ['Summarization', 'MILNET+MATE+MT'], ['Summarization', 'MILNET+MATE+MT+RD'], ['Summarization', 'Inter-annotator Agreement']]
1
[['ROUGE-1'], ['ROUGE-2'], ['ROUGE-L']]
[['35.1', '11.3', '34.3'], ['35.5', '15.2', '34.8'], ['34.0', '11.2', '32.6'], ['37.7', '14.1', '36.6'], ['36.8', '14.3', '35.7'], ['38.7', '15.8', '37.4'], ['43.5', '21.7', '42.8'], ['44.1', '21.8', '43.3'], ['54.7', '36.6', '53.9']]
column
['ROUGE-1', 'ROUGE-2', 'ROUGE-L']
['MILNET+MATE+MT', 'MILNET+MATE+MT+RD']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-L</th> </tr> </thead> <tbody> <tr> <td>Summarization || Random</td> <td>35.1</td> <td>11.3</td> <td>34.3</td> </tr> <tr> <td>Summarization || Lead</td> <td>35.5</td> <td>15.2</td> <td>34.8</td> </tr> <tr> <td>Summarization || SumBasic</td> <td>34.0</td> <td>11.2</td> <td>32.6</td> </tr> <tr> <td>Summarization || LexRank</td> <td>37.7</td> <td>14.1</td> <td>36.6</td> </tr> <tr> <td>Summarization || Opinosis</td> <td>36.8</td> <td>14.3</td> <td>35.7</td> </tr> <tr> <td>Summarization || Opinosis+MATE+MT</td> <td>38.7</td> <td>15.8</td> <td>37.4</td> </tr> <tr> <td>Summarization || MILNET+MATE+MT</td> <td>43.5</td> <td>21.7</td> <td>42.8</td> </tr> <tr> <td>Summarization || MILNET+MATE+MT+RD</td> <td>44.1</td> <td>21.8</td> <td>43.3</td> </tr> <tr> <td>Summarization || Inter-annotator Agreement</td> <td>54.7</td> <td>36.6</td> <td>53.9</td> </tr> </tbody></table>
Table 5
table_5
D18-1403
8
emnlp2018
Table 5 presents ROUGE-1, ROUGE-2 and ROUGE-L F1 scores, averaged across domains. Our model (MILNET+MATE+MT) significantly outperforms all comparison systems (p < 0.05; paired bootstrap resampling; Koehn 2004), whilst using a redundancy filter slightly improves performance. Assisting Opinosis with aspect predictions is beneficial, however, it remains significantly inferior to our model (see the supplementary material for additional results).
[1, 1, 1]
['Table 5 presents ROUGE-1, ROUGE-2 and ROUGE-L F1 scores, averaged across domains.', 'Our model (MILNET+MATE+MT) significantly outperforms all comparison systems (p < 0.05; paired bootstrap resampling; Koehn 2004), whilst using a redundancy filter slightly improves performance.', 'Assisting Opinosis with aspect predictions is beneficial, however, it remains significantly inferior to our model (see the supplementary material for additional results).']
[['ROUGE-1', 'ROUGE-2', 'ROUGE-L'], ['MILNET+MATE+MT'], ['MILNET+MATE+MT', 'Opinosis+MATE+MT']]
1
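The D18-1403 table_5 record above reports significance via paired bootstrap resampling (Koehn, 2004). Below is a minimal sketch of that test, assuming we already have one score per test item for each system (the per-summary ROUGE-L values shown are hypothetical; the actual OPOSUM summaries are not reproduced here).

```python
import random

def paired_bootstrap(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Fraction of bootstrap resamples in which system A does NOT beat system B.

    scores_a / scores_b hold per-instance scores for the same test items, in the same order.
    A small value (e.g. < 0.05) is read as A being significantly better than B.
    """
    rng = random.Random(seed)
    n = len(scores_a)
    worse = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]          # resample items with replacement
        mean_a = sum(scores_a[i] for i in idx) / n
        mean_b = sum(scores_b[i] for i in idx) / n
        if mean_a <= mean_b:
            worse += 1
    return worse / n_resamples

# Hypothetical per-summary ROUGE-L scores for two systems on the same test set.
ours = [0.44, 0.41, 0.47, 0.39, 0.45, 0.43, 0.48, 0.40]
base = [0.37, 0.40, 0.42, 0.35, 0.38, 0.41, 0.39, 0.36]
print(paired_bootstrap(ours, base))
```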
D18-1410table_2
Substitution Ranking evaluation on English Lexical Simplification shared-task of SemEval 2012. P@1 and Pearson correlation of our neural readability ranking (NRR) model compared to the state-of-the-art neural model (Paetzold and Specia, 2017) and other methods. ∗ indicates statistical significance (p < 0.05) compared to the best performing baseline (Paetzold and Specia, 2017).
1
[['Biran et al. (2011)'], ['Jauhar & Specia (2012)'], ['Kajiwara et al. (2013)'], ['Horn et al. (2014)'], ['Glavaš & Štajner (2015)'], ['Boundary Ranker'], ['Paetzold & Specia (2017)'], ['NRRall'], ['NRRall+binning'], ['NRRall+binning+WC']]
1
[['P@1'], ['Pearson']]
[['51.3', '0.505'], ['60.2', '0.575'], ['60.4', '0.649'], ['63.9', '0.673'], ['63.2', '0.644'], ['65.3', '0.677'], ['65.6', '0.679'], ['65.4', '0.682'], ['66.6', '0.702*'], ['67.3*', '0.714*']]
column
['P@1', 'Pearson']
['NRRall', 'NRRall+binning', 'NRRall+binning+WC']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P@1</th> <th>Pearson</th> </tr> </thead> <tbody> <tr> <td>Biran et al. (2011)</td> <td>51.3</td> <td>0.505</td> </tr> <tr> <td>Jauhar &amp; Specia (2012)</td> <td>60.2</td> <td>0.575</td> </tr> <tr> <td>Kajiwara et al. (2013)</td> <td>60.4</td> <td>0.649</td> </tr> <tr> <td>Horn et al. (2014)</td> <td>63.9</td> <td>0.673</td> </tr> <tr> <td>Glavaš &amp; Štajner (2015)</td> <td>63.2</td> <td>0.644</td> </tr> <tr> <td>Boundary Ranker</td> <td>65.3</td> <td>0.677</td> </tr> <tr> <td>Paetzold &amp; Specia (2017)</td> <td>65.6</td> <td>0.679</td> </tr> <tr> <td>NRRall</td> <td>65.4</td> <td>0.682</td> </tr> <tr> <td>NRRall+binning</td> <td>66.6</td> <td>0.702*</td> </tr> <tr> <td>NRRall+binning+WC</td> <td>67.3*</td> <td>0.714*</td> </tr> </tbody></table>
Table 2
table_2
D18-1410
5
emnlp2018
Results. Table 2 compares the performances of our NRR model to the state-of-the-art results reported by Paetzold and Specia (2017). We use precision of the simplest candidate (P@1) and Pearson correlation to measure performance. P@1 is equivalent to TRank (Specia et al., 2012), the official metric for the SemEval 2012 English Lexical Simplification task. While P@1 captures the practical utility of an approach, Pearson correlation indicates how well the system’s rankings correlate with human judgment. We train our NRR model with all the features (NRRall) mentioned in §3.2 except the word2vec embedding features to avoid overfitting on the small training set. Our full model (NRRall+binning+WC) exhibits a statistically significant improvement over the state-of-the-art for both measures. We use paired bootstrap test (Berg-Kirkpatrick et al., 2012; Efron and Tibshirani, 1993) as it can be applied to any performance metric. We also conducted ablation experiments to show the effectiveness of the Gaussian-based feature vectorization layer (+binning) and the word-complexity lexicon (+WC).
[2, 1, 2, 2, 2, 2, 1, 2, 1]
['Results.', 'Table 2 compares the performances of our NRR model to the state-of-the-art results reported by Paetzold and Specia (2017).', 'We use precision of the simplest candidate (P@1) and Pearson correlation to measure performance.', 'P@1 is equivalent to TRank (Specia et al., 2012), the official metric for the SemEval 2012 English Lexical Simplification task.', 'While P@1 captures the practical utility of an approach, Pearson correlation indicates how well the system’s rankings correlate with human judgment.', 'We train our NRR model with all the features (NRRall) mentioned in §3.2 except the word2vec embedding features to avoid overfitting on the small training set.', 'Our full model (NRRall+binning+WC) exhibits a statistically significant improvement over the state-of-the-art for both measures.', 'We use paired bootstrap test (Berg-Kirkpatrick et al., 2012; Efron and Tibshirani, 1993) as it can be applied to any performance metric.', 'We also conducted ablation experiments to show the effectiveness of the Gaussianbased feature vectorization layer (+binning) and the word-complexity lexicon (+W C).']
[None, None, ['P@1', 'Pearson'], ['P@1'], ['Pearson'], ['NRRall', 'NRRall+binning', 'NRRall+binning+WC'], ['NRRall+binning+WC', 'P@1', 'Pearson'], None, ['NRRall+binning', 'NRRall+binning+WC']]
1
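The D18-1410 table_2 record above scores substitution ranking with P@1 (is the system's top-ranked candidate the gold simplest one?) and Pearson correlation against human rankings. Here is a rough sketch of both, assuming hypothetical gold simplicity ranks and system complexity scores per instance; the official SemEval-2012 scorer handles ties and TRank details, so this is only an approximation.

```python
from scipy.stats import pearsonr

# One instance = candidate substitutions with gold simplicity ranks (1 = simplest)
# and the system's predicted complexity scores (lower = simpler). All values hypothetical.
instances = [
    {"gold_rank": {"clear": 1, "lucid": 2, "transparent": 3},
     "score":     {"clear": 0.2, "lucid": 0.5, "transparent": 0.7}},
    {"gold_rank": {"big": 1, "sizable": 2, "voluminous": 3},
     "score":     {"big": 0.4, "sizable": 0.3, "voluminous": 0.9}},
]

hits = 0
gold_all, score_all = [], []
for inst in instances:
    top = min(inst["score"], key=inst["score"].get)   # the candidate the system ranks simplest
    hits += inst["gold_rank"][top] == 1               # P@1 counts agreement with the gold simplest
    for cand, rank in inst["gold_rank"].items():
        gold_all.append(rank)
        score_all.append(inst["score"][cand])

p_at_1 = hits / len(instances)
r, _ = pearsonr(gold_all, score_all)                  # correlation of scores with gold ranks
print(f"P@1 = {p_at_1:.2f}, Pearson r = {r:.3f}")
```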
D18-1410table_4
Cross-validation accuracy and precision of our neural readability ranking (NRR) model used to create SimplePPDB++, in comparison to the SimplePPDB and other baselines. P+1 stands for the precision of ‘simplifying’ paraphrase rules and P−1 for the precision of ‘complicating’ rules. * indicates statistical significance (p < 0.05) compared to the best performing baseline (Pavlick and Callison-Burch, 2016).
1
[['Google Ngram Frequency'], ['Number of Syllables'], ['Character & Word Length'], ['W2V'], ['SimplePPDB'], ['NRRall'], ['NRRall+binning'], ['NRRall+binning+WC']]
1
[['Acc.'], ['P+1'], ['P-1']]
[['49.4', '53.7', '54.0'], ['50.1', '53.8', '53.3'], ['56.2', '55.7', '56.1'], ['60.4', '54.9', '53.1'], ['62.1', '57.6', '57.8'], ['59.4', '61.8', '57.7'], ['64.1', '62.1', '59.8'], ['65.3*', '65.0*', '61.8*']]
column
['Acc.', 'P+1', 'P-1']
['NRRall', 'NRRall+binning', 'NRRall+binning+WC']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.</th> <th>P+1</th> <th>P-1</th> </tr> </thead> <tbody> <tr> <td>Google Ngram Frequency</td> <td>49.4</td> <td>53.7</td> <td>54.0</td> </tr> <tr> <td>Number of Syllables</td> <td>50.1</td> <td>53.8</td> <td>53.3</td> </tr> <tr> <td>Character &amp; Word Length</td> <td>56.2</td> <td>55.7</td> <td>56.1</td> </tr> <tr> <td>W2V</td> <td>60.4</td> <td>54.9</td> <td>53.1</td> </tr> <tr> <td>SimplePPDB</td> <td>62.1</td> <td>57.6</td> <td>57.8</td> </tr> <tr> <td>NRRall</td> <td>59.4</td> <td>61.8</td> <td>57.7</td> </tr> <tr> <td>NRRall+binning</td> <td>64.1</td> <td>62.1</td> <td>59.8</td> </tr> <tr> <td>NRRall+binning+WC</td> <td>65.3*</td> <td>65.0*</td> <td>61.8*</td> </tr> </tbody></table>
Table 4
table_4
D18-1410
6
emnlp2018
Results. Following the evaluation setup in previous work (Pavlick and Callison-Burch, 2016), we compare accuracy and precision by 10-fold cross-validation. Folds are constructed in such a way that the training and test vocabularies are disjoint. Table 4 shows the performance of our model compared to SimplePPDB and other baselines. We use all the features (NRRall) in §3.2 except for the context features as we are classifying paraphrase rules in PPDB that come with no context. SimplePPDB used the same features plus additional discrete features, such as POS tags, character unigrams and bigrams. Our neural readability ranking model alone with Gaussian binning (NRRall+binning) achieves better accuracy and precision while using fewer features. Leveraging the lexicon (NRRall+binning+WC) shows statistically significant improvements over SimplePPDB rankings based on the paired bootstrap test. The accuracy increases by 3.2 points, the precision for ‘simplifying’ class improves by 7.4 points and the precision for ‘complicating’ class improves by 4.0 points.
[2, 2, 2, 1, 2, 2, 1, 1, 1]
['Results.', 'Following the evaluation setup in previous work (Pavlick and Callison-Burch, 2016), we compare accuracy and precision by 10-fold cross-validation.', 'Folds are constructed in such a way that the training and test vocabularies are disjoint.', 'Table 4 shows the performance of our model compared to SimplePPDB and other baselines.', 'We use all the features (NRRall) in §3.2 except for the context features as we are classifying paraphrase rules in PPDB that come with no context.', 'SimplePPDB used the same features plus additional discrete features, such as POS tags, character unigrams and bigrams.', 'Our neural readability ranking model alone with Gaussian binning (NRRall+binning) achieves better accuracy and precision while using fewer features.', 'Leveraging the lexicon (NRRall+binning+WC) shows statistically significant improvements over SimplePPDB rankings based on the paired bootstrap test.', 'The accuracy increases by 3.2 points, the precision for ‘simplifying’ class improves by 7.4 points and the precision for ‘complicating’ class improves by 4.0 points.']
[None, ['Acc.', 'P+1', 'P-1'], None, None, ['NRRall'], ['SimplePPDB'], ['NRRall', 'NRRall+binning', 'Acc.', 'P+1', 'P-1'], ['NRRall+binning+WC', 'SimplePPDB'], ['NRRall+binning+WC', 'SimplePPDB', 'Acc.', 'P+1', 'P-1']]
1
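The D18-1410 table_4 record above describes 10-fold cross-validation in which training and test vocabularies are disjoint. One simple way to approximate that constraint is sketched below with hypothetical paraphrase rules: assign every word to a fold first, then keep a rule in a fold only if all of its words land in that fold; the authors' exact construction may well differ.

```python
import hashlib

def word_fold(word, k):
    """Deterministically map a word to one of k folds."""
    return int(hashlib.md5(word.encode("utf-8")).hexdigest(), 16) % k

def split_rules(rules, k=10):
    """Keep a rule only if every word it contains maps to the same fold,
    so training and test vocabularies stay disjoint across folds."""
    folds = [[] for _ in range(k)]
    for lhs, rhs in rules:
        words = lhs.split() + rhs.split()
        fold_ids = {word_fold(w, k) for w in words}
        if len(fold_ids) == 1:            # rules straddling folds are dropped in this sketch
            folds[fold_ids.pop()].append((lhs, rhs))
    return folds

# Hypothetical paraphrase rules (complex side -> simpler side).
rules = [("utilize", "use"), ("commence", "start"), ("a large number of", "many")]
folds = split_rules(rules, k=10)
print([len(f) for f in folds])
```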
D18-1410table_5
Substitution Generation evaluation with Mean Average Precision, Precision@1 and the average number of paraphrases generated per target for each method. n is the number of target complex words/phrases for which the model generated > 0 candidates. Kauchak† has an advantage on MAP because it generates the least number of candidates. Glavaš is marked as ‘-’ because it can technically generate as many words/phrases as are in the vocabulary.
1
[['Glavas'], ['WordNet'], ['Kauchak'], ['SimplePPDB'], ['SimplePPDB++']]
1
[['#PPs'], ['MAP'], ['P@1']]
[['-', '22.8', '13.5'], ['6.63', '62.2', '50.6'], ['4.39', '76.4†', '68.9'], ['8.77', '67.8', '78.0'], ['9.52', '69.1', '80.2']]
column
['#PPs', 'MAP', 'P@1']
['SimplePPDB++']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>#PPs</th> <th>MAP</th> <th>P@1</th> </tr> </thead> <tbody> <tr> <td>Glavas</td> <td>-</td> <td>22.8</td> <td>13.5</td> </tr> <tr> <td>WordNet</td> <td>6.63</td> <td>62.2</td> <td>50.6</td> </tr> <tr> <td>Kauchak</td> <td>4.39</td> <td>76.4†</td> <td>68.9</td> </tr> <tr> <td>SimplePPDB</td> <td>8.77</td> <td>67.8</td> <td>78.0</td> </tr> <tr> <td>SimplePPDB++</td> <td>9.52</td> <td>69.1</td> <td>80.2</td> </tr> </tbody></table>
Table 5
table_5
D18-1410
7
emnlp2018
Results. Table 5 shows the comparison of SimplePPDB and SimplePPDB++ on the number of substitutions generated for each target, the mean average precision and precision@1 for the final ranked list of candidate substitutions. This is a fair and direct comparison between SimplePPDB++ and SimplePPDB, as both methods have access to the same paraphrase rules in PPDB as potential candidates. The better NRR model we used in creating SimplePPDB++ allows improved selections and rankings of simplifying paraphrase rules than the previous version of SimplePPDB. As an additional reference, we also include the measurements for the other existing methods based on (Pavlick and Callison-Burch, 2016), which, by evaluation design, are focused on the comparison of precision while PPDB has full coverage.
[2, 1, 1, 1, 2]
['Results.', 'Table 5 shows the comparison of SimplePPDB and SimplePPDB++ on the number of substitutions generated for each target, the mean average precision and precision@1 for the final ranked list of candidate substitutions.', 'This is a fair and direct comparison between SimplePPDB++ and SimplePPDB, as both methods have access to the same paraphrase rules in PPDB as potential candidates.', 'The better NRR model we used in creating SimplePPDB++ allows improved selections and rankings of simplifying paraphrase rules than the previous version of SimplePPDB.', 'As an additional reference, we also include the measurements for the other existing methods based on (Pavlick and Callison-Burch, 2016), which, by evaluation design, are focused on the comparison of precision while PPDB has full coverage.']
[None, ['SimplePPDB', 'SimplePPDB++', '#PPs', 'MAP', 'P@1'], ['SimplePPDB++', 'SimplePPDB'], ['SimplePPDB++'], None]
1
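MAP and P@1 in the D18-1410 table_5 record above score, for each complex target, the ranked list of generated substitutions against a gold set of acceptable simplifications. The sketch below uses one common formulation of average precision (normalizing by the size of the gold set) and entirely hypothetical targets and gold sets; it is not the evaluation script used in the paper.

```python
def average_precision(ranked, gold):
    """Average of precision values at every rank where a gold substitution appears."""
    hits, precisions = 0, []
    for i, cand in enumerate(ranked, start=1):
        if cand in gold:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(gold) if gold else 0.0

# Hypothetical ranked system outputs and gold substitution sets per complex target.
system_output = {
    "commence": ["start", "begin", "launch"],
    "purchase": ["acquire", "buy", "get"],
}
gold_sets = {
    "commence": {"start", "begin"},
    "purchase": {"buy", "get"},
}

ap_scores = [average_precision(system_output[t], gold_sets[t]) for t in system_output]
p_at_1 = sum(system_output[t][0] in gold_sets[t] for t in system_output) / len(system_output)
print(f"MAP = {sum(ap_scores) / len(ap_scores):.3f}, P@1 = {p_at_1:.2f}")
```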
D18-1410table_6
Evaluation on two datasets for English complex word identification. Our approaches that utilize the word-complexity lexicon (WC) improve upon the nearest centroid (Yimam et al., 2017) and SV000gg (Paetzold and Specia, 2016b) systems. The best performance figure of each column is denoted in bold typeface and the second best is denoted by an underline.
1
[['Length'], ['Senses'], ['SimpleWiki'], ['NearestCentroid'], ['SV000gg'], ['WC-only'], ['NearestCentroid+WC'], ['SV000gg+WC']]
2
[['CWI SemEval 2016', 'G-score'], ['CWI SemEval 2016', 'F-score'], ['CWI SemEval 2016', 'Accuracy'], ['CWIG3G2 2018', 'G-score'], ['CWIG3G2 2018', 'F-score'], ['CWIG3G2 2018', 'Accuracy']]
[['47.8', '10.7', '33.2', '70.8', '65.9', '67.7'], ['57.9', '12.5', '43.6', '67.7', '62.3', '54.1'], ['69.7', '16.2', '58.3', '73.1', '66.3', '61.6'], ['66.1', '14.8', '53.6', '75.1', '66.6', '76.7'], ['77.3', '24.3', '77.6', '74.9', '73.8', '78.7'], ['68.5', '30.5', '87.7', '71.1', '67.5', '69.8'], ['70.2', '16.6', '61.8', '77.3', '68.8', '78.1'], ['78.1', '26.3', '80.0', '75.4', '74.8', '80.2']]
column
['G-score', 'F-score', 'Accuracy', 'G-score', 'F-score', 'Accuracy']
['WC-only', 'NearestCentroid+WC', 'SV000gg+WC']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CWI SemEval 2016 || G-score</th> <th>CWI SemEval 2016 || F-score</th> <th>CWI SemEval 2016 || Accuracy</th> <th>CWIG3G2 2018 || G-score</th> <th>CWIG3G2 2018 || F-score</th> <th>CWIG3G2 2018 || Accuracy</th> </tr> </thead> <tbody> <tr> <td>Length</td> <td>47.8</td> <td>10.7</td> <td>33.2</td> <td>70.8</td> <td>65.9</td> <td>67.7</td> </tr> <tr> <td>Senses</td> <td>57.9</td> <td>12.5</td> <td>43.6</td> <td>67.7</td> <td>62.3</td> <td>54.1</td> </tr> <tr> <td>SimpleWiki</td> <td>69.7</td> <td>16.2</td> <td>58.3</td> <td>73.1</td> <td>66.3</td> <td>61.6</td> </tr> <tr> <td>NearestCentroid</td> <td>66.1</td> <td>14.8</td> <td>53.6</td> <td>75.1</td> <td>66.6</td> <td>76.7</td> </tr> <tr> <td>SV000gg</td> <td>77.3</td> <td>24.3</td> <td>77.6</td> <td>74.9</td> <td>73.8</td> <td>78.7</td> </tr> <tr> <td>WC-only</td> <td>68.5</td> <td>30.5</td> <td>87.7</td> <td>71.1</td> <td>67.5</td> <td>69.8</td> </tr> <tr> <td>NearestCentroid+WC</td> <td>70.2</td> <td>16.6</td> <td>61.8</td> <td>77.3</td> <td>68.8</td> <td>78.1</td> </tr> <tr> <td>SV000gg+WC</td> <td>78.1</td> <td>26.3</td> <td>80.0</td> <td>75.4</td> <td>74.8</td> <td>80.2</td> </tr> </tbody></table>
Table 6
table_6
D18-1410
8
emnlp2018
Results. We compare our enhanced approaches (SV000gg+WC and NC+WC) and the lexicon-only approach (WC-only) with the state-of-the-art and baseline threshold-based methods. For measuring performance, we use F-score and accuracy as well as G-score, the harmonic mean of accuracy and recall. G-score is the official metric of the CWI task of SemEval 2016. Table 6 shows that the word-complexity lexicon improves the performance of SV000gg and the nearest centroid classifier in all three metrics. The improvements are statistically significant according to the paired bootstrap test with p < 0.01. The word-complexity lexicon alone (WC-only) performs satisfactorily on the CWIG3G2 dataset, which effectively is a simple table look-up approach with extreme time and space efficiency. For the CWI SemEval 2016 dataset, the WC-only approach gives the best accuracy and F-score, though this can be attributed to the skewed distribution of the dataset (only 5% of the test instances are 'complex').
[2, 1, 2, 2, 1, 2, 1, 1]
['Results.', 'We compare our enhanced approaches (SV000gg+WC and NC+WC) and the lexicon-only approach (WC-only) with the state-of-the-art and baseline threshold-based methods.', 'For measuring performance, we use F-score and accuracy as well as G-score, the harmonic mean of accuracy and recall.', 'G-score is the official metric of the CWI task of SemEval 2016.', 'Table 6 shows that the word-complexity lexicon improves the performance of SV000gg and the nearest centroid classifier in all three metrics.', 'The improvements are statistically significant according to the paired bootstrap test with p < 0.01.', 'The word-complexity lexicon alone (WC-only) performs satisfactorily on the CWIG3G2 dataset, which effectively is a simple table look-up approach with extreme time and space efficiency.', "For the CWI SemEval 2016 dataset, the WC-only approach gives the best accuracy and F-score, though this can be attributed to the skewed distribution of the dataset (only 5% of the test instances are 'complex')."]
[None, ['WC-only', 'NearestCentroid+WC', 'SV000gg+WC'], ['G-score', 'F-score', 'Accuracy'], ['G-score'], ['NearestCentroid+WC', 'SV000gg+WC'], None, ['WC-only', 'CWIG3G2 2018'], ['WC-only', 'CWI SemEval 2016', 'F-score', 'Accuracy']]
1
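The D18-1410 table_6 record above defines G-score as the harmonic mean of accuracy and recall (the official CWI SemEval 2016 metric). Below is a small worked sketch on hypothetical binary complex/simple predictions; only the formula itself is taken from the text.

```python
def cwi_scores(gold, pred, positive="complex"):
    """Accuracy, recall of the 'complex' class, and their harmonic mean (G-score)."""
    correct = sum(g == p for g, p in zip(gold, pred))
    accuracy = correct / len(gold)
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    recall = tp / (tp + fn) if tp + fn else 0.0
    g_score = 2 * accuracy * recall / (accuracy + recall) if accuracy + recall else 0.0
    return accuracy, recall, g_score

# Hypothetical per-word gold labels and system predictions.
gold = ["complex", "simple", "simple", "complex", "simple", "simple"]
pred = ["complex", "simple", "complex", "simple", "simple", "simple"]
print(cwi_scores(gold, pred))  # accuracy ~0.667, recall 0.5, G-score ~0.571
```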
D18-1412table_2
PropBank sSRL results, using gold predicates, on CoNLL 2012 test. For fair comparison, we show only non-ensembled models.
2
[['Model', 'Zhou and Xu (2015)'], ['Model', 'He et al. (2017)'], ['Model', 'He et al. (2018a)'], ['Model', 'Tan et al. (2018)'], ['Model', 'Semi-CRF baseline'], ['Model', '+ common nonterminals']]
1
[['Prec.'], ['Rec.'], ['F1']]
[['-', '-', '81.3'], ['81.7', '81.6', '81.7'], ['83.9', '73.7', '82.1'], ['81.9', '83.6', '82.7'], ['84.8', '81.2', '83.0'], ['85.1', '82.6', '83.8']]
column
['Prec. ', 'Rec.', 'F1']
['Semi-CRF baseline', '+ common nonterminals']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Prec.</th> <th>Rec.</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || Zhou and Xu (2015)</td> <td>-</td> <td>-</td> <td>81.3</td> </tr> <tr> <td>Model || He et al. (2017)</td> <td>81.7</td> <td>81.6</td> <td>81.7</td> </tr> <tr> <td>Model || He et al. (2018a)</td> <td>83.9</td> <td>73.7</td> <td>82.1</td> </tr> <tr> <td>Model || Tan et al. (2018)</td> <td>81.9</td> <td>83.6</td> <td>82.7</td> </tr> <tr> <td>Model || Semi-CRF baseline</td> <td>84.8</td> <td>81.2</td> <td>83.0</td> </tr> <tr> <td>Model || + common nonterminals</td> <td>85.1</td> <td>82.6</td> <td>83.8</td> </tr> </tbody></table>
Table 2
table_2
D18-1412
7
emnlp2018
PropBank SRL. We use the OntoNotes data from the CoNLL shared task in 2012 (Pradhan et al., 2013) for Propbank SRL. Table 2 reports results using gold predicates. Recent competitive systems for PropBank SRL follow the approach of Zhou and Xu (2015), employing deep architectures, and forgoing the use of any syntax. He et al. (2017) improve on those results, and in analysis experiments, show that constraints derived using syntax may further improve performance. Tan et al. (2018) employ a similar approach but use feed-forward networks with self-attention. He et al. (2018a) use a span-based classification to jointly identify and label argument spans. Our syntax-agnostic semi-CRF baseline model improves on prior work (excluding ELMo), showing again the value of global normalization in semantic structure prediction. We obtain further improvement of 0.8 absolute F1 with the best syntactic scaffold from the frame SRL task. This indicates that a syntactic inductive bias is beneficial even when using sophisticated neural architectures. He et al. (2018a) also provide a setup where initialization was done with deep contextualized embeddings, ELMo (Peters et al., 2018), resulting in 85.5 F1 on the OntoNotes test set. The improvements from ELMo are methodologically orthogonal to syntactic scaffolds.
[2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2]
['PropBank SRL.', 'We use the OntoNotes data from the CoNLL shared task in 2012 (Pradhan et al., 2013) for Propbank SRL.', 'Table 2 reports results using gold predicates.', 'Recent competitive systems for PropBank SRL follow the approach of Zhou and Xu (2015), employing deep architectures, and forgoing the use of any syntax.', 'He et al. (2017) improve on those results, and in analysis experiments, show that constraints derived using syntax may further improve performance.', 'Tan et al. (2018) employ a similar approach but use feed-forward networks with self-attention.', 'He et al. (2018a) use a span-based classification to jointly identify and label argument spans.', 'Our syntax-agnostic semi-CRF baseline model improves on prior work (excluding ELMo), showing again the value of global normalization in semantic structure prediction.', 'We obtain further improvement of 0.8 absolute F1 with the best syntactic scaffold from the frame SRL task.', 'This indicates that a syntactic inductive bias is beneficial even when using sophisticated neural architectures.', 'He et al. (2018a) also provide a setup where initialization was done with deep contextualized embeddings, ELMo (Peters et al., 2018), resulting in 85.5 F1 on the OntoNotes test set.', 'The improvements from ELMo are methodologically orthogonal to syntactic scaffolds.']
[None, None, None, ['Zhou and Xu (2015)'], ['He et al. (2017)'], ['Tan et al. (2018)'], ['He et al. (2018a)'], ['Semi-CRF baseline'], ['Semi-CRF baseline', '+ common nonterminals', 'F1'], ['+ common nonterminals'], ['He et al. (2018a)'], None]
1
D18-1414table_9
F-scores of the baseline and the both-retrained models relative to role types on the two data sets. We only list results of the PCFGLA-parser-based system.
2
[['L2', 'Baseline'], ['L2', 'Both retrained'], ['L1', 'Baseline'], ['L1', 'Both retrained']]
1
[['A0'], ['A1'], ['A2'], ['AM']]
[['67.95', '71.21', '51.43', '70.20'], ['70.62', '74.75', '64.29', '72.22'], ['69.49', '79.78', '61.84', '71.74'], ['73.15', '80.90', '63.35', '73.02']]
column
['F-scores', 'F-scores', 'F-scores', 'F-scores']
['Both retrained']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>A0</th> <th>A1</th> <th>A2</th> <th>AM</th> </tr> </thead> <tbody> <tr> <td>L2 || Baseline</td> <td>67.95</td> <td>71.21</td> <td>51.43</td> <td>70.20</td> </tr> <tr> <td>L2 || Both retrained</td> <td>70.62</td> <td>74.75</td> <td>64.29</td> <td>72.22</td> </tr> <tr> <td>L1 || Baseline</td> <td>69.49</td> <td>79.78</td> <td>61.84</td> <td>71.74</td> </tr> <tr> <td>L1 || Both retrained</td> <td>73.15</td> <td>80.90</td> <td>63.35</td> <td>73.02</td> </tr> </tbody></table>
Table 9
table_9
D18-1414
9
emnlp2018
Table 9 further shows F-scores for the baseline and the both-retrained model relative to each role type in detail. Given that the F-scores for both models are equal to 0 on A3 and A4, we just omit this part. From the table we can observe that all the semantic roles achieve significant improvements in performance.
[1, 2, 1]
['Table 9 further shows F-scores for the baseline and the both-retrained model relative to each role type in detail.', 'Given that the F-scores for both models are equal to 0 on A3 and A4, we just omit this part.', 'From the table we can observe that all the semantic roles achieve significant improvements in performance.']
[['Baseline', 'Both retrained'], ['Baseline', 'Both retrained'], ['Both retrained']]
1
D18-1415table_1
Performance of the original system when interacting with different user simulators. LU error means simulating slot errors and intent errors at different rates. Succ.: success rate, Turn: average turns, Reward: average reward.
2
[['LU Error Rate', '0.00'], ['LU Error Rate', '0.05'], ['LU Error Rate', '0.10'], ['LU Error Rate', '0.20']]
2
[['Sim1', 'Succ.'], ['Sim1', 'Turn'], ['Sim1', 'Reward'], ['Sim1', 'Satis.'], ['Sim2', 'Succ.'], ['Sim2', 'Turn'], ['Sim2', 'Reward'], ['Sim2', 'Satis.']]
[['0.962', '13.6', '3.94', '-', '0.901', '13.2', '2.95', '0.57'], ['0.937', '13.7', '3.41', '-', '0.877', '14.4', '2.41', '0.48'], ['0.910', '14.3', '2.65', '-', '0.841', '13.9', '1.41', '0.47'], ['0.845', '15.2', '0.58', '-', '0.784', '14.7', '0.01', '0.47']]
column
['Succ.', 'Turn', 'Reward', 'Satis.', 'Succ.', 'Turn', 'Reward', 'Satis.']
['Sim1', 'Sim2']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sim1 || Succ.</th> <th>Sim1 || Turn</th> <th>Sim1 || Reward</th> <th>Sim1 || Satis.</th> <th>Sim2 || Succ.</th> <th>Sim2 || Turn</th> <th>Sim2 || Reward</th> <th>Sim2 || Satis.</th> </tr> </thead> <tbody> <tr> <td>LU Error Rate || 0.00</td> <td>0.962</td> <td>13.6</td> <td>3.94</td> <td>-</td> <td>0.901</td> <td>13.2</td> <td>2.95</td> <td>0.57</td> </tr> <tr> <td>LU Error Rate || 0.05</td> <td>0.937</td> <td>13.7</td> <td>3.41</td> <td>-</td> <td>0.877</td> <td>14.4</td> <td>2.41</td> <td>0.48</td> </tr> <tr> <td>LU Error Rate || 0.10</td> <td>0.910</td> <td>14.3</td> <td>2.65</td> <td>-</td> <td>0.841</td> <td>13.9</td> <td>1.41</td> <td>0.47</td> </tr> <tr> <td>LU Error Rate || 0.20</td> <td>0.845</td> <td>15.2</td> <td>0.58</td> <td>-</td> <td>0.784</td> <td>14.7</td> <td>0.01</td> <td>0.47</td> </tr> </tbody></table>
Table 1
table_1
D18-1415
6
emnlp2018
After obtaining the original system S1, we deploy it to interact with Sim1 and Sim2 respectively, under different LU error rates (Li et al., 2017a). In each condition, we simulate 3200 episodes to obtain the performance. Table 1 shows the details of the test performance. Table 2 shows the statistics of turns when S1 interacts with Sim2. As shown in Table 1, S1 achieves higher dialog success rate and rewards when testing with Sim1.
[2, 2, 1, 0, 1]
['After obtaining the original system S1, we deploy it to interact with Sim1 and Sim2 respectively, under different LU error rates (Li et al., 2017a).', 'In each condition, we simulate 3200 episodes to obtain the performance.', 'Table 1 shows the details of the test performance.', 'Table 2 shows the statistics of turns when S1 interacts with Sim2.', 'As shown in Table 1, S1 achieves higher dialog success rate and rewards when testing with Sim1.']
[['Sim1', 'Sim2'], None, None, None, ['Succ.', 'Reward', 'Sim1']]
1
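Succ., Turn and Reward in the D18-1415 table_1 record above are averages over simulated dialog episodes. Here is a minimal aggregation sketch over hypothetical episode logs (a success flag, a turn count and an accumulated reward per episode); the user simulators and the 3200-episode setup from the paper are not reproduced.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    success: bool
    turns: int
    reward: float

def summarize(episodes):
    n = len(episodes)
    return {
        "Succ.": sum(e.success for e in episodes) / n,   # dialog success rate
        "Turn": sum(e.turns for e in episodes) / n,      # average number of turns
        "Reward": sum(e.reward for e in episodes) / n,   # average accumulated reward
    }

# Hypothetical episode logs from interacting with a user simulator.
episodes = [Episode(True, 12, 4.1), Episode(True, 15, 3.2), Episode(False, 20, -1.0)]
print(summarize(episodes))
```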
D18-1417table_1
Results of independent training for slot filling in terms of F1-score.
2
[['Methods', 'CRF (Mesnil et al. 2013)'], ['Methods', 'simple RNN (Yao et al. 2013)'], ['Methods', 'CNN-CRF (Xu and Sarikaya, 2013)'], ['Methods', 'LSTM (Yao et al. 2013)'], ['Methods', 'RNN-SOP (Liu and Lane 2015)'], ['Methods', 'Deep LSTM (Yao et al. 2013)'], ['Methods', 'RNN-EM (Peng et al. 2015)'], ['Methods', 'Bi-RNN with Ranking Loss (Vu et al. 2016)'], ['Methods', 'Encoder-labeler Deep LSTM (Kurata et al. 2016)'], ['Methods', 'Attention BiRNN (Liu and Lane 2016a)'], ['Methods', 'BLSTM-LSTM (focus) (Zhu and Yu, 2017)'], ['Methods', 'Our Model']]
1
[['F1-score']]
[['92.94'], ['94.11'], ['94.35'], ['94.85'], ['84.89'], ['95.08'], ['95.25'], ['95.47'], ['95.66'], ['95.75'], ['95.79'], ['96.35']]
column
['F1-score']
['Our Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1-score</th> </tr> </thead> <tbody> <tr> <td>Methods || CRF (Mesnil et al. 2013)</td> <td>92.94</td> </tr> <tr> <td>Methods || simple RNN (Yao et al. 2013)</td> <td>94.11</td> </tr> <tr> <td>Methods || CNN-CRF (Xu and Sarikaya, 2013)</td> <td>94.35</td> </tr> <tr> <td>Methods || LSTM (Yao et al. 2013)</td> <td>94.85</td> </tr> <tr> <td>Methods || RNN-SOP (Liu and Lane 2015)</td> <td>84.89</td> </tr> <tr> <td>Methods || Deep LSTM (Yao et al. 2013)</td> <td>95.08</td> </tr> <tr> <td>Methods || RNN-EM (Peng et al. 2015)</td> <td>95.25</td> </tr> <tr> <td>Methods || Bi-RNN with Ranking Loss (Vu et al. 2016)</td> <td>95.47</td> </tr> <tr> <td>Methods || Encoder-labeler Deep LSTM (Kurata et al. 2016)</td> <td>95.66</td> </tr> <tr> <td>Methods || Attention BiRNN (Liu and Lane 2016a)</td> <td>95.75</td> </tr> <tr> <td>Methods || BLSTM-LSTM (focus) (Zhu and Yu, 2017)</td> <td>95.79</td> </tr> <tr> <td>Methods || Our Model</td> <td>96.35</td> </tr> </tbody></table>
Table 1
table_1
D18-1417
7
emnlp2018
4.4 Independent Learning. The results of separate training for slot filling and intent detection are reported in Table 1 and Table 2 respectively. On the independent slot filling task, we fixed the intent information as the ground truth labels in the dataset. But on the independent intent detection task, there is no interaction with slot labels. Table 1 compares F1-score of slot filling between our proposed architecture and some previous works. Our model achieves state-of-the-art results and outperforms the previous best model by 0.56% in terms of F1-score. We attribute the improvement of our model to the following reasons: 1) The attention used in (Liu and Lane, 2016a) is vanilla attention, which is used to compute the decoding states. It is not suitable for our model since the embeddings are composed of several parts. Self-attention allows the model to attend to information jointly from different representation parts, so as to better understand the utterance. 2) The intent-augmented gating layer connects the semantics of sequence slot labels, which captures complex interactions between the two tasks.
[2, 1, 0, 2, 1, 1, 2, 2, 2, 2]
['4.4 Independent Learning.', 'The results of separate training for slot filling and intent detection are reported in Table 1 and Table 2 respectively.', 'On the independent slot filling task, we fixed the intent information as the ground truth labels in the dataset.', 'But on the independent intent detection task, there is no interaction with slot labels.', 'Table 1 compares F1-score of slot filling between our proposed architecture and some previous works.', 'Our model achieves state-of-the-art results and outperforms the previous best model by 0.56% in terms of F1-score.', 'We attribute the improvement of our model to the following reasons: 1) The attention used in (Liu and Lane, 2016a) is vanilla attention, which is used to compute the decoding states.', 'It is not suitable for our model since the embeddings are composed of several parts.', 'Self-attention allows the model to attend to information jointly from different representation parts, so as to better understand the utterance.', '2) The intent-augmented gating layer connects the semantics of sequence slot labels, which captures complex interactions between the two tasks.']
[None, None, None, None, ['Our Model', 'F1-score'], ['Our Model', 'F1-score'], ['Attention BiRNN (Liu and Lane 2016a)'], ['Our Model'], ['Our Model'], ['Our Model']]
1
D18-1417table_4
Feature ablation comparison of our proposed model on ATIS. Slot filling and intent detection results are shown in each row after we exclude each feature from the full architecture.
2
[['Methods', 'W/O char-embedding'], ['Methods', 'W/O self-attention'], ['Methods', 'W/O attention-gating'], ['Methods', 'Full Model']]
1
[['F1-Score'], ['Error(%)']]
[['96.30', '1.23'], ['96.26', '1.34'], ['96.25', '1.46'], ['96.52', '1.23']]
column
['F1-Score', 'Error(%)']
['W/O char-embedding', 'W/O self-attention', 'W/O attention-gating', 'Full Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1-Score</th> <th>Error(%)</th> </tr> </thead> <tbody> <tr> <td>Methods || W/O char-embedding</td> <td>96.30</td> <td>1.23</td> </tr> <tr> <td>Methods || W/O self-attention</td> <td>96.26</td> <td>1.34</td> </tr> <tr> <td>Methods || W/O attention-gating</td> <td>96.25</td> <td>1.46</td> </tr> <tr> <td>Methods || Full Model</td> <td>96.52</td> <td>1.23</td> </tr> </tbody></table>
Table 4
table_4
D18-1417
8
emnlp2018
Table 4 shows the joint learning performance of our model on ATIS data set by removing one module at a time. We find that all variants of our model perform well based on our gate mechanism. As listed in the table, all features contribute to both slot filling and intent classification task. If we remove the self-attention from the holistic model or just in the intent-augmented gating layer, the performance drops dramatically. The result can be interpreted that self-attention mechanism computes context representation separately and enhances the interaction of features in the same aspect. We can see that self-attention does improve performance a lot in a large scale, which is consistent with findings of previous work (Vaswani et al., 2017; Lin et al., 2017). If we remove character-level embeddings and only use word-level embeddings, we see 0.22% drop in terms of F1-score. Though word-level embeddings represent the semantics of each word, character-level embeddings can better handle the out-of-vocabulary (OOV) problem which is essential to determine the slot labels.
[1, 1, 1, 1, 2, 2, 1, 2]
['Table 4 shows the joint learning performance of our model on ATIS data set by removing one module at a time.', 'We find that all variants of our model perform well based on our gate mechanism.', 'As listed in the table, all features contribute to both slot filling and intent classification task.', 'If we remove the self-attention from the holistic model or just in the intent-augmented gating layer, the performance drops dramatically.', 'The result can be interpreted that self-attention mechanism computes context representation separately and enhances the interaction of features in the same aspect.', 'We can see that self-attention does improve performance a lot in a large scale, which is consistent with findings of previous work (Vaswani et al., 2017; Lin et al., 2017).', 'If we remove character-level embeddings and only use word-level embeddings, we see 0.22% drop in terms of F1-score.', 'Though word-level embeddings represent the semantics of each word, character-level embeddings can better handle the out-of-vocabulary (OOV) problem which is essential to determine the slot labels.']
[None, ['W/O char-embedding', 'W/O self-attention', 'W/O attention-gating', 'Full Model'], None, ['W/O self-attention'], ['W/O self-attention'], ['W/O self-attention'], ['Full Model', 'W/O char-embedding', 'F1-Score'], ['W/O char-embedding']]
1
D18-1418table_2
Test results for various models on permuted-bAbI dialog task. Results (accuracy %) are given in the standard setup and OOV setup; and both with and without match-type features.
2
[['Model', 'memN2N'], ['Model', 'memN2N + all-answers'], ['Model', 'Mask-memN2N'], ['Model', 'OOV: memN2N'], ['Model', 'OOV: memN2N + all-answers'], ['Model', 'OOV: Mask-memN2N']]
2
[['no match-type', 'Per-turn'], ['no match-type', 'Per-dialog'], ['+ match-type', 'Per-turn'], ['+ match-type', 'Per-dialog']]
[['91.8', '22', '93.3', '30.3'], ['88.5', '14.9', '92.5', '26.4'], ['93.4', '32', '95.2', '47.3'], ['63.4', '0.5', '78.1', '0.6'], ['60.8', '0.5', '74.9', '0.6'], ['63.0', '0.5', '80.1', '1']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['Mask-memN2N', 'OOV: Mask-memN2N']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>no match-type || Per-turn</th> <th>no match-type || Per-dialog</th> <th>+ match-type || Per-turn</th> <th>+ match-type || Per-dialog</th> </tr> </thead> <tbody> <tr> <td>Model || memN2N</td> <td>91.8</td> <td>22</td> <td>93.3</td> <td>30.3</td> </tr> <tr> <td>Model || memN2N + all-answers</td> <td>88.5</td> <td>14.9</td> <td>92.5</td> <td>26.4</td> </tr> <tr> <td>Model || Mask-memN2N</td> <td>93.4</td> <td>32</td> <td>95.2</td> <td>47.3</td> </tr> <tr> <td>Model || OOV: memN2N</td> <td>63.4</td> <td>0.5</td> <td>78.1</td> <td>0.6</td> </tr> <tr> <td>Model || OOV: memN2N + all-answers</td> <td>60.8</td> <td>0.5</td> <td>74.9</td> <td>0.6</td> </tr> <tr> <td>Model || OOV: Mask-memN2N</td> <td>63.0</td> <td>0.5</td> <td>80.1</td> <td>1</td> </tr> </tbody></table>
Table 2
table_2
D18-1418
7
emnlp2018
6.3 Model comparison. Our results for our proposed model and comparison with other models for permuted-bAbI dialog task are given in Table 2. Table 2 follows the same format as Table 1, except we show results for different models on permuted-bAbI dialog task. We show results for three models: memN2N, memN2N + all-answers and our proposed model, Mask-memN2N. In the memN2N + all-answers model, we extend the baseline memN2N model and though not realistic, we provide information on all correct next utterances during training, instead of providing only one correct next utterance. The memN2N + all-answers model has an element-wise sigmoid at the output layer instead of a softmax, allowing it to predict multiple correct answers. This model serves as an important additional baseline, and clearly demonstrates the benefit of our proposed approach. From Table 2, we observe that the memN2N + all-answers model performs poorly, in comparison to the memN2N baseline model both in standard setup and OOV setting, as well as with and without match-type features. This shows that the existing methods do not improve the accuracy of a dialog system even if all correct next utterances are known and used during training the model. Our proposed model performs better than both the baseline models. In the standard setup, the per-dialog accuracy increases from 22% to 32%. Using match-type features, the per-dialog accuracy increases considerably from 30.3% to 47.3%. In the OOV setting, all models perform poorly and achieve per-dialog accuracy of 0-1% both with and without match-type features. These results are similar to results for original-bAbI dialog Task 5 from Bordes and Weston (2016b) and our results with the baseline model. Overall, Mask-memN2N is able to handle multiple correct next utterances present in the permuted-bAbI dialog task better than the baseline models. This indicates that permuted-bAbI dialog task is a better and effective evaluation proxy compared to original-bAbI dialog task for real-world data. This also shows that we need better neural approaches, similar to our proposed model, Mask-memN2N, for goal-oriented dialog in addition to better testbeds for benchmarking goal-oriented dialog systems.
[2, 1, 2, 1, 2, 2, 2, 1, 2, 1, 1, 1, 1, 1, 1, 2, 2]
['6.3 Model comparison.', 'Our results for our proposed model and comparison with other models for permuted-bAbI dialog task are given in Table 2.', 'Table 2 follows the same format as Table 1, except we show results for different models on permuted-bAbI dialog task.', 'We show results for three models: memN2N, memN2N + all-answers and our proposed model, Mask-memN2N.', 'In the memN2N + all-answers model, we extend the baseline memN2N model and though not realistic, we provide information on all correct next utterances during training, instead of providing only one correct next utterance.', 'The memN2N + all-answers model has an element-wise sigmoid at the output layer instead of a softmax, allowing it to predict multiple correct answers.', 'This model serves as an important additional baseline, and clearly demonstrates the benefit of our proposed approach.', 'From Table 2, we observe that the memN2N + all-answers model performs poorly, in comparison to the memN2N baseline model both in standard setup and OOV setting, as well as with and without match-type features.', 'This shows that the existing methods do not improve the accuracy of a dialog system even if all correct next utterances are known and used during training the model.', 'Our proposed model performs better than both the baseline models.', 'In the standard setup, the per-dialog accuracy increases from 22% to 32%.', 'Using match-type features, the per-dialog accuracy increases considerably from 30.3% to 47.3%.', 'In the OOV setting, all models perform poorly and achieve per-dialog accuracy of 0-1% both with and without match-type features.', 'These results are similar to results for original-bAbI dialog Task 5 from Bordes and Weston (2016b) and our results with the baseline model.', 'Overall, Mask-memN2N is able to handle multiple correct next utterances present in the permuted-bAbI dialog task better than the baseline models.', 'This indicates that permuted-bAbI dialog task is a better and effective evaluation proxy compared to original-bAbI dialog task for real-world data.', 'This also shows that we need better neural approaches, similar to our proposed model, Mask-memN2N, for goal-oriented dialog in addition to better testbeds for benchmarking goal-oriented dialog systems.']
[None, ['Mask-memN2N'], None, ['memN2N', 'memN2N + all-answers', 'Mask-memN2N'], ['memN2N + all-answers'], ['memN2N + all-answers'], ['memN2N + all-answers'], ['memN2N + all-answers', 'memN2N'], None, ['Mask-memN2N', 'memN2N', 'memN2N + all-answers'], ['Mask-memN2N', 'memN2N', 'Per-dialog', 'no match-type'], ['Mask-memN2N', 'memN2N', '+ match-type', 'Per-dialog'], ['OOV: memN2N', 'OOV: memN2N + all-answers', 'OOV: Mask-memN2N', 'Per-dialog'], ['OOV: Mask-memN2N'], ['Mask-memN2N'], None, ['Mask-memN2N']]
1
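Per-turn and per-dialog accuracy in the D18-1418 table_2 record above differ only in granularity: a turn counts as correct if the predicted next utterance is one of the acceptable answers, while a dialog counts only if every one of its turns is correct. Below is a minimal sketch assuming hypothetical dialogs in which a turn may have several acceptable utterances (the permuted-bAbI setting); it is not the authors' evaluation code.

```python
def per_turn_and_per_dialog_accuracy(dialogs):
    """dialogs: list of dialogs; each turn is (predicted_utterance, set_of_correct_utterances)."""
    turn_hits = turn_total = dialog_hits = 0
    for dialog in dialogs:
        all_correct = True
        for predicted, acceptable in dialog:
            turn_total += 1
            if predicted in acceptable:
                turn_hits += 1
            else:
                all_correct = False
        dialog_hits += all_correct
    return turn_hits / turn_total, dialog_hits / len(dialogs)

# Hypothetical dialogs: the second turn of dialog 1 has two acceptable answers.
dialogs = [
    [("api_call italian rome two", {"api_call italian rome two"}),
     ("what price range?", {"what price range?", "which price range do you prefer?"})],
    [("here it is: resto_1", {"here it is: resto_1", "here it is: resto_2"}),
     ("you're welcome", {"is there anything else?"})],
]
print(per_turn_and_per_dialog_accuracy(dialogs))  # (0.75, 0.5)
```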
D18-1418table_3
Ablation study of our proposed model on permuted-bAbI dialog task. Results (accuracy %) are given in the standard setup, without match-type features.
2
[['Model', 'Mask-memN2N'], ['Model', 'Mask-memN2N (w/o entropy)'], ['Model', 'Mask-memN2N (w/o L2 mask pre-training)'], ['Model', 'Mask-memN2N (Reinforcement learning phase only)']]
1
[['Per-turn'], ['Per-dialog']]
[['93.4', '32'], ['92.1', '24.6'], ['85.8', '2.2'], ['16.0', '0']]
column
['accuracy', 'accuracy']
['Mask-memN2N', 'Mask-memN2N (w/o entropy)', 'Mask-memN2N (w/o L2 mask pre-training)', 'Mask-memN2N (Reinforcement learning phase only)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Per-turn</th> <th>Per-dialog</th> </tr> </thead> <tbody> <tr> <td>Model || Mask-memN2N</td> <td>93.4</td> <td>32</td> </tr> <tr> <td>Model || Mask-memN2N (w/o entropy)</td> <td>92.1</td> <td>24.6</td> </tr> <tr> <td>Model || Mask-memN2N (w/o L2 mask pre-training)</td> <td>85.8</td> <td>2.2</td> </tr> <tr> <td>Model || Mask-memN2N (Reinforcement learning phase only)</td> <td>16.0</td> <td>0</td> </tr> </tbody></table>
Table 3
table_3
D18-1418
8
emnlp2018
6.4 Ablation study. Here, we study the different parts of our model for better understanding of how the different parts influence the overall model performance. Our results for ablation study are given in Table 3. We show results for Mask-memN2N in various settings - a) without entropy, b) without pre-training mask c) reinforcement learning phase only. Adding entropy for the RL phase seems to have improved performance a bit by assisting better exploration in the RL phase. When we remove L2 mask pre-training, there is a huge drop in performance. In the RL phase, the action space is large. In the bAbI dialog task, which is a retrieval task, it is all the candidate answers that can be retrieved which forms the action set. L2 mask pre-training would help the RL phase to try more relevant actions from the very start. From Table 3 it is clear that the RL phase individually does not perform well; it is the combination of both the phases that gives the best performance. When we do only the RL phase, it might be very tough for the system to learn everything by trial and error, especially because the action space is so large. Preceding it with the SL phase and L2 mask pre-training would have put the system and its parameters at a good spot from which the RL phase can improve performance. Note that it would not be valid to check performance of the SL phase in the test set as the SL phase requires the actual answers for it to create the mask.
[2, 2, 1, 1, 2, 1, 2, 2, 2, 1, 2, 2, 2]
['6.4 Ablation study.', 'Here, we study the different parts of our model for better understanding of how the different parts influence the overall model performance.', 'Our results for ablation study are given in Table 3.', 'We show results for Mask-memN2N in various settings - a) without entropy, b) without pre-training mask c) reinforcement learning phase only.', 'Adding entropy for the RL phase seems to have improved performance a bit by assisting better exploration in the RL phase.', 'When we remove L2 mask pre-training, there is a huge drop in performance.', 'In the RL phase, the action space is large.', 'In the bAbI dialog task, which is a retrieval task, it is all the candidate answers that can be retrieved which forms the action set.', 'L2 mask pre-training would help the RL phase to try more relevant actions from the very start.', 'From Table 3 it is clear that the RL phase individually does not perform well; it is the combination of both the phases that gives the best performance.', 'When we do only the RL phase, it might be very tough for the system to learn everything by trial and error, especially because the action space is so large.', 'Preceding it with the SL phase and L2 mask pre-training would have put the system and its parameters at a good spot from which the RL phase can improve performance.', 'Note that it would not be valid to check performance of the SL phase in the test set as the SL phase requires the actual answers for it to create the mask.']
[None, None, None, ['Mask-memN2N (w/o entropy)', 'Mask-memN2N (w/o L2 mask pre-training)', 'Mask-memN2N (Reinforcement learning phase only)'], None, ['Mask-memN2N (w/o L2 mask pre-training)'], None, None, None, ['Mask-memN2N', 'Mask-memN2N (Reinforcement learning phase only)'], ['Mask-memN2N (Reinforcement learning phase only)'], None, None]
1
D18-1421table_2
Performances on Quora datasets.
2
[['Models', 'Seq2Seq'], ['Models', 'Residual LSTM'], ['Models', 'VAE-SVG-eq'], ['Models', 'Pointer-generator'], ['Models', 'RL-ROUGE'], ['Models', 'RbM-SL (ours)'], ['Models', 'RbM-IRL (ours)']]
2
[['Quora-I', 'ROUGE-1'], ['Quora-I', 'ROUGE-2'], ['Quora-I', 'BLEU'], ['Quora-I', 'METEOR'], ['Quora-II', 'ROUGE-1'], ['Quora-II', 'ROUGE-2'], ['Quora-II', 'BLEU'], ['Quora-II', 'METEOR']]
[['58.77', '31.47', '36.55', '26.28', '47.22', '20.72', '26.06', '20.35'], ['59.21', '32.43', '37.38', '28.17', '48.55', '22.48', '27.32', '22.37'], ['-', '-', '-', '25.50', '-', '-', '-', '22.20'], ['61.96', '36.07', '40.55', '30.21', '51.98', '25.16', '30.01', '24.31'], ['63.35', '37.33', '41.83', '30.96', '54.50', '27.50', '32.54', '25.67'], ['64.39', '38.11', '43.54', '32.84', '57.34', '31.09', '35.81', '28.12'], ['64.02', '37.72', '43.09', '31.97', '56.86', '29.90', '34.79', '26.67']]
column
['ROUGE-1', 'ROUGE-2', 'BLEU', 'METEOR', 'ROUGE-1', 'ROUGE-2', 'BLEU', 'METEOR']
['RbM-SL (ours)', 'RbM-IRL (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Quora-I || ROUGE-1</th> <th>Quora-I || ROUGE-2</th> <th>Quora-I || BLEU</th> <th>Quora-I || METEOR</th> <th>Quora-II || ROUGE-1</th> <th>Quora-II || ROUGE-2</th> <th>Quora-II || BLEU</th> <th>Quora-II || METEOR</th> </tr> </thead> <tbody> <tr> <td>Models || Seq2Seq</td> <td>58.77</td> <td>31.47</td> <td>36.55</td> <td>26.28</td> <td>47.22</td> <td>20.72</td> <td>26.06</td> <td>20.35</td> </tr> <tr> <td>Models || Residual LSTM</td> <td>59.21</td> <td>32.43</td> <td>37.38</td> <td>28.17</td> <td>48.55</td> <td>22.48</td> <td>27.32</td> <td>22.37</td> </tr> <tr> <td>Models || VAE-SVG-eq</td> <td>-</td> <td>-</td> <td>-</td> <td>25.50</td> <td>-</td> <td>-</td> <td>-</td> <td>22.20</td> </tr> <tr> <td>Models || Pointer-generator</td> <td>61.96</td> <td>36.07</td> <td>40.55</td> <td>30.21</td> <td>51.98</td> <td>25.16</td> <td>30.01</td> <td>24.31</td> </tr> <tr> <td>Models || RL-ROUGE</td> <td>63.35</td> <td>37.33</td> <td>41.83</td> <td>30.96</td> <td>54.50</td> <td>27.50</td> <td>32.54</td> <td>25.67</td> </tr> <tr> <td>Models || RbM-SL (ours)</td> <td>64.39</td> <td>38.11</td> <td>43.54</td> <td>32.84</td> <td>57.34</td> <td>31.09</td> <td>35.81</td> <td>28.12</td> </tr> <tr> <td>Models || RbM-IRL (ours)</td> <td>64.02</td> <td>37.72</td> <td>43.09</td> <td>31.97</td> <td>56.86</td> <td>29.90</td> <td>34.79</td> <td>26.67</td> </tr> </tbody></table>
Table 2
table_2
D18-1421
7
emnlp2018
Automatic evaluation. Table 2 shows the performances of the models on the Quora datasets. In both settings, we find that the proposed RbM-SL and RbM-IRL models outperform the baseline models in terms of all the evaluation measures. Particularly in Quora-II, RbM-SL and RbM-IRL make significant improvements over the baselines, which demonstrates their higher ability in learning for paraphrase generation. On the Quora dataset, RbM-SL is consistently better than RbM-IRL for all the automatic measures, which is reasonable because RbM-SL makes use of additional labeled data to train the evaluator. The Quora datasets contain a large number of high-quality non-paraphrases, i.e., they are literally similar but semantically different, for instance "are analogue clocks better than digital" and "is analogue better than digital". Trained with the data, the evaluator tends to become more capable in paraphrase identification. With additional evaluation on Quora data, the evaluator used in RbM-SL can achieve an accuracy of 87% on identifying positive and negative pairs of paraphrases.
[2, 1, 1, 1, 1, 2, 2, 2]
['Automatic evaluation.', 'Table 2 shows the performances of the models on the Quora datasets.', 'In both settings, we find that the proposed RbM-SL and RbM-IRL models outperform the baseline models in terms of all the evaluation measures.', 'Particularly in Quora-II, RbM-SL and RbM-IRL make significant improvements over the baselines, which demonstrates their higher ability in learning for paraphrase generation.', 'On the Quora dataset, RbM-SL is consistently better than RbM-IRL for all the automatic measures, which is reasonable because RbM-SL makes use of additional labeled data to train the evaluator.', 'The Quora datasets contain a large number of high-quality non-paraphrases, i.e., they are literally similar but semantically different, for instance "are analogue clocks better than digital" and "is analogue better than digital".', 'Trained with the data, the evaluator tends to become more capable in paraphrase identification.', 'With additional evaluation on Quora data, the evaluator used in RbM-SL can achieve an accuracy of 87% on identifying positive and negative pairs of paraphrases.']
[None, None, ['RbM-SL (ours)', 'RbM-IRL (ours)'], ['Quora-II', 'RbM-SL (ours)', 'RbM-IRL (ours)'], ['RbM-SL (ours)', 'RbM-IRL (ours)'], None, None, ['RbM-SL (ours)']]
1
D18-1421table_4
Human evaluation on Quora datasets.
2
[['Models', 'Pointer-generator'], ['Models', 'RL-ROUGE'], ['Models', 'RbM-SL (ours)'], ['Models', 'RbM-IRL (ours)'], ['Models', 'Reference']]
2
[['Quora-I', 'Relevance'], ['Quora-I', 'Fluency'], ['Quora-II', 'Relevance'], ['Quora-II', 'Fluency']]
[['3.23', '4.55', '2.34', '2.96'], ['3.56', '4.61', '2.58', '3.14'], ['4.08', '4.67', '3.20', '3.48'], ['4.07', '4.69', '2.80', '3.53'], ['4.69', '4.95', '4.68', '4.90']]
column
['Relevance', 'Fluency', 'Relevance', 'Fluency']
['RbM-SL (ours)', 'RbM-IRL (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Quora-I || Relevance</th> <th>Quora-I || Fluency</th> <th>Quora-II || Relevance</th> <th>Quora-II || Fluency</th> </tr> </thead> <tbody> <tr> <td>Models || Pointer-generator</td> <td>3.23</td> <td>4.55</td> <td>2.34</td> <td>2.96</td> </tr> <tr> <td>Models || RL-ROUGE</td> <td>3.56</td> <td>4.61</td> <td>2.58</td> <td>3.14</td> </tr> <tr> <td>Models || RbM-SL (ours)</td> <td>4.08</td> <td>4.67</td> <td>3.20</td> <td>3.48</td> </tr> <tr> <td>Models || RbM-IRL (ours)</td> <td>4.07</td> <td>4.69</td> <td>2.80</td> <td>3.53</td> </tr> <tr> <td>Models || Reference</td> <td>4.69</td> <td>4.95</td> <td>4.68</td> <td>4.90</td> </tr> </tbody></table>
Table 4
table_4
D18-1421
7
emnlp2018
Table 4 demonstrates the average ratings for each model, including the ground-truth references. Our models of RbM-SL and RbM-IRL get better scores in terms of relevance and fluency than the baseline models, and their differences are statistically significant (paired t-test, p-value < 0.01). We note that in human evaluation, RbM-SL achieves the best relevance score while RbM-IRL achieves the best fluency score.
[1, 1, 1]
['Table 4 demonstrates the average ratings for each model, including the ground-truth references.', 'Our models of RbM-SL and RbM-IRL get better scores in terms of relevance and fluency than the baseline models, and their differences are statistically significant (paired t-test, p-value < 0.01).', 'We note that in human evaluation, RbM-SL achieves the best relevance score while RbM-IRL achieves the best fluency score.']
[None, ['RbM-SL (ours)', 'RbM-IRL (ours)', 'Relevance', 'Fluency'], ['RbM-SL (ours)', 'RbM-IRL (ours)', 'Relevance', 'Fluency']]
1
D18-1424table_1
Performance of our models on Split1 with both sentence-level input and paragraph-level input. Sen. means sentence, while Par. means paragraph.
2
[['Model', 's2s'], ['Model', 's2s-a'], ['Model', 's2s-a-at'], ['Model', 's2s-a-at-cp'], ['Model', 's2s-a-at-mcp'], ['Model', 's2s-a-at-mcp-gsa']]
2
[['BLEU 1', 'Sen.'], ['BLEU 1', 'Par.'], ['BLEU 2', 'Sen.'], ['BLEU 2', 'Par.'], ['BLEU 3', 'Sen.'], ['BLEU 3', 'Par.'], ['BLEU 4', 'Sen.'], ['BLEU 4', 'Par.'], ['METEOR', 'Sen.'], ['METEOR', 'Par.'], ['ROUGE-L', 'Sen.'], ['ROUGE-L', 'Par.']]
[['30.41', '28.49', '12.68', '10.43', '6.33', '4.70', '3.44', '2.38', '11.98', '10.69', '29.93', '27.32'], ['34.46', '31.26', '18.07', '14.37', '11.20', '8.02', '7.42', '4.80', '14.95', '12.52', '34.69', '30.11'], ['40.57', '40.56', '24.30', '24.23', '16.40', '16.33', '11.54', '11.46', '18.35', '18.42', '40.76', '40.40'], ['42.15', '41.66', '26.28', '25.52', '18.35', '17.48', '13.37', '12.43', '18.02', '17.76', '41.97', '41.30'], ['43.65', '44.22', '28.23', '28.56', '20.33', '20.57', '15.23', '15.43', '19.19', '19.55', '43.60', '43.65'], ['43.47', '45.07', '28.23', '29.58', '20.40', '21.60', '15.32', '16.38', '19.29', '20.25', '43.91', '44.48']]
column
['BLEU 1', 'BLEU 1', 'BLEU 2', 'BLEU 2', 'BLEU 3', 'BLEU 3', 'BLEU 4', 'BLEU 4', 'METEOR', 'METEOR', 'ROUGE-L', 'ROUGE-L']
['s2s-a', 's2s-a-at', 's2s-a-at-cp', 's2s-a-at-mcp', 's2s-a-at-mcp-gsa']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU 1 || Sen.</th> <th>BLEU 1 || Par.</th> <th>BLEU 2 || Sen.</th> <th>BLEU 2 || Par.</th> <th>BLEU 3 || Sen.</th> <th>BLEU 3 || Par.</th> <th>BLEU 4 || Sen.</th> <th>BLEU 4 || Par.</th> <th>METEOR || Sen.</th> <th>METEOR || Par.</th> <th>ROUGE-L || Sen.</th> <th>ROUGE-L || Par.</th> </tr> </thead> <tbody> <tr> <td>Model || s2s</td> <td>30.41</td> <td>28.49</td> <td>12.68</td> <td>10.43</td> <td>6.33</td> <td>4.70</td> <td>3.44</td> <td>2.38</td> <td>11.98</td> <td>10.69</td> <td>29.93</td> <td>27.32</td> </tr> <tr> <td>Model || s2s-a</td> <td>34.46</td> <td>31.26</td> <td>18.07</td> <td>14.37</td> <td>11.20</td> <td>8.02</td> <td>7.42</td> <td>4.80</td> <td>14.95</td> <td>12.52</td> <td>34.69</td> <td>30.11</td> </tr> <tr> <td>Model || s2s-a-at</td> <td>40.57</td> <td>40.56</td> <td>24.30</td> <td>24.23</td> <td>16.40</td> <td>16.33</td> <td>11.54</td> <td>11.46</td> <td>18.35</td> <td>18.42</td> <td>40.76</td> <td>40.40</td> </tr> <tr> <td>Model || s2s-a-at-cp</td> <td>42.15</td> <td>41.66</td> <td>26.28</td> <td>25.52</td> <td>18.35</td> <td>17.48</td> <td>13.37</td> <td>12.43</td> <td>18.02</td> <td>17.76</td> <td>41.97</td> <td>41.30</td> </tr> <tr> <td>Model || s2s-a-at-mcp</td> <td>43.65</td> <td>44.22</td> <td>28.23</td> <td>28.56</td> <td>20.33</td> <td>20.57</td> <td>15.23</td> <td>15.43</td> <td>19.19</td> <td>19.55</td> <td>43.60</td> <td>43.65</td> </tr> <tr> <td>Model || s2s-a-at-mcp-gsa</td> <td>43.47</td> <td>45.07</td> <td>28.23</td> <td>29.58</td> <td>20.40</td> <td>21.60</td> <td>15.32</td> <td>16.38</td> <td>19.29</td> <td>20.25</td> <td>43.91</td> <td>44.48</td> </tr> </tbody></table>
Table 1
table_1
D18-1424
5
emnlp2018
3.3 Evaluation We conduct automatic evaluation with metrics: BLEU 1, BLEU 2, BLEU 3, BLEU 4 (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014) and ROUGE-L (Lin, 2004), and use evaluation package released by (Sharma et al., 2017) to compute them. 4 Results and Analysis. 4.1 Comparison of Techniques. Table 1 shows evaluation results for different models on SQuAD Split1. Results with both sentence-level and paragraph-level inputs are included. Similar results also have been observed on SQuAD Split2. The definitions of the models under comparison are:. s2s: basic sequence to sequence model. s2s-a: s2s + attention mechanism. s2s-a-at: s2s-a + answer tagging. s2s-a-at-cp: s2s-a-at + copy mechanism. s2s-a-at-mp: s2s-a-at + maxout pointer mechanism. s2s-a-at-mp-gsa: s2s-a-at-mp + gated self-attention. Attention Mechanism. s2s-a vs. s2s: we can see attention brings in large improvement on both sentence and paragraph inputs. The lower performance on paragraph indicates the challenge of encoding paragraph-level information. Answer Tagging. s2s-a-at vs. s2s-a: answer tagging dramatically boosts the performance, which confirms the importance of answer-aware QG: to generate good question, we need to control/learn which part of the context the generated question is asking about. More importantly, answer tagging clearly reduces the gap between sentence and paragraph inputs, which could be explained with: by providing guidance on answer words, we can make the model learn to neglect noise when processing a long context. Copy Mechanism. s2s-a-at-cp vs. s2s-a-at: as expected, copy mechanism further improves the performance on the QG task. (Du et al., 2017) pointed out most of the sentence-question pairs in SQuAD have over 50% overlaps in non-stop-words. Our results prove that sequence to sequence models with copy mechanism can very well learn when to generate a word and when to copy one from input on such QG task. More interestingly, the performance is lower when paragraph is given as input than sentence as input. The gap, again, reveals the challenge of leveraging longer context. We found that, when paragraph is given, the model tends to generate more repetitive words, and those words (often entities/concepts) usually appear multiple times in the context, Figure 3. The repetition issue can also be seen for sentence input, but it is more severe for paragraph. Maxout Pointer. s2s-a-at-mp vs. s2s-a-at-cp: Maxout pointer is designed to resolve the repetition issue brought by the basic copy mechanism, for example Figure 3. The maxout pointer mechanism outperforms the basic copy mechanism in all metrics. Moreover, the effectiveness of maxout pointer is more significant when paragraph is given as the model input, as it reverses the performance gap between models trained with sentence and paragraph inputs, Table 1. Gated Self-attention. s2s-a-at-mp-gsa vs. s2s-a-at-mp: the results demonstrate the effectiveness of gated self-attention, in particular, when working with paragraph inputs. This is the first time, as we know, taking paragraph as input is better than sentence for neural QG tasks. The observation is consistent across all metrics. Gated self-attention helps refine encoded context by fusing important information with the context’s self representation properly, especially when the context is long. To better understand how gated self-attention works, we visualize the self alignment vectors at each time step of the encoded sequence for one example, in Figure 5. This example corresponds to the example 1 in Figure 1. We can see the alignments distribution concentrates near the answer sequence and the most relevant context: “thomas davis” in this example. Such alignments in turn would help to promote the most valuable information for decoding.
[2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 1, 2, 1, 2, 2, 0, 0, 0]
['3.3 Evaluation We conduct automatic evaluation with metrics: BLEU 1, BLEU 2, BLEU 3, BLEU 4 (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014) and ROUGE-L (Lin, 2004), and use evaluation package released by (Sharma et al., 2017) to compute them.', '4 Results and Analysis.', '4.1 Comparison of Techniques.', 'Table 1 shows evaluation results for different models on SQuAD Split1.', 'Results with both sentence-level and paragraph-level inputs are included.', 'Similar results also have been observed on SQuAD Split2.', 'The definitions of the models under comparison are:.', 's2s: basic sequence to sequence model.', 's2s-a: s2s + attention mechanism.', 's2s-a-at: s2s-a + answer tagging.', 's2s-a-at-cp: s2s-a-at + copy mechanism.', 's2s-a-at-mp: s2s-a-at + maxout pointer mechanism.', 's2s-a-at-mp-gsa: s2s-a-at-mp + gated self-attention.', 'Attention Mechanism.', 's2s-a vs. s2s: we can see attention brings in large improvement on both sentence and paragraph inputs.', 'The lower performance on paragraph indicates the challenge of encoding paragraph-level information.', 'Answer Tagging.', 's2s-a-at vs. s2s-a: answer tagging dramatically boosts the performance, which confirms the importance of answer-aware QG: to generate good question, we need to control/learn which part of the context the generated question is asking about.', 'More importantly, answer tagging clearly reduces the gap between sentence and paragraph inputs, which could be explained with: by providing guidance on answer words, we can make the model learn to neglect noise when processing a long context.', 'Copy Mechanism.', 's2s-a-at-cp vs. s2s-a-at: as expected, copy mechanism further improves the performance on the QG task.', '(Du et al., 2017) pointed out most of the sentence-question pairs in SQuAD have over 50% overlaps in non-stop-words.', 'Our results prove that sequence to sequence models with copy mechanism can very well learn when to generate a word and when to copy one from input on such QG task.', 'More interestingly, the performance is lower when paragraph is given as input than sentence as input.', 'The gap, again, reveals the challenge of leveraging longer context.', 'We found that, when paragraph is given, the model tends to generate more repetitive words, and those words (often entities/concepts) usually appear multiple times in the context, Figure 3.', 'The repetition issue can also be seen for sentence input, but it is more severe for paragraph.', 'Maxout Pointer.', 's2s-a-at-mp vs. s2s-a-at-cp: Maxout pointer is designed to resolve the repetition issue brought by the basic copy mechanism, for example Figure 3.', 'The maxout pointer mechanism outperforms the basic copy mechanism in all metrics.', 'Moreover, the effectiveness of maxout pointer is more significant when paragraph is given as the model input, as it reverses the performance gap between models trained with sentence and paragraph inputs, Table 1.', 'Gated Self-attention.', 's2s-a-at-mp-gsa vs. s2s-a-at-mp: the results demonstrate the effectiveness of gated self-attention, in particular, when working with paragraph inputs.', 'This is the first time, as we know, taking paragraph as input is better than sentence for neural QG tasks.', 'The observation is consistent across all metrics.', 'Gated self-attention helps refine encoded context by fusing important information with the context’s self representation properly, especially when the context is long.', 'To better understand how gated self-attention works, we visualize the self alignment vectors at each time step of the encoded sequence for one example, in Figure 5.', 'This example corresponds to the example 1 in Figure 1.', 'We can see the alignments distribution concentrates near the answer sequence and the most relevant context: “thomas davis” in this example.', 'Such alignments in turn would help to promote the most valuable information for decoding.']
[None, None, None, None, None, None, None, ['s2s'], ['s2s-a'], ['s2s-a-at'], ['s2s-a-at-cp'], ['s2s-a-at-mcp'], ['s2s-a-at-mcp-gsa'], None, ['s2s', 's2s-a', 'Sen.', 'Par.'], ['Par.'], None, ['s2s-a', 's2s-a-at'], ['Sen.', 'Par.', 's2s-a-at'], None, ['s2s-a-at', 's2s-a-at-cp'], None, ['s2s-a-at-cp'], ['s2s-a-at-cp', 'Par.', 'Sen.'], None, None, None, None, None, ['s2s-a-at-cp', 's2s-a-at-mcp'], ['s2s-a-at-mcp', 'Sen.', 'Par.'], None, ['s2s-a-at-mcp-gsa', 'Par.'], None, None, None, None, None, None, None]
1
D18-1429table_8
Performance obtained by training on different types of noisy questions (WikiMovies).
2
[['Type of Noise', 'None'], ['Type of Noise', 'Stop Words'], ['Type of Noise', 'Question Type'], ['Type of Noise', 'Content Words'], ['Type of Noise', 'Named Entity']]
1
[['BLEU'], ['QBLEU'], ['Hit 1']]
[['100', '100', '76.5'], ['25.4', '84.0', '75.6'], ['74.0', '79.3', '73.5'], ['29.4', '64.3', '54.7'], ['41.9', '48.5', '17.97']]
column
['BLEU', 'QBLEU', 'Hit 1']
['Question Type', 'Stop Words', 'Content Words', 'Named Entity']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>QBLEU</th> <th>Hit 1</th> </tr> </thead> <tbody> <tr> <td>Type of Noise || None</td> <td>100</td> <td>100</td> <td>76.5</td> </tr> <tr> <td>Type of Noise || Stop Words</td> <td>25.4</td> <td>84.0</td> <td>75.6</td> </tr> <tr> <td>Type of Noise || Question Type</td> <td>74.0</td> <td>79.3</td> <td>73.5</td> </tr> <tr> <td>Type of Noise || Content Words</td> <td>29.4</td> <td>64.3</td> <td>54.7</td> </tr> <tr> <td>Type of Noise || Named Entity</td> <td>41.9</td> <td>48.5</td> <td>17.97</td> </tr> </tbody></table>
Table 8
table_8
D18-1429
8
emnlp2018
The results of our experiments are summarized in Table 8 - 10. The first column for each table shows the manner in which the noisy training data was created. The second column shows the BLEU4 score of the noisy questions when compared to the original reference questions (thus it tells us the perceived quality of these questions under the BLEU4 metric). We consider BLEU4 because of all the current metrics used for AQG it is the most popular. Similarly, the third column tells us the perceived quality of these questions under the Q-BLEU4 metric. Ideally, we would want that the performance of the model should correlate better with the perceived quality of the training questions as identified by a given metric. We observe that the general trend is better w.r.t. the Q-BLEU4 metric than the BLEU4 metric (i.e., in general, higher Q-BLEU4 indicates better performance and lower Q-BLEU4 indicates poor performance). In particular, notice that BLEU4 gives much importance to stop words, but these words hardly have any influence on the final performance. We believe that such an extrinsic evaluation should also be used while designing better metrics and it would help us get better insights.
[1, 1, 1, 2, 1, 2, 1, 2, 2]
['The results of our experiments are summarized in Table 8 - 10.', 'The first column for each table shows the manner in which the noisy training data was created.', 'The second column shows the BLEU4 score of the noisy questions when compared to the original reference questions (thus it tells us the perceived quality of these questions under the BLEU4 metric).', 'We consider BLEU4 because of all the current metrics used for AQG it is the most popular.', 'Similarly, the third column tells us the perceived quality of these questions under the Q-BLEU4 metric.', 'Ideally, we would want that the performance of the model should correlate better with the perceived quality of the training questions as identified by a given metric.', 'We observe that the general trend is better w.r.t. the Q-BLEU4 metric than the BLEU4 metric (i.e., in general, higher Q-BLEU4 indicates better performance and lower Q-BLEU4 indicates poor performance).', 'In particular, notice that BLEU4 gives much importance to stop words, but these words hardly have any influence on the final performance.', 'We believe that such an extrinsic evaluation should also be used while designing better metrics and it would help us get better insights.']
[None, ['None', 'Stop Words', 'Question Type', 'Content Words', 'Named Entity'], ['BLEU'], ['BLEU'], ['QBLEU'], None, ['BLEU', 'QBLEU'], None, None]
1
D18-1434table_3
State-of-the-Art (SOTA) comparison on VQGCOCO Dataset. The first block consists of the SOTA results, second block refers to the baselines mentioned in section 5.2, third block shows the results for the best method for different ablations mentioned in table 1.
2
[['Context', 'Natural 2016'], ['Context', 'Creative 2017'], ['Context', 'Image Only'], ['Context', 'Caption Only'], ['Context', 'Tag-Hadamard'], ['Context', 'Place CNN-Joint'], ['Context', 'Diff.Image-Joint'], ['Context', 'MDN-Joint (Ours)'], ['Context', 'Humans 2016']]
1
[['BLEU1'], ['METEOR'], ['ROUGE'], ['CIDEr']]
[['19.2', '19.7', '-', '-'], ['35.6', '19.9', '-', '-'], ['20.8', '8.6', '22.6', '18.8'], ['21.1', '8.5', '25.9', '22.3'], ['24.4', '10.8', '24.3', '55.0'], ['25.7', '10.8', '24.5', '56.1'], ['30.4', '11.7', '26.3', '38.8'], ['36.0', '23.4', '41.8', '50.7'], ['86.0', '60.8', '-', '-']]
column
['BLEU1', 'METEOR', 'ROUGE', 'CIDEr']
['MDN-Joint (Ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU1</th> <th>METEOR</th> <th>ROUGE</th> <th>CIDEr</th> </tr> </thead> <tbody> <tr> <td>Context || Natural 2016</td> <td>19.2</td> <td>19.7</td> <td>-</td> <td>-</td> </tr> <tr> <td>Context || Creative 2017</td> <td>35.6</td> <td>19.9</td> <td>-</td> <td>-</td> </tr> <tr> <td>Context || Image Only</td> <td>20.8</td> <td>8.6</td> <td>22.6</td> <td>18.8</td> </tr> <tr> <td>Context || Caption Only</td> <td>21.1</td> <td>8.5</td> <td>25.9</td> <td>22.3</td> </tr> <tr> <td>Context || Tag-Hadamard</td> <td>24.4</td> <td>10.8</td> <td>24.3</td> <td>55.0</td> </tr> <tr> <td>Context || Place CNN-Joint</td> <td>25.7</td> <td>10.8</td> <td>24.5</td> <td>56.1</td> </tr> <tr> <td>Context || Diff.Image-Joint</td> <td>30.4</td> <td>11.7</td> <td>26.3</td> <td>38.8</td> </tr> <tr> <td>Context || MDN-Joint (Ours)</td> <td>36.0</td> <td>23.4</td> <td>41.8</td> <td>50.7</td> </tr> <tr> <td>Context || Humans 2016</td> <td>86.0</td> <td>60.8</td> <td>-</td> <td>-</td> </tr> </tbody></table>
Table 3
table_3
D18-1434
8
emnlp2018
5.2 Baseline and State-of-the-Art. The comparison of our method with various baselines and state-of-the-art methods is provided in table 2 for VQA 1.0 and table 3 for VQG-COCO dataset. The comparable baselines for our method are the image based and caption based models in which we use either only the image or the caption embedding and generate the question. In both the tables, the first block consists of the current state-of-the-art methods on that dataset and the second contains the baselines. We observe that for the VQA dataset we achieve an improvement of 8% in BLEU and 7% in METEOR metric scores over the baselines, whereas for VQG-COCO dataset this is 15% for both the metrics. We improve over the previous state-of-the-art (Yang et al., 2015) for VQA dataset by around 6% in BLEU score and 10% in METEOR score. In the VQG-COCO dataset, we improve over (Mostafazadeh et al., 2016) by 3.7% and (Jain et al., 2017) by 3.5% in terms of METEOR scores.
[2, 1, 2, 1, 1, 0, 1]
['5.2 Baseline and State-of-the-Art.', 'The comparison of our method with various baselines and state-of-the-art methods is provided in table 2 for VQA 1.0 and table 3 for VQG-COCO dataset.', 'The comparable baselines for our method are the image based and caption based models in which we use either only the image or the caption embedding and generate the question.', 'In both the tables, the first block consists of the current state-of-the-art methods on that dataset and the second contains the baselines.', 'We observe that for the VQA dataset we achieve an improvement of 8% in BLEU and 7% in METEOR metric scores over the baselines, whereas for VQG-COCO dataset this is 15% for both the metrics.', 'We improve over the previous state-of-the-art (Yang et al., 2015) for VQA dataset by around 6% in BLEU score and 10% in METEOR score.', 'In the VQG-COCO dataset, we improve over (Mostafazadeh et al., 2016) by 3.7% and (Jain et al., 2017) by 3.5% in terms of METEOR scores.']
[None, ['Natural 2016', 'Creative 2017', 'Image Only', 'Caption Only', 'Tag-Hadamard', 'Place CNN-Joint', 'Diff.Image-Joint', 'MDN-Joint (Ours)', 'Humans 2016'], ['Image Only', 'Caption Only'], ['Natural 2016', 'Creative 2017', 'Image Only', 'Caption Only'], ['MDN-Joint (Ours)', 'Image Only', 'Caption Only', 'BLEU1', 'METEOR'], None, ['MDN-Joint (Ours)', 'Natural 2016', 'Creative 2017', 'METEOR']]
1
D18-1435table_2
Comparison of Template Generator with coarse/fine-grained entity type. Coarse Template is generalized by coarse-grained entity type (name tagger) and fine template is generalized by fine-grained entity type (EDL).
2
[['Approach', 'Raw-caption'], ['Approach', 'Coarse Template'], ['Approach', 'Fine Template']]
1
[['Vocabulary Size'], ['BLEU-1'], ['BLEU-2'], ['BLEU-3'], ['BLEU-4'], ['METEOR'], ['ROUGE'], ['CIDEr']]
[['10979', '15.1', '11.7', '9.9', '8.8', '8.8', '24.2', '34.7'], ['3533', '46.7', '36.1', '29.8', '25.7', '22.4', '43.5', '161.6'], ['3642', '43.0', '33.4', '27.8', '24.3', '20.3', '39.8', '165.3']]
column
['Vocabulary Size', 'BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4', 'METEOR', 'ROUGE', 'CIDEr']
['Coarse Template', 'Fine Template']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Vocabulary Size</th> <th>BLEU-1</th> <th>BLEU-2</th> <th>BLEU-3</th> <th>BLEU-4</th> <th>METEOR</th> <th>ROUGE</th> <th>CIDEr</th> </tr> </thead> <tbody> <tr> <td>Approach || Raw-caption</td> <td>10979</td> <td>15.1</td> <td>11.7</td> <td>9.9</td> <td>8.8</td> <td>8.8</td> <td>24.2</td> <td>34.7</td> </tr> <tr> <td>Approach || Coarse Template</td> <td>3533</td> <td>46.7</td> <td>36.1</td> <td>29.8</td> <td>25.7</td> <td>22.4</td> <td>43.5</td> <td>161.6</td> </tr> <tr> <td>Approach || Fine Template</td> <td>3642</td> <td>43.0</td> <td>33.4</td> <td>27.8</td> <td>24.3</td> <td>20.3</td> <td>39.8</td> <td>165.3</td> </tr> </tbody></table>
Table 2
table_2
D18-1435
8
emnlp2018
Table 2 shows the performances of template generator based on coarse-grained and fine-grained type respectively, and Figure 5 shows an example of the template generated. Coarse templates are the ones after we replace names with these coarse-grained types. Entity Linking classifies names into more fine-grained types, so the corresponding templates are fine templates. The generalization method of replacing the named entities with entity types can reduce the vocabulary size significantly, which reduces the impact of sparse named entities in training data. The template generation achieves close performance with state-of-the-art generic image captioning on MSCOCO dataset (Xu et al., 2015). The template generator based on coarse-grained entity type outperforms the one based on fine-grained entity type for two reasons: (1) fine template relies on EDL, and incorrect linkings introduce noise; (2) named entities usually have multiple types, but we only choose one during generalization. Thus the caption, ‘Bob Dylan performs at the Wiltern Theatre in Los Angeles’, is generalized into ‘<Writer> performs at the <Theater> in <Location>’, but the correct type for Bob Dylan in this context should be Artist.
[1, 2, 2, 2, 2, 1, 2]
['Table 2 shows the performances of template generator based on coarse-grained and fine-grained type respectively, and Figure 5 shows an example of the template generated.', 'Coarse templates are the ones after we replace names with these coarse-grained types.', 'Entity Linking classifies names into more fine-grained types, so the corresponding templates are fine templates.', 'The generalization method of replacing the named entities with entity types can reduce the vocabulary size significantly, which reduces the impact of sparse named entities in training data.', 'The template generation achieves close performance with state-of-the-art generic image captioning on MSCOCO dataset (Xu et al., 2015).', 'The template generator based on coarse-grained entity type outperforms the one based on fine-grained entity type for two reasons: (1) fine template relies on EDL, and incorrect linkings introduce noise; (2) named entities usually have multiple types, but we only choose one during generalization.', 'Thus the caption, ‘Bob Dylan performs at the Wiltern Theatre in Los Angeles’, is generalized into ‘<Writer> performs at the <Theater> in <Location>’, but the correct type for Bob Dylan in this context should be Artist.']
[None, ['Coarse Template'], ['Fine Template'], ['Vocabulary Size'], None, ['Coarse Template', 'Fine Template'], None]
1
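The record above describes generating caption templates by replacing named-entity mentions with type placeholders. A minimal sketch of that replacement step, assuming a hypothetical pre-extracted list of (mention, type) pairs in place of a real name tagger or entity-linking (EDL) system; the coarse type labels below are illustrative, not the paper's exact inventory.

```python
# Sketch of caption templatization: swap entity mentions for <Type> placeholders.
# The (mention, type) pairs are hypothetical stand-ins for name-tagger / EDL output.

def make_template(caption, entities):
    """Replace each entity mention in the caption with its <Type> placeholder."""
    template = caption
    for mention, etype in entities:
        template = template.replace(mention, f"<{etype}>")
    return template

caption = "Bob Dylan performs at the Wiltern Theatre in Los Angeles"
coarse = [("Bob Dylan", "Person"), ("Wiltern Theatre", "Organization"), ("Los Angeles", "Location")]
fine = [("Bob Dylan", "Writer"), ("Wiltern Theatre", "Theater"), ("Los Angeles", "Location")]

print(make_template(caption, coarse))  # <Person> performs at the <Organization> in <Location>
print(make_template(caption, fine))    # <Writer> performs at the <Theater> in <Location>
```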
D18-1438table_4
Comparison results using Rouge recall at 75 bytes without OOV replacement. HNNattTI-3-OOV is the version of HNNattTI-3 without the OOV replacement mechanism.
2
[['Method', 'HNNattTI-3-OOV'], ['Method', 'HNNattTC-3-OOV'], ['Method', 'HNNTattTIC-3-OOV'], ['Method', 'HNNattT-3-OOV']]
1
[['Rouge-1'], ['Rouge-2'], ['Rouge-L']]
[['24.03', '8.2', '16.52'], ['18.18', '6.53', '12.87'], ['20.50', '7.67', '14.36'], ['21.60', '7.82', '15.05']]
column
['Rouge-1', 'Rouge-2', 'Rouge-L']
['HNNattTI-3-OOV', 'HNNattTC-3-OOV', 'HNNTattTIC-3-OOV', 'HNNattT-3-OOV']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rouge-1</th> <th>Rouge-2</th> <th>Rouge-L</th> </tr> </thead> <tbody> <tr> <td>Method || HNNattTI-3-OOV</td> <td>24.03</td> <td>8.2</td> <td>16.52</td> </tr> <tr> <td>Method || HNNattTC-3-OOV</td> <td>18.18</td> <td>6.53</td> <td>12.87</td> </tr> <tr> <td>Method || HNNTattTIC-3-OOV</td> <td>20.50</td> <td>7.67</td> <td>14.36</td> </tr> <tr> <td>Method || HNNattT-3-OOV</td> <td>21.60</td> <td>7.82</td> <td>15.05</td> </tr> </tbody></table>
Table 4
table_4
D18-1438
8
emnlp2018
We use the 1-image and 2-image randomly selected image summaries as the baselines which we compare our models with. The top 1 or 2 images ranked by our model are selected out to form the summaries. Results in Table 4 show that HNNattTI outperforms the random baseline, while HNNattTC and HNNattTIC perform worse. This implies that attending images can generate better sentence-image alignment in the multimodal summaries than the model attending captions does. And this can also partly explain why our summarization model attending images performs better when decoding text summaries than the one attending captions does. To show the influence of our OOV replacement mechanism, we eliminate the mechanism from our models, and show the evaluation results in Table 4 and Table 5. We can see from the two tables that the scores are lower than the corresponding scores in Table 2 and Table 3. Our OOV replacement mechanism improves the summarization models, though the mechanism is relatively simple.
[2, 2, 1, 2, 2, 1, 1, 2]
['We use the 1-image and 2-image randomly selected image summaries as the baselines which we compare our models with.', 'The top 1 or 2 images ranked by our model are selected out to form the summaries.', 'Results in Table 4 show that HNNattTI outperforms the random baseline, while HNNattTC and HNNattTIC perform worse.', 'This implies that attending images can generate better sentence-image alignment in the multimodal summaries than the model attending captions does.', 'And this can also partly explain why our summarization model attending images performs better when decoding text summaries than the one attending captions does.', 'To show the influence of our OOV replacement mechanism, we eliminate the mechanism from our models, and show the evaluation results in Table 4 and Table 5.', 'We can see from the two tables that the scores are lower than the corresponding scores in Table 2 and Table 3.', 'Our OOV replacement mechanism improves the summarization models, though the mechanism is relatively simple.']
[None, None, ['HNNattTI-3-OOV', 'HNNattTC-3-OOV', 'HNNTattTIC-3-OOV'], None, None, ['HNNattTI-3-OOV', 'HNNattTC-3-OOV', 'HNNTattTIC-3-OOV', 'HNNattT-3-OOV'], ['HNNattTI-3-OOV', 'HNNattTC-3-OOV', 'HNNTattTIC-3-OOV', 'HNNattT-3-OOV'], None]
1
D18-1442table_1
Comparison with other baselines on DailyMail test dataset using Rouge recall score with respect to the abstractive ground truth at 75 bytes and at 275 bytes.
2
[['DailyMail', 'Lead-3'], ['DailyMail', 'LReg(500)'], ['DailyMail', 'Cheng et.al 16'], ['DailyMail', 'SummaRuNNer'], ['DailyMail', 'REFRESH'], ['DailyMail', 'Hybrid MemNet'], ['DailyMail', 'ITS']]
2
[['b75', 'Rouge-1'], ['b75', 'Rouge-2'], ['b75', 'Rouge-L'], ['b275', 'Rouge-1'], ['b275', 'Rouge-2'], ['b275', 'Rouge-L']]
[['21.9', '7.2', '11.6', '40.5', '14.9', '32.6'], ['18.5', '6.9', '10.2', '-', '-', '-'], ['22.7', '8.5', '12.5', '42.2', '17.3', '34.8'], ['26.2', '10.8', '14.4', '42', '16.9', '34.1'], ['24.1', '11.5', '12.5', '40.3', '15.1', '32.9'], ['26.3', '11.2', '15.5', '41.4', '16.7', '33.2'], ['27.4', '11.9', '16.1', '42.4', '17.4', '34.1']]
column
['Rouge-1', 'Rouge-2', 'Rouge-L', 'Rouge-1', 'Rouge-2', 'Rouge-L']
['ITS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>b75 || Rouge-1</th> <th>b75 || Rouge-2</th> <th>b75 || Rouge-L</th> <th>b275 || Rouge-1</th> <th>b275 || Rouge-2</th> <th>b275 || Rouge-L</th> </tr> </thead> <tbody> <tr> <td>DailyMail || Lead-3</td> <td>21.9</td> <td>7.2</td> <td>11.6</td> <td>40.5</td> <td>14.9</td> <td>32.6</td> </tr> <tr> <td>DailyMail || LReg(500)</td> <td>18.5</td> <td>6.9</td> <td>10.2</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>DailyMail || Cheng et.al 16</td> <td>22.7</td> <td>8.5</td> <td>12.5</td> <td>42.2</td> <td>17.3</td> <td>34.8</td> </tr> <tr> <td>DailyMail || SummaRuNNer</td> <td>26.2</td> <td>10.8</td> <td>14.4</td> <td>42</td> <td>16.9</td> <td>34.1</td> </tr> <tr> <td>DailyMail || REFRESH</td> <td>24.1</td> <td>11.5</td> <td>12.5</td> <td>40.3</td> <td>15.1</td> <td>32.9</td> </tr> <tr> <td>DailyMail || Hybrid MemNet</td> <td>26.3</td> <td>11.2</td> <td>15.5</td> <td>41.4</td> <td>16.7</td> <td>33.2</td> </tr> <tr> <td>DailyMail || ITS</td> <td>27.4</td> <td>11.9</td> <td>16.1</td> <td>42.4</td> <td>17.4</td> <td>34.1</td> </tr> </tbody></table>
Table 1
table_1
D18-1442
7
emnlp2018
6 Experiment analysis. Table 1 shows the performance comparison of our model with other baselines on the DailyMail dataset with respect to Rouge score at 75 bytes and 275 bytes of summary length. Our model performs consistently and significantly better than other models on 75 bytes, while on 275 bytes, the improvement margin is smaller. One possible interpretation is that our model has high precision on top rank outputs, but the accuracy is lower for lower rank sentences. In addition, (Cheng and Lapata, 2016) used additional supervised training to create sentence-level extractive labels to train their model, while our model uses an unsupervised greedy approximation instead.
[2, 1, 1, 2, 2]
['6 Experiment analysis.', 'Table 1 shows the performance comparison of our model with other baselines on the DailyMail dataset with respect to Rouge score at 75 bytes and 275 bytes of summary length.', 'Our model performs consistently and significantly better than other models on 75 bytes, while on 275 bytes, the improvement margin is smaller.', 'One possible interpretation is that our model has high precision on top rank outputs, but the accuracy is lower for lower rank sentences.', 'In addition, (Cheng and Lapata, 2016) used additional supervised training to create sentence-level extractive labels to train their model, while our model uses an unsupervised greedy approximation instead.']
[None, ['Lead-3', 'LReg(500)', 'Cheng et.al 16', 'SummaRuNNer', 'REFRESH', 'Hybrid MemNet', 'ITS', 'b75', 'b275'], ['ITS', 'b75', 'b275'], ['ITS'], ['ITS']]
1
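The caption above reports ROUGE recall of summaries truncated to 75 or 275 bytes. A minimal sketch of that evaluation setup, assuming the standard recall definition and a toy example; published numbers come from the official ROUGE toolkit, not this code.

```python
# Sketch: ROUGE-1 recall of a system summary truncated to a byte budget.
from collections import Counter

def truncate_bytes(text, limit):
    """Keep the longest prefix of the summary that fits within `limit` bytes of UTF-8."""
    return text.encode("utf-8")[:limit].decode("utf-8", errors="ignore")

def rouge1_recall(system, reference):
    """Clipped unigram overlap divided by the number of reference unigrams."""
    sys_counts = Counter(system.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum(min(count, sys_counts[word]) for word, count in ref_counts.items())
    return overlap / max(sum(ref_counts.values()), 1)

reference = "the mayor announced a new transit plan for the city on monday"
system = "on monday the mayor announced a new plan to expand transit across the city"
for budget in (75, 275):
    print(budget, round(rouge1_recall(truncate_bytes(system, budget), reference), 3))
```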
D18-1442table_5
System ranking comparison with other baselines on DailyMail corpus. Rank 1 is the best and Rank 4 is the worst. Each score represents the percentage of the summary under this rank.
2
[['Models', 'Lead-3'], ['Models', 'Hybrid MemNet'], ['Models', 'ITS'], ['Models', 'Gold']]
1
[['1st'], ['2nd'], ['3rd'], ['4th']]
[['0.12', '0.11', '0.25', '0.52'], ['0.24', '0.25', '0.28', '0.23'], ['0.31', '0.34', '0.23', '0.12'], ['0.33', '0.30', '0.24', '0.13']]
column
['percentage', 'percentage', 'percentage', 'percentage']
['ITS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>1st</th> <th>2nd</th> <th>3rd</th> <th>4th</th> </tr> </thead> <tbody> <tr> <td>Models || Lead-3</td> <td>0.12</td> <td>0.11</td> <td>0.25</td> <td>0.52</td> </tr> <tr> <td>Models || Hybrid MemNet</td> <td>0.24</td> <td>0.25</td> <td>0.28</td> <td>0.23</td> </tr> <tr> <td>Models || ITS</td> <td>0.31</td> <td>0.34</td> <td>0.23</td> <td>0.12</td> </tr> <tr> <td>Models || Gold</td> <td>0.33</td> <td>0.30</td> <td>0.24</td> <td>0.13</td> </tr> </tbody></table>
Table 5
table_5
D18-1442
8
emnlp2018
Human Evaluation:. We gave human evaluators three system-generated summaries, generated by Lead-3, Hybrid MemNet, ITS, as well as the human-written gold standard summary, and asked them to rank these summaries based on summary informativeness and coherence. Table 5 shows the percentages of summaries of different models under each rank scored by human experts. It is not surprising that gold standard has the most summaries of the highest quality. Our model has the most summaries under 2nd rank, thus can be considered 2nd best, following are Hybrid MemNet and Lead-3, as they are ranked mostly 3rd and 4th. By case study, we found that a number of summaries generated by Hybrid MemNet have two sentences the same as ITS out of three, however, the third distinct sentence from our model always leads to a better evaluation result considering overall informativeness and coherence.
[2, 2, 1, 1, 1, 2]
['Human Evaluation:.', 'We gave human evaluators three system-generated summaries, generated by Lead-3, Hybrid MemNet, ITS, as well as the human-written gold standard summary, and asked them to rank these summaries based on summary informativeness and coherence.', 'Table 5 shows the percentages of summaries of different models under each rank scored by human experts.', 'It is not surprising that gold standard has the most summaries of the highest quality.', 'Our model has the most summaries under 2nd rank, thus can be considered 2nd best, following are Hybrid MemNet and Lead-3, as they are ranked mostly 3rd and 4th.', 'By case study, we found that a number of summaries generated by Hybrid MemNet have two sentences the same as ITS out of three, however, the third distinct sentence from our model always leads to a better evaluation result considering overall informativeness and coherence.']
[None, ['Lead-3', 'Hybrid MemNet', 'ITS', 'Gold'], ['Lead-3', 'Hybrid MemNet', 'ITS', 'Gold'], ['Gold', '1st'], ['ITS', '2nd', 'Lead-3', 'Hybrid MemNet', '3rd', '4th'], ['Hybrid MemNet', 'ITS']]
1
D18-1443table_2
Results on the NYT corpus, where we compare to RL trained models. * marks models and results by Paulus et al. (2017), and † results by Celikyilmaz et al. (2018).
2
[['Method', 'ML*'], ['Method', 'ML+RL*'], ['Method', 'DCA†'], ['Method', 'Point.Gen. + Coverage Pen.'], ['Method', 'Bottom-Up Summarization']]
1
[['R-1'], ['R-2'], ['R-L']]
[['44.26', '27.43', '40.41'], ['47.03', '30.72', '43.10'], ['48.08', '31.19', '42.33'], ['45.13', '30.13', '39.67'], ['47.38', '31.23', '41.81']]
column
['R-1', 'R-2', 'R-L']
['Bottom-Up Summarization']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-L</th> </tr> </thead> <tbody> <tr> <td>Method || ML*</td> <td>44.26</td> <td>27.43</td> <td>40.41</td> </tr> <tr> <td>Method || ML+RL*</td> <td>47.03</td> <td>30.72</td> <td>43.10</td> </tr> <tr> <td>Method || DCA†</td> <td>48.08</td> <td>31.19</td> <td>42.33</td> </tr> <tr> <td>Method || Point.Gen. + Coverage Pen.</td> <td>45.13</td> <td>30.13</td> <td>39.67</td> </tr> <tr> <td>Method || Bottom-Up Summarization</td> <td>47.38</td> <td>31.23</td> <td>41.81</td> </tr> </tbody></table>
Table 2
table_2
D18-1443
7
emnlp2018
Table 2 shows experiments with the same systems on the NYT corpus. We see that the 2 point improvement compared to the baseline Pointer-Generator maximum-likelihood approach carries over to this dataset. Here, the model outperforms the RL based model by Paulus et al. (2017) in ROUGE-1 and 2, but not L, and is comparable to the results of (Celikyilmaz et al., 2018) except for ROUGE-L. The same can be observed when comparing ML and our Pointer-Generator. We suspect that a difference in summary lengths due to our inference parameter choices leads to this difference, but did not have access to their models or summaries to investigate this claim. This shows that a bottom-up approach achieves competitive results even to models that are trained on summary-specific objectives.
[1, 1, 1, 1, 2, 2]
['Table 2 shows experiments with the same systems on the NYT corpus.', 'We see that the 2 point improvement compared to the baseline Pointer-Generator maximum-likelihood approach carries over to this dataset.', 'Here, the model outperforms the RL based model by Paulus et al. (2017) in ROUGE-1 and 2, but not L, and is comparable to the results of (Celikyilmaz et al., 2018) except for ROUGE-L.', 'The same can be observed when comparing ML and our Pointer-Generator.', 'We suspect that a difference in summary lengths due to our inference parameter choices leads to this difference, but did not have access to their models or summaries to investigate this claim.', 'This shows that a bottom-up approach achieves competitive results even to models that are trained on summary-specific objectives.']
[None, ['Bottom-Up Summarization', 'Point.Gen. + Coverage Pen.'], ['Bottom-Up Summarization', 'ML*', 'ML+RL*', 'DCA†', 'R-1', 'R-2', 'R-L'], ['Point.Gen. + Coverage Pen.', 'ML*', 'R-1', 'R-2', 'R-L'], None, ['Bottom-Up Summarization']]
1
D18-1445table_3
Upper-bound performance comparison. Results are averaged over all clusters in DUC’04.
2
[['Method', 'TD(λ)'], ['Method', 'LSTD(λ)'], ['Method', 'ILP']]
1
[['R1'], ['R2'], ['RL'], ['RSU4']]
[['.484', '.184', '.388', '.199'], ['.458', '.159', '.366', '.185'], ['.470', '.212', 'N/A', '.185']]
column
['R1', 'R2', 'RL', 'RSU4']
['TD(λ)', 'LSTD(λ)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R1</th> <th>R2</th> <th>RL</th> <th>RSU4</th> </tr> </thead> <tbody> <tr> <td>Method || TD(λ)</td> <td>.484</td> <td>.184</td> <td>.388</td> <td>.199</td> </tr> <tr> <td>Method || LSTD(λ)</td> <td>.458</td> <td>.159</td> <td>.366</td> <td>.185</td> </tr> <tr> <td>Method || ILP</td> <td>.470</td> <td>.212</td> <td>N/A</td> <td>.185</td> </tr> </tbody></table>
Table 3
table_3
D18-1445
7
emnlp2018
Table 3 shows the performance of RL and ILP on the DUC’04 dataset. TD(λ) significantly outperforms LSTD(λ) in terms of all ROUGE scores we consider. Although the least-square RL algorithms (which LSTD belongs to) have been proved to achieve better performance than standard TD methods in large-scale problems (see Lagoudakis and Parr, 2003), their performance is sensitive to many factors, e.g., initialisation values in the diagonal matrix, regularisation parameters, etc. We note that a similar observation about the inferior performance of least-square RL in EMDS is reported by Rioux et al. (2014). TD(λ) also significantly outperforms ILP in terms of all metrics except ROUGE-2. This is not surprising, because the bigram-based ILP is optimised for ROUGE-2, whereas our reward function U∗ considers other metrics as well (see Eq. (6)). Since ILP is widely used as a strong baseline for EMDS, these results confirm the advantage of using RL for EMDS problems.
[1, 1, 2, 2, 1, 2, 2]
['Table 3 shows the performance of RL and ILP on the DUC’04 dataset.', 'TD(λ) significantly outperforms LSTD(λ) in terms of all ROUGE scores we consider.', 'Although the least-square RL algorithms (which LSTD belongs to) have been proved to achieve better performance than standard TD methods in large-scale problems (see Lagoudakis and Parr, 2003), their performance is sensitive to many factors, e.g., initialisation values in the diagonal matrix, regularisation parameters, etc.', 'We note that a similar observation about the inferior performance of least-square RL in EMDS is reported by Rioux et al. (2014).', 'TD(λ) also significantly outperforms ILP in terms of all metrics except ROUGE-2.', 'This is not surprising, because the bigram-based ILP is optimised for ROUGE-2, whereas our reward function U∗ considers other metrics as well (see Eq. (6)).', 'Since ILP is widely used as a strong baseline for EMDS, these results confirm the advantage of using RL for EMDS problems.']
[['TD(λ)', 'LSTD(λ)', 'ILP'], ['TD(λ)', 'LSTD(λ)', 'R1', 'R2', 'RL', 'RSU4'], ['TD(λ)', 'LSTD(λ)'], ['LSTD(λ)'], ['TD(λ)', 'ILP', 'R1', 'RL', 'RSU4'], ['ILP', 'R2'], ['ILP', 'TD(λ)', 'LSTD(λ)']]
1
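The record above compares TD(λ) and LSTD(λ) for reward estimation. As a reference point, here is a generic textbook sketch of TD(λ) with linear function approximation and accumulating eligibility traces; it is not the paper's summarization-specific setup (the state features, ROUGE-based reward U*, and hyperparameters there are different).

```python
# Generic TD(lambda) value estimation with linear function approximation.
import numpy as np

def td_lambda(episodes, n_features, alpha=0.01, gamma=1.0, lam=0.9):
    """episodes: trajectories of (feature_vector, reward received on leaving that state)."""
    w = np.zeros(n_features)
    for episode in episodes:
        z = np.zeros(n_features)                              # eligibility trace
        for t, (x_t, r) in enumerate(episode):
            x_next = episode[t + 1][0] if t + 1 < len(episode) else np.zeros(n_features)
            delta = r + gamma * w.dot(x_next) - w.dot(x_t)    # TD error
            z = gamma * lam * z + x_t                         # accumulate trace
            w = w + alpha * delta * z                         # weight update
    return w

# Toy usage: two-step episodes with random binary features and a terminal reward of 1.
rng = np.random.default_rng(0)
episodes = [[(rng.integers(0, 2, 8).astype(float), 0.0),
             (rng.integers(0, 2, 8).astype(float), 1.0)] for _ in range(50)]
print(td_lambda(episodes, n_features=8)[:4])
```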
D18-1447table_4
Results of keyphrase generation for news from DUC dataset with F1. Results of unsupervised learning methods are adopted from Hasan and Ng (2010).
3
[['Model', 'Our Models', 'SEQ2SEQ'], ['Model', 'Our Models', 'SYN.UNSUPER.'], ['Model', 'Our Models', 'SYN.SELF-LEARN.'], ['Model', 'Our Models', 'MULTI-TASK'], ['Model', 'Unsupervised', 'TF-IDF'], ['Model', 'Unsupervised', 'TEXTRANK'], ['Model', 'Unsupervised', 'SINGLERANK'], ['Model', 'Unsupervised', 'EXPANDRANK']]
1
[['F1']]
[['0.056'], ['0.083'], ['0.065'], ['0.109'], ['0.270'], ['0.097'], ['0.256'], ['0.269']]
column
['F1']
['Our Models']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || Our Models || SEQ2SEQ</td> <td>0.056</td> </tr> <tr> <td>Model || Our Models || SYN.UNSUPER.</td> <td>0.083</td> </tr> <tr> <td>Model || Our Models || SYN.SELF-LEARN.</td> <td>0.065</td> </tr> <tr> <td>Model || Our Models || MULTI-TASK</td> <td>0.109</td> </tr> <tr> <td>Model || Unsupervised || TF-IDF</td> <td>0.270</td> </tr> <tr> <td>Model || Unsupervised || TEXTRANK</td> <td>0.097</td> </tr> <tr> <td>Model || Unsupervised || SINGLERANK</td> <td>0.256</td> </tr> <tr> <td>Model || Unsupervised || EXPANDRANK</td> <td>0.269</td> </tr> </tbody></table>
Table 4
table_4
D18-1447
8
emnlp2018
The experimental results are shown in Table 4 which indicate that: 1) though trained on scientific papers, our models still have the ability to generate keyphrases for news articles, illustrating that our models have learned some universal features between the two domains; and 2) semi-supervised learning by leveraging unlabeled data improves the generation performances more, indicating that our proposed method is reasonably effective when being tested on cross-domain data. Though unsupervised methods are still superior, for future work, we can leverage unlabeled out-of-domain corpora to improve cross-domain keyphrase generation performance, which could be a promising direction for domain adaptation or transfer learning.
[1, 1]
['The experimental results are shown in Table 4 which indicate that: 1) though trained on scientific papers, our models still have the ability to generate keyphrases for news articles, illustrating that our models have learned some universal features between the two domains; and 2) semi-supervised learning by leveraging unlabeled data improves the generation performances more, indicating that our proposed method is reasonably effective when being tested on cross-domain data.', 'Though unsupervised methods are still superior, for future work, we can leverage unlabeled out-of-domain corpora to improve cross-domain keyphrase generation performance, which could be a promising direction for domain adaptation or transfer learning.']
[['Our Models'], ['Unsupervised', 'F1']]
1
D18-1454table_2
Results across different metrics on the test set of NarrativeQA-summaries task. † indicates span prediction models trained on the Rouge-L retrieval oracle.
2
[['Model', 'Seq2Seq (Kocisky et al. 2018)'], ['Model', 'ASR (Kocisky et al. 2018)'], ['Model', 'BiDAF (Kocisky et al. 2018)'], ['Model', 'BiAttn + MRU-LSTM (Tay et al. 2018)'], ['Model', 'MHPGM'], ['Model', 'MHPGM+ NOIC']]
1
[['BLEU-1'], ['BLEU-4'], ['METEOR'], ['Rouge-L'], ['CIDEr']]
[['15.89', '1.26', '4.08', '13.15', '-'], ['23.20', '6.39', '7.77', '22.26', '-'], ['33.72', '15.53', '15.38', '36.30', '-'], ['36.55', '19.79', '17.87', '41.44', '-'], ['40.24', '17.40', '17.33', '41.49', '139.23'], ['43.63', '21.07', '19.03', '44.16', '152.98']]
column
['BLEU-1', 'BLEU-4', 'METEOR', 'Rouge-L', 'CIDEr']
['MHPGM', 'MHPGM+ NOIC']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1</th> <th>BLEU-4</th> <th>METEOR</th> <th>Rouge-L</th> <th>CIDEr</th> </tr> </thead> <tbody> <tr> <td>Model || Seq2Seq (Kocisky et al. 2018)</td> <td>15.89</td> <td>1.26</td> <td>4.08</td> <td>13.15</td> <td>-</td> </tr> <tr> <td>Model || ASR (Kocisky et al. 2018)</td> <td>23.20</td> <td>6.39</td> <td>7.77</td> <td>22.26</td> <td>-</td> </tr> <tr> <td>Model || BiDAF (Kocisky et al. 2018)</td> <td>33.72</td> <td>15.53</td> <td>15.38</td> <td>36.30</td> <td>-</td> </tr> <tr> <td>Model || BiAttn + MRU-LSTM (Tay et al. 2018)</td> <td>36.55</td> <td>19.79</td> <td>17.87</td> <td>41.44</td> <td>-</td> </tr> <tr> <td>Model || MHPGM</td> <td>40.24</td> <td>17.40</td> <td>17.33</td> <td>41.49</td> <td>139.23</td> </tr> <tr> <td>Model || MHPGM+ NOIC</td> <td>43.63</td> <td>21.07</td> <td>19.03</td> <td>44.16</td> <td>152.98</td> </tr> </tbody></table>
Table 2
table_2
D18-1454
8
emnlp2018
5.1 Main Experiment. The results of our model on both NarrativeQA and WikiHop with and without commonsense incorporation are shown in Table 2 and Table 3. We see empirically that our model outperforms all generative models on NarrativeQA, and is competitive with the top span prediction models. Furthermore, with the NOIC commonsense integration, we were able to further improve performance (p < 0.001 on all metrics), establishing a new state-of-the-art for the task. We also see that our model performs well on WikiHop, and is further improved via the addition of commonsense (p < 0.001), demonstrating the generalizability of both our model and commonsense addition techniques.
[2, 1, 1, 1, 2]
['5.1 Main Experiment.', 'The results of our model on both NarrativeQA and WikiHop with and without commonsense incorporation are shown in Table 2 and Table 3.', 'We see empirically that our model outperforms all generative models on NarrativeQA, and is competitive with the top span prediction models.', 'Furthermore, with the NOIC commonsense integration, we were able to further improve performance (p < 0.001 on all metrics), establishing a new state-of-the-art for the task.', 'We also see that our model performs well on WikiHop, and is further improved via the addition of commonsense (p < 0.001), demonstrating the generalizability of both our model and commonsense addition techniques.']
[None, ['MHPGM', 'MHPGM+ NOIC'], ['MHPGM', 'Seq2Seq (Kocisky et al. 2018)', 'ASR (Kocisky et al. 2018)', 'BiDAF (Kocisky et al. 2018)', 'BiAttn + MRU-LSTM (Tay et al. 2018)'], ['MHPGM+ NOIC'], None]
1
D18-1462table_2
Automatic evaluations of the proposed model and the state-of-the-art models.
2
[['Models', 'EE-Seq2Seq'], ['Models', 'DE-Seq2Seq'], ['Models', 'GE-Seq2Seq'], ['Models', 'Proposed Model']]
1
[['BLEU']]
[['0.0029'], ['0.0027'], ['0.0022'], ['0.0042 (+44.8%)']]
column
['BLEU']
['Proposed Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Models || EE-Seq2Seq</td> <td>0.0029</td> </tr> <tr> <td>Models || DE-Seq2Seq</td> <td>0.0027</td> </tr> <tr> <td>Models || GE-Seq2Seq</td> <td>0.0022</td> </tr> <tr> <td>Models || Proposed Model</td> <td>0.0042 (+44.8%)</td> </tr> </tbody></table>
Table 2
table_2
D18-1462
6
emnlp2018
4.5 Experimental Results. Table 2 shows the results of automatic evaluation. The proposed model performs the best according to BLEU. In particular, the differences between the existing state-of-the-art models are within 0.07, while the proposed model supersedes the best of them by 0.13.
[2, 1, 1, 1]
['4.5 Experimental Results.', 'Table 2 shows the results of automatic evaluation.', 'The proposed model performs the best according to BLEU.', 'In particular, the differences between the existing state-of-the-art models are within 0.07, while the proposed model supersedes the best of them by 0.13.']
[None, None, ['Proposed Model', 'BLEU'], ['EE-Seq2Seq', 'DE-Seq2Seq', 'GE-Seq2Seq', 'Proposed Model', 'BLEU']]
1
D18-1462table_6
Human evaluations of the key components.
2
[['Models', 'Seq2Seq'], ['Models', '+Skeleton Extraction Module'], ['Models', '+Reinforcement Learning']]
1
[['Fluency'], ['Coherence'], ['G-Score']]
[['7.54', '4.98', '6.13'], ['7.26', '4.32', '5.60'], ['8.69', '5.62', '6.99']]
column
['Fluency', 'Coherence', 'G-Score']
['+Skeleton Extraction Module', '+Reinforcement Learning']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fluency</th> <th>Coherence</th> <th>G-Score</th> </tr> </thead> <tbody> <tr> <td>Models || Seq2Seq</td> <td>7.54</td> <td>4.98</td> <td>6.13</td> </tr> <tr> <td>Models || +Skeleton Extraction Module</td> <td>7.26</td> <td>4.32</td> <td>5.60</td> </tr> <tr> <td>Models || +Reinforcement Learning</td> <td>8.69</td> <td>5.62</td> <td>6.99</td> </tr> </tbody></table>
Table 6
table_6
D18-1462
8
emnlp2018
Table 6 shows the human evaluation results. The slight improvement in BLEU with the skeleton extraction module is reflected as decreases in both fluency and coherence. It suggests the necessity of human evaluation. The decreased results can be explained by the fact that the style of the dataset for pre-training the skeleton extraction module is very different from the narrative story dataset. While it may help extract some useful skeletons, it is likely that many of them are not suitable for learning the dependency of sentences. Finally, when the skeleton extraction module is trained on the target domain using reinforcement learning, the human evaluation is improved significantly by 14% on G-score.
[1, 1, 2, 2, 2, 1]
['Table 6 shows the human evaluation results.', 'The slight improvement in BLEU with the skeleton extraction module is reflected as decreases in both fluency and coherence.', 'It suggests the necessity of human evaluation.', 'The decreased results can be explained by the fact that the style of the dataset for pre-training the skeleton extraction module is very different from the narrative story dataset.', 'While it may help extract some useful skeletons, it is likely that many of them are not suitable for learning the dependency of sentences.', 'Finally, when the skeleton extraction module is trained on the target domain using reinforcement learning, the human evaluation is improved significantly by 14% on G-score.']
[None, ['Fluency', 'Coherence', 'G-Score', '+Skeleton Extraction Module'], ['+Skeleton Extraction Module'], ['+Skeleton Extraction Module'], None, ['G-Score', '+Reinforcement Learning']]
1
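The G-Score column in the record above is consistent with the geometric mean of the Fluency and Coherence ratings (an inference from the table values, not something stated in the record); the quick check below also reproduces the roughly 14% improvement mentioned in the description.

```python
# Check: G-Score appears to equal sqrt(Fluency * Coherence) for every row of Table 6.
import math

rows = {
    "Seq2Seq": (7.54, 4.98, 6.13),
    "+Skeleton Extraction Module": (7.26, 4.32, 5.60),
    "+Reinforcement Learning": (8.69, 5.62, 6.99),
}
for name, (fluency, coherence, reported) in rows.items():
    print(f"{name}: geometric mean {math.sqrt(fluency * coherence):.2f} vs reported {reported}")

# Relative gain of the RL variant over Seq2Seq, matching the ~14% quoted in the description.
print(f"{(6.99 / 6.13 - 1) * 100:.1f}%")
```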
D18-1463table_1
Results of embedding-based metrics. * indicates statistically significant difference (p < 0.05) from the best baselines. The same mark is used in Table 2
2
[['Model', 'Greedy'], ['Model', 'Beam'], ['Model', 'MMI'], ['Model', 'RL'], ['Model', 'VHRED'], ['Model', 'NEXUS-H'], ['Model', 'NEXUS-F'], ['Model', 'NEXUS']]
2
[['DailyDialog', 'Average'], ['DailyDialog', 'Greedy'], ['DailyDialog', 'Extreme'], ['Twitter', 'Average'], ['Twitter', 'Greedy'], ['Twitter', 'Extreme']]
[['0.443', '0.376', '0.328', '0.510', '0.341', '0.356'], ['0.437', '0.350', '0.369', '0.505', '0.345', '0.352'], ['0.457', '0.371', '0.371', '0.518', '0.353', '0.365'], ['0.405', '0.329', '0.305', '0.460', '0.349', '0.323'], ['0.491', '0.375', '0.313', '0.525', '0.389', '0.372'], ['0.479', '0.381', '0.385', '0.558', '0.392', '0.373'], ['0.476', '0.383', '0.373', '0.549', '0.393', '0.386'], ['0.488', '0.392', '0.384', '0.556', '0.397', '0.391']]
column
['Average', 'Greedy', 'Extreme', 'Average', 'Greedy', 'Extreme']
['NEXUS-H', 'NEXUS-F', 'NEXUS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DailyDialog || Average</th> <th>DailyDialog || Greedy</th> <th>DailyDialog || Extreme</th> <th>Twitter || Average</th> <th>Twitter || Greedy</th> <th>Twitter || Extreme</th> </tr> </thead> <tbody> <tr> <td>Model || Greedy</td> <td>0.443</td> <td>0.376</td> <td>0.328</td> <td>0.510</td> <td>0.341</td> <td>0.356</td> </tr> <tr> <td>Model || Beam</td> <td>0.437</td> <td>0.350</td> <td>0.369</td> <td>0.505</td> <td>0.345</td> <td>0.352</td> </tr> <tr> <td>Model || MMI</td> <td>0.457</td> <td>0.371</td> <td>0.371</td> <td>0.518</td> <td>0.353</td> <td>0.365</td> </tr> <tr> <td>Model || RL</td> <td>0.405</td> <td>0.329</td> <td>0.305</td> <td>0.460</td> <td>0.349</td> <td>0.323</td> </tr> <tr> <td>Model || VHRED</td> <td>0.491</td> <td>0.375</td> <td>0.313</td> <td>0.525</td> <td>0.389</td> <td>0.372</td> </tr> <tr> <td>Model || NEXUS-H</td> <td>0.479</td> <td>0.381</td> <td>0.385</td> <td>0.558</td> <td>0.392</td> <td>0.373</td> </tr> <tr> <td>Model || NEXUS-F</td> <td>0.476</td> <td>0.383</td> <td>0.373</td> <td>0.549</td> <td>0.393</td> <td>0.386</td> </tr> <tr> <td>Model || NEXUS</td> <td>0.488</td> <td>0.392</td> <td>0.384</td> <td>0.556</td> <td>0.397</td> <td>0.391</td> </tr> </tbody></table>
Table 1
table_1
D18-1463
6
emnlp2018
Table 1 reports the embedding scores on both datasets. NEXUS network significantly outperforms the best baseline model in most cases. Notably, NEXUS can absorb the advantages from both NEXUS-H and NEXUS-F. The history and future information seem to help the model from different perspectives. Taking into account both of them does not create a conflict and the combination leads to an overall improvement. RL performs rather poorly on this metric, which is understandable as it does not target the ground-truth responses during training (Li et al., 2016c).
[1, 1, 2, 2, 2, 2]
['Table 1 reports the embedding scores on both datasets.', 'NEXUS network significantly outperforms the best baseline model in most cases.', 'Notably, NEXUS can absorb the advantages from both NEXUS-H and NEXUS-F.', 'The history and future information seem to help the model from different perspectives.', 'Taking into account both of them does not create a conflict and the combination leads to an overall improvement.', 'RL performs rather poorly on this metric, which is understandable as it does not target the ground-truth responses during training (Li et al., 2016c).']
[['DailyDialog', 'Twitter'], ['NEXUS'], ['NEXUS-H', 'NEXUS-F', 'NEXUS'], None, ['NEXUS'], None]
1
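The Average / Greedy / Extreme columns in the record above are the usual embedding-based response similarity metrics. Below is a minimal sketch of their common formulations (embedding average, greedy matching, vector extrema); the word vectors are random placeholders rather than real pretrained embeddings, and the exact implementation behind the reported numbers may differ.

```python
# Standard embedding-based metrics between a response and a reference, each given as a
# (num_tokens, dim) matrix of word vectors (random placeholders here).
import numpy as np

def cosine(a, b):
    return float(a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def embedding_average(resp, ref):
    return cosine(resp.mean(axis=0), ref.mean(axis=0))

def greedy_matching(resp, ref):
    sims = np.array([[cosine(r, g) for g in ref] for r in resp])
    return 0.5 * (sims.max(axis=1).mean() + sims.max(axis=0).mean())

def vector_extrema(resp, ref):
    def extrema(mat):
        idx = np.abs(mat).argmax(axis=0)          # per dimension, keep the most extreme value
        return mat[idx, np.arange(mat.shape[1])]
    return cosine(extrema(resp), extrema(ref))

rng = np.random.default_rng(0)
resp, ref = rng.normal(size=(6, 50)), rng.normal(size=(8, 50))
print(embedding_average(resp, ref), greedy_matching(resp, ref), vector_extrema(resp, ref))
```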
D18-1463table_2
Results of BLEU score. It is computed based on the smooth BLEU algorithm (Lin and Och, 2004). p-value interval is computed base on the altered bootstrap resampling algorithm (Riezler and Maxwell, 2005)
2
[['Model', 'Greedy'], ['Model', 'Beam'], ['Model', 'MMI'], ['Model', 'RL'], ['Model', 'VHRED'], ['Model', 'NEXUS-H'], ['Model', 'NEXUS-F'], ['Model', 'NEXUS']]
2
[['DailyDialog', 'BLEU-1'], ['DailyDialog', 'BLEU-2'], ['DailyDialog', 'BLEU-3'], ['Twitter', 'BLEU-1'], ['Twitter', 'BLEU-2'], ['Twitter', 'BLEU-3']]
[['0.394', '0.245', '0.157', '0.340', '0.203', '0.116'], ['0.386', '0.251', '0.163', '0.338', '0.205', '0.112'], ['0.407', '0.269', '0.172', '0.347', '0.208', '0.118'], ['0.298', '0.186', '0.075', '0.314', '0.199', '0.103'], ['0.395', '0.281', '0.190', '0.355', '0.211', '0.124'], ['0.418', '0.279', '0.199', '0.366', '0.212', '0.126'], ['0.399', '0.260', '0.167', '0.359', '0.213', '0.123'], ['0.424', '0.276', '0.198', '0.363', '0.220', '0.131']]
column
['BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-1', 'BLEU-2', 'BLEU-3']
['NEXUS-H', 'NEXUS-F', 'NEXUS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DailyDialog || BLEU-1</th> <th>DailyDialog || BLEU-2</th> <th>DailyDialog || BLEU-3</th> <th>Twitter || BLEU-1</th> <th>Twitter || BLEU-2</th> <th>Twitter || BLEU-3</th> </tr> </thead> <tbody> <tr> <td>Model || Greedy</td> <td>0.394</td> <td>0.245</td> <td>0.157</td> <td>0.340</td> <td>0.203</td> <td>0.116</td> </tr> <tr> <td>Model || Beam</td> <td>0.386</td> <td>0.251</td> <td>0.163</td> <td>0.338</td> <td>0.205</td> <td>0.112</td> </tr> <tr> <td>Model || MMI</td> <td>0.407</td> <td>0.269</td> <td>0.172</td> <td>0.347</td> <td>0.208</td> <td>0.118</td> </tr> <tr> <td>Model || RL</td> <td>0.298</td> <td>0.186</td> <td>0.075</td> <td>0.314</td> <td>0.199</td> <td>0.103</td> </tr> <tr> <td>Model || VHRED</td> <td>0.395</td> <td>0.281</td> <td>0.190</td> <td>0.355</td> <td>0.211</td> <td>0.124</td> </tr> <tr> <td>Model || NEXUS-H</td> <td>0.418</td> <td>0.279</td> <td>0.199</td> <td>0.366</td> <td>0.212</td> <td>0.126</td> </tr> <tr> <td>Model || NEXUS-F</td> <td>0.399</td> <td>0.260</td> <td>0.167</td> <td>0.359</td> <td>0.213</td> <td>0.123</td> </tr> <tr> <td>Model || NEXUS</td> <td>0.424</td> <td>0.276</td> <td>0.198</td> <td>0.363</td> <td>0.220</td> <td>0.131</td> </tr> </tbody></table>
Table 2
table_2
D18-1463
7
emnlp2018
BLEU Score. BLEU is a popular metric that measures the geometric mean of the modified ngram precision with a length penalty (Papineni et al., 2002). Table 2 reports the BLEU 1-3 scores. Compared with embedding-based metrics, the BLEU score quantifies the word-overlap between generated responses and the ground-truth. One challenge of evaluating dialogue generation by BLEU score is the difficulty of accessing multiple references for the one-to-many alignment relation. Following Sordoni et al. (2015); Zhao et al. (2017); Shen et al. (2018), for each context, 10 more candidate references are acquired by using information retrieval methods (see Appendix A.4 for more details). All candidates are then passed to human annotators to filter unsuitable ones, resulting in 6.74 and 5.13 references for DailyDialog and Twitter dataset respectively. The human annotation is costly, so we evaluate it on 1000 sampled test cases for each dataset. As the BLEU score is not the simple mean of individual sentence scores, we compute the 95% significance interval by bootstrap resampling (Koehn, 2004; Riezler and Maxwell, 2005). As can be seen, NEXUS network achieves best or near-best performances with only greedy decoders. NEXUS-H generally outperforms NEXUS-F as the connection with future context is not explicitly addressed by the BLEU score metric. MMI and VHRED bring minor improvements over the seq2seq model. Even when evaluated on multiple references, RL still performs worse than most models.
[2, 2, 1, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1]
['BLEU Score.', 'BLEU is a popular metric that measures the geometric mean of the modified ngram precision with a length penalty (Papineni et al., 2002).', 'Table 2 reports the BLEU 1-3 scores.', 'Compared with embedding-based metrics, the BLEU score quantifies the word-overlap between generated responses and the ground-truth.', 'One challenge of evaluating dialogue generation by BLEU score is the difficulty of accessing multiple references for the one-to-many alignment relation.', 'Following Sordoni et al. (2015); Zhao et al. (2017); Shen et al. (2018), for each context, 10 more candidate references are acquired by using information retrieval methods (see Appendix A.4 for more details).', 'All candidates are then passed to human annotators to filter unsuitable ones, resulting in 6.74 and 5.13 references for DailyDialog and Twitter dataset respectively.', 'The human annotation is costly, so we evaluate it on 1000 sampled test cases for each dataset.', 'As the BLEU score is not the simple mean of individual sentence scores, we compute the 95% significance interval by bootstrap resampling (Koehn, 2004; Riezler and Maxwell, 2005).', 'As can be seen, NEXUS network achieves best or near-best performances with only greedy decoders.', 'NEXUS-H generally outperforms NEXUS-F as the connection with future context is not explicitly addressed by the BLEU score metric.', 'MMI and VHRED bring minor improvements over the seq2seq model.', 'Even when evaluated on multiple references, RL still performs worse than most models.']
[None, None, ['BLEU-1', 'BLEU-2', 'BLEU-3'], ['BLEU-1', 'BLEU-2', 'BLEU-3'], None, None, ['DailyDialog', 'Twitter'], ['DailyDialog', 'Twitter'], None, ['NEXUS'], ['NEXUS-H', 'NEXUS-F'], ['Greedy', 'Beam', 'MMI', 'VHRED'], ['RL']]
1
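The description above for D18-1463table_2 computes smoothed BLEU and a 95% interval by bootstrap resampling over test cases. A minimal sketch of both steps with NLTK; the smoothing method (method1), the resample count and the toy data are assumptions for illustration, not the exact Lin and Och (2004) / Riezler and Maxwell (2005) procedures used in the paper:

    import random
    from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

    def smoothed_bleu(refs, hyps, weights=(1.0, 0.0, 0.0, 0.0)):
        # refs: one list of reference token lists per test case; hyps: one
        # hypothesis token list per test case. Default weights give BLEU-1.
        return corpus_bleu(refs, hyps, weights=weights,
                           smoothing_function=SmoothingFunction().method1)

    def bootstrap_interval(refs, hyps, n_resamples=1000, alpha=0.05, seed=0):
        # Resample whole test cases with replacement, recompute corpus BLEU each
        # time, and read off the empirical (alpha/2, 1 - alpha/2) quantiles.
        rng = random.Random(seed)
        n = len(hyps)
        scores = []
        for _ in range(n_resamples):
            sample = [rng.randrange(n) for _ in range(n)]
            scores.append(smoothed_bleu([refs[i] for i in sample],
                                        [hyps[i] for i in sample]))
        scores.sort()
        return (scores[int(alpha / 2 * n_resamples)],
                scores[int((1 - alpha / 2) * n_resamples) - 1])

    # Toy usage: two test cases, each with two references.
    refs = [[["how", "are", "you"], ["how", "do", "you", "do"]],
            [["see", "you", "later"], ["bye", "now"]]]
    hyps = [["how", "are", "you"], ["see", "you"]]
    print(smoothed_bleu(refs, hyps), bootstrap_interval(refs, hyps, n_resamples=200))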
D18-1464table_1
Results on readability assessment. The first system is the state-of-the-art coherence model on this dataset. The last one is a full readability system. “∗” indicates statistically significant difference with the bold result.
2
[['Model', 'Mesgar and Strube (2016)'], ['Model', 'CohEmb'], ['Model', 'CohLSTM'], ['Model', 'De Clercq and Hoste (2016)']]
1
[['Accuracy (%)']]
[['85.70'], ['92.17'], ['97.77'], ['96.88']]
column
['Accuracy (%)']
['CohEmb', 'CohLSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Model || Mesgar and Strube (2016)</td> <td>85.70</td> </tr> <tr> <td>Model || CohEmb</td> <td>92.17</td> </tr> <tr> <td>Model || CohLSTM</td> <td>97.77</td> </tr> <tr> <td>Model || De Clercq and Hoste (2016)</td> <td>96.88</td> </tr> </tbody></table>
Table 1
table_1
D18-1464
8
emnlp2018
Results. Table 1 summarizes the results of different systems for the readability assessment task. CohEmb significantly outperforms the graph-based coherence model proposed by Mesgar and Strube (2016) by a large margin (6%), showing that our model captures coherence better than their model. In our model, the CNN layer automatically learns which connections are important to be considered for coherence patterns, whereas this is performed in Mesgar and Strube (2016) by defining a threshold for eliminating connections. CohLSTM significantly outperforms both the coherence model proposed by Mesgar and Strube (2016) and the CohEmb model by 11% and 5%, respectively, and defines a new state-of-the-art on this dataset. CohLSTM, unlike Mesgar and Strube (2016)’s model and CohEmb, considers words of sentences in their sentence context. This supports our intuition that actual context information of words contributes to the perceived coherence of texts. CohLSTM, which captures exclusively local coherence, even outperforms the readability system proposed by De Clercq and Hoste (2016), which relies on a wide range of lexical, syntactic and semantic features.
[2, 1, 1, 2, 1, 2, 2, 1]
['Results.', 'Table 1 summarizes the results of different systems for the readability assessment task.', 'CohEmb significantly outperforms the graph-based coherence model proposed by Mesgar and Strube (2016) by a large margin (6%), showing that our model captures coherence better than their model.', 'In our model, the CNN layer automatically learns which connections are important to be considered for coherence patterns, whereas this is performed in Mesgar and Strube (2016) by defining a threshold for eliminating connections.', 'CohLSTM significantly outperforms both the coherence model proposed by Mesgar and Strube (2016) and the CohEmb model by 11% and 5%, respectively, and defines a new state-of-the-art on this dataset.', 'CohLSTM, unlike Mesgar and Strube (2016)’s model and CohEmb, considers words of sentences in their sentence context.', 'This supports our intuition that actual context information of words contributes to the perceived coherence of texts.', 'CohLSTM, which captures exclusively local coherence, even outperforms the readability system proposed by De Clercq and Hoste (2016), which relies on a wide range of lexical, syntactic and semantic features.']
[None, ['Mesgar and Strube (2016)', 'CohEmb', 'CohLSTM', 'De Clercq and Hoste (2016)'], ['CohEmb', 'Mesgar and Strube (2016)', 'CohLSTM', 'Accuracy (%)'], ['CohLSTM'], ['CohLSTM', 'Mesgar and Strube (2016)', 'CohEmb', 'Accuracy (%)'], ['CohLSTM', 'Mesgar and Strube (2016)', 'CohEmb'], None, ['CohLSTM', 'De Clercq and Hoste (2016)']]
1
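Each record in this dump stores its table both as flattened header/content lists and as HTML in table_html_clean. A minimal sketch of recovering a DataFrame from the HTML field, assuming pandas with an HTML parser (lxml or html5lib) is installed; only two rows of the D18-1464table_1 HTML above are embedded here for brevity:

    import io
    import pandas as pd

    # Abbreviated copy of the table_html_clean field above (two rows kept for
    # brevity); in practice, pass the full field value.
    html = """<table border='1' class='dataframe'>
    <thead><tr><th></th><th>Accuracy (%)</th></tr></thead>
    <tbody>
    <tr><td>Model || Mesgar and Strube (2016)</td><td>85.70</td></tr>
    <tr><td>Model || CohLSTM</td><td>97.77</td></tr>
    </tbody></table>"""

    # read_html returns one DataFrame per <table> element in the input.
    df = pd.read_html(io.StringIO(html))[0]
    # Row headers are flattened with ' || '; keep only the innermost level.
    df["model"] = df.iloc[:, 0].map(lambda s: s.split(" || ")[-1])
    print(df[["model", "Accuracy (%)"]])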
D18-1465table_4
The performance of correctly predicting the first and the last sentences on arXiv abstract and SIND caption datasets.
2
[['Models', 'Random'], ['Models', 'Pairwise Ranking Model'], ['Models', 'CNN+PtrNet'], ['Models', 'LSTM+PtrNet'], ['Models', 'ATTOrderNet (ATT)'], ['Models', 'ATTOrderNet (CNN)'], ['Models', 'ATTOrderNet']]
2
[['arXiv abstract', 'head'], ['arXiv abstract', 'tail'], ['SIND caption', 'head'], ['SIND caption', 'tail']]
[['23.06', '23.16', '22.78', '22.56'], ['84.85', '62.37', '-', '-'], ['89.43', '65.36', '73.53', '53.26'], ['90.47', '66.49', '74.66', '53.30'], ['89.68', '65.75', '75.88', '54.30'], ['90.86', '67.85', '75.95', '54.37'], ['91.00', '68.08', '76.00', '54.42']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['ATTOrderNet (ATT)', 'ATTOrderNet (CNN)', 'ATTOrderNet']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>arXiv abstract || head</th> <th>arXiv abstract || tail</th> <th>SIND caption || head</th> <th>SIND caption || tail</th> </tr> </thead> <tbody> <tr> <td>Models || Random</td> <td>23.06</td> <td>23.16</td> <td>22.78</td> <td>22.56</td> </tr> <tr> <td>Models || Pairwise Ranking Model</td> <td>84.85</td> <td>62.37</td> <td>-</td> <td>-</td> </tr> <tr> <td>Models || CNN+PtrNet</td> <td>89.43</td> <td>65.36</td> <td>73.53</td> <td>53.26</td> </tr> <tr> <td>Models || LSTM+PtrNet</td> <td>90.47</td> <td>66.49</td> <td>74.66</td> <td>53.30</td> </tr> <tr> <td>Models || ATTOrderNet (ATT)</td> <td>89.68</td> <td>65.75</td> <td>75.88</td> <td>54.30</td> </tr> <tr> <td>Models || ATTOrderNet (CNN)</td> <td>90.86</td> <td>67.85</td> <td>75.95</td> <td>54.37</td> </tr> <tr> <td>Models || ATTOrderNet</td> <td>91.00</td> <td>68.08</td> <td>76.00</td> <td>54.42</td> </tr> </tbody></table>
Table 4
table_4
D18-1465
7
emnlp2018
Since the first and the last sentences of the text are more special to discern (Chen et al., 2016; Gong et al., 2016), we also evaluate the ratio of correctly predicting the first and the last sentences. Table 4 summarizes our performances on arXiv abstract and SIND caption. As we see, all models perform fairly well in predicting the first sentence, and the prediction accuracy declines for the last one. It is observed that ATTOrderNet still achieves a boost in predicting two positions compared to the previous state-of-the-art system on both datasets.
[2, 1, 1, 1]
['Since the first and the last sentences of the text are more special to discern (Chen et al., 2016; Gong et al., 2016), we also evaluate the ratio of correctly predicting the first and the last sentences.', 'Table 4 summarizes our performances on arXiv abstract and SIND caption.', 'As we see, all models perform fairly well in predicting the first sentence, and the prediction accuracy declines for the last one.', 'It is observed that ATTOrderNet still achieves a boost in predicting two positions compared to the previous state-of-the-art system on both datasets.']
[None, ['arXiv abstract', 'SIND caption'], ['Random', 'Pairwise Ranking Model', 'CNN+PtrNet', 'LSTM+PtrNet', 'ATTOrderNet (ATT)', 'ATTOrderNet (CNN)', 'ATTOrderNet', 'head', 'tail'], ['ATTOrderNet', 'arXiv abstract', 'SIND caption', 'head', 'tail']]
1
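The description above for D18-1465table_4 evaluates how often the first (head) and last (tail) sentences are placed correctly. A minimal sketch of that computation over predicted versus gold orderings, as a generic illustration rather than the authors' evaluation script:

    def head_tail_accuracy(predicted_orders, gold_orders):
        # Each order is a list of sentence indices for one document. "head" is the
        # fraction of documents whose predicted first sentence matches the gold
        # first sentence; "tail" is the same for the last sentence.
        n = len(gold_orders)
        head = sum(p[0] == g[0] for p, g in zip(predicted_orders, gold_orders)) / n
        tail = sum(p[-1] == g[-1] for p, g in zip(predicted_orders, gold_orders)) / n
        return head, tail

    # Toy usage: two documents with 3 and 4 sentences.
    pred = [[0, 2, 1], [1, 0, 2, 3]]
    gold = [[0, 1, 2], [0, 1, 2, 3]]
    print(head_tail_accuracy(pred, gold))  # (0.5, 0.5)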
D18-1465table_5
Experimental results of Pairwise Accuracy for different approaches on two datasets in the Order Discrimination task.
2
[['Models', 'Random'], ['Models', 'Graph'], ['Models', 'HMM+Entity'], ['Models', 'HMM'], ['Models', 'Entity Grid'], ['Models', 'Recurrent'], ['Models', 'Recursive'], ['Models', 'Discriminative Model'], ['Models', 'Varient-LSTM+PtrNet'], ['Models', 'CNN+PtrNet'], ['Models', 'LSTM+PtrNet'], ['Models', 'ATTOrderNet (ATT)'], ['Models', 'ATTOrderNet (CNN)'], ['Models', 'ATTOrderNet']]
1
[['Accident'], ['Earthquake']]
[['50.0', '50.0'], ['84.6', '63.5'], ['84.2', '91.1'], ['82.2', '93.8'], ['90.4', '87.2'], ['84.0', '95.1'], ['86.4', '97.6'], ['93.0', '99.2'], ['94.4', '99.7'], ['93.5', '99.4'], ['93.7', '99.5'], ['95.4', '99.6'], ['95.8', '99.7'], ['96.2', '99.8']]
column
['accuracy', 'accuracy']
['ATTOrderNet (ATT)', 'ATTOrderNet (CNN)', 'ATTOrderNet']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accident</th> <th>Earthquake</th> </tr> </thead> <tbody> <tr> <td>Models || Random</td> <td>50.0</td> <td>50.0</td> </tr> <tr> <td>Models || Graph</td> <td>84.6</td> <td>63.5</td> </tr> <tr> <td>Models || HMM+Entity</td> <td>84.2</td> <td>91.1</td> </tr> <tr> <td>Models || HMM</td> <td>82.2</td> <td>93.8</td> </tr> <tr> <td>Models || Entity Grid</td> <td>90.4</td> <td>87.2</td> </tr> <tr> <td>Models || Recurrent</td> <td>84.0</td> <td>95.1</td> </tr> <tr> <td>Models || Recursive</td> <td>86.4</td> <td>97.6</td> </tr> <tr> <td>Models || Discriminative Model</td> <td>93.0</td> <td>99.2</td> </tr> <tr> <td>Models || Varient-LSTM+PtrNet</td> <td>94.4</td> <td>99.7</td> </tr> <tr> <td>Models || CNN+PtrNet</td> <td>93.5</td> <td>99.4</td> </tr> <tr> <td>Models || LSTM+PtrNet</td> <td>93.7</td> <td>99.5</td> </tr> <tr> <td>Models || ATTOrderNet (ATT)</td> <td>95.4</td> <td>99.6</td> </tr> <tr> <td>Models || ATTOrderNet (CNN)</td> <td>95.8</td> <td>99.7</td> </tr> <tr> <td>Models || ATTOrderNet</td> <td>96.2</td> <td>99.8</td> </tr> </tbody></table>
Table 5
table_5
D18-1465
9
emnlp2018
3.4.2 Results. Table 5 reports the results of ATTOrderNet and currently competing architectures in this evaluation task. ATTOrderNet also achieves the state-of-the-art performance, showing a remarkable advancement of about 1.8% gain on Accident dataset and further improving the pairwise accuracy to 99.8 on Earthquake dataset. LSTM+PtrNet and CNN+PtrNet (Gong et al., 2016) fall short of Varient-LSTM+PtrNet (Logeswaran et al., 2018) in performance. This could also be blamed on their paragraph encoder. Documents in both datasets are much longer than those in others, which brings more trouble for LSTMs in paragraph encoder to build logical representations. Compared to the result in the sentence ordering task, Entity Grid (Barzilay and Lapata, 2008) achieves a good performance in this task and even outperforms Recurrent neural networks and Recursive neural networks (Li and Hovy, 2014) on Accident dataset. However, Entity Grid requires hand-engineered features and heavily relies on linguistic knowledge which restrain the model from being adapted to other tasks.
[2, 1, 1, 1, 2, 2, 1, 2]
['3.4.2 Results.', 'Table 5 reports the results of ATTOrderNet and currently competing architectures in this evaluation task.', 'ATTOrderNet also achieves the state-of-the-art performance, showing a remarkable advancement of about 1.8% gain on Accident dataset and further improving the pairwise accuracy to 99.8 on Earthquake dataset.', 'LSTM+PtrNet and CNN+PtrNet (Gong et al., 2016) fall short of Varient-LSTM+PtrNet (Logeswaran et al., 2018) in performance.', 'This could also be blamed on their paragraph encoder.', 'Documents in both datasets are much longer than those in others, which brings more trouble for LSTMs in paragraph encoder to build logical representations.', 'Compared to the result in the sentence ordering task, Entity Grid (Barzilay and Lapata, 2008) achieves a good performance in this task and even outperforms Recurrent neural networks and Recursive neural networks (Li and Hovy, 2014) on Accident dataset.', 'However, Entity Grid requires hand-engineered features and heavily relies on linguistic knowledge which restrain the model from being adapted to other tasks.']
[None, ['ATTOrderNet', 'Random', 'Graph', 'HMM+Entity', 'HMM', 'Entity Grid', 'Recurrent', 'Recursive', 'Discriminative Model', 'Varient-LSTM+PtrNet', 'CNN+PtrNet', 'LSTM+PtrNet'], ['ATTOrderNet', 'Varient-LSTM+PtrNet', 'Accident', 'Earthquake'], ['Varient-LSTM+PtrNet', 'CNN+PtrNet', 'LSTM+PtrNet'], None, ['Accident', 'Earthquake'], ['Entity Grid', 'Recurrent', 'Recursive', 'Accident'], ['Entity Grid']]
1
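For D18-1465table_5, the order-discrimination task scores an original document against a permuted copy and counts the pair as correct when the original is scored higher. A minimal sketch of the pairwise accuracy computation, with a toy stand-in scorer instead of a trained coherence model:

    def pairwise_accuracy(score, originals, permutations):
        # score: callable returning a coherence score for one document (here a
        # list of sentence indices). A pair counts as correct when the original
        # document scores strictly higher than its permutation.
        correct = sum(score(o) > score(p) for o, p in zip(originals, permutations))
        return correct / len(originals)

    def toy_score(order):
        # Stand-in scorer, not a real model: counts adjacent pairs in ascending
        # index order, so the identity ordering gets the highest score.
        return sum(a < b for a, b in zip(order, order[1:]))

    originals = [[0, 1, 2, 3], [0, 1, 2]]
    permutations = [[2, 0, 3, 1], [1, 0, 2]]
    print(pairwise_accuracy(toy_score, originals, permutations))  # 1.0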
D18-1483table_5
Crosslingual clustering results when considering two different approaches to compute distances across crosslingual clusters on the test set for Spanish, German and English. See text for details.
2
[['crosslingual model', 'τsearch (global)'], ['crosslingual model', 'τsearch (pivot)']]
1
[['F1'], ['P'], ['R']]
[['72.7', '89.8', '61.0'], ['84.0', '83.0', '85.0']]
column
['F1', 'P', 'R']
['τsearch (global)', 'τsearch (pivot)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> <th>P</th> <th>R</th> </tr> </thead> <tbody> <tr> <td>crosslingual model || τsearch (global)</td> <td>72.7</td> <td>89.8</td> <td>61.0</td> </tr> <tr> <td>crosslingual model || τsearch (pivot)</td> <td>84.0</td> <td>83.0</td> <td>85.0</td> </tr> </tbody></table>
Table 5
table_5
D18-1483
8
emnlp2018
We test two different scenarios for optimizing the similarity threshold τ for the crosslingual case. Table 5 shows the results for these experiments. First, we consider the simpler case of adjusting a global τ parameter for the crosslingual distances, as also described for the monolingual case. As shown, this method works poorly, since the τ grid-search could not find a reasonable τ which worked well for every possible language pair. Subsequently, we also consider the case of using English as a pivot language (see §3), where distances for every other language are only compared to English, and crosslingual clustering decisions are made only based on this distance. This yielded our best crosslingual score of F1=84.0, confirming that crosslingual similarity is of higher quality between each language and English, for the embeddings we used. This score represents only a small degradation in respect to the monolingual results, since clustering across different languages is a harder problem.
[2, 1, 2, 1, 2, 1, 2]
['We test two different scenarios for optimizing the similarity threshold τ for the crosslingual case.', 'Table 5 shows the results for these experiments.', 'First, we consider the simpler case of adjusting a global τ parameter for the crosslingual distances, as also described for the monolingual case.', 'As shown, this method works poorly, since the τ grid-search could not find a reasonable τ which worked well for every possible language pair.', 'Subsequently, we also consider the case of using English as a pivot language (see §3), where distances for every other language are only compared to English, and crosslingual clustering decisions are made only based on this distance.', 'This yielded our best crosslingual score of F1=84.0, confirming that crosslingual similarity is of higher quality between each language and English, for the embeddings we used.', 'This score represents only a small degradation in respect to the monolingual results, since clustering across different languages is a harder problem.']
[None, None, ['τsearch (global)'], ['τsearch (global)'], ['τsearch (pivot)'], ['τsearch (pivot)', 'F1'], ['τsearch (pivot)', 'F1']]
1
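The description above for D18-1483table_5 tunes a similarity threshold τ by grid search so that document pairs above τ fall into the same crosslingual cluster. A minimal sketch of such a search over pre-computed pairwise similarities and gold same-cluster labels; the threshold grid and the pairwise F1 formulation are assumptions for illustration, not the paper's clustering pipeline:

    import numpy as np

    def pairwise_f1(similarities, gold_same_cluster, tau):
        # Predict "same cluster" whenever similarity >= tau, then compute
        # pairwise precision, recall and F1 against the gold labels.
        pred = similarities >= tau
        tp = np.sum(pred & gold_same_cluster)
        precision = tp / max(np.sum(pred), 1)
        recall = tp / max(np.sum(gold_same_cluster), 1)
        return 2 * precision * recall / max(precision + recall, 1e-12)

    def grid_search_tau(similarities, gold_same_cluster):
        taus = np.linspace(0.0, 1.0, 101)
        scores = [pairwise_f1(similarities, gold_same_cluster, t) for t in taus]
        best = int(np.argmax(scores))
        return taus[best], scores[best]

    # Toy usage with made-up similarities and gold labels for five pairs.
    sims = np.array([0.91, 0.35, 0.76, 0.12, 0.64])
    gold = np.array([True, False, True, False, False])
    print(grid_search_tau(sims, gold))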
D18-1484table_4
Results of Hot Update, Cold Update and Zero Update in different cases
2
[['Model', 'Before Update'], ['Model', 'Cold Update'], ['Model', 'Hot Update'], ['Model', 'Zero Update']]
2
[['Case 1', 'SST-1'], ['Case 1', 'SST-2'], ['Case 1', 'IMDB'], ['Case 2', 'B'], ['Case 2', 'D'], ['Case 2', 'E'], ['Case 2', 'K'], ['Case 3', 'RN'], ['Case 3', 'QC'], ['Case 3', 'IMDB']]
[['48.6', '87.6', '-', '83.7', '84.5', '85.9', '-', '84.8', '93.4', '-'], ['49.8', '88.5', '91.2', '84.4', '85.2', '87.2', '86.9', '85.5', '93.2', '91.0'], ['49.6', '88.1', '91.4', '84.2', '84.9', '87.0', '87.1', '85.2', '92.9', '91.1'], ['-', '-', '90.9', '-', '-', '-', '86.7', '-', '-', '74.2']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['Before Update', 'Cold Update', 'Hot Update', 'Zero Update']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Case 1 || SST-1</th> <th>Case 1 || SST-2</th> <th>Case 1 || IMDB</th> <th>Case 2 || B</th> <th>Case 2 || D</th> <th>Case 2 || E</th> <th>Case 2 || K</th> <th>Case 3 || RN</th> <th>Case 3 || QC</th> <th>Case 3 || IMDB</th> </tr> </thead> <tbody> <tr> <td>Model || Before Update</td> <td>48.6</td> <td>87.6</td> <td>-</td> <td>83.7</td> <td>84.5</td> <td>85.9</td> <td>-</td> <td>84.8</td> <td>93.4</td> <td>-</td> </tr> <tr> <td>Model || Cold Update</td> <td>49.8</td> <td>88.5</td> <td>91.2</td> <td>84.4</td> <td>85.2</td> <td>87.2</td> <td>86.9</td> <td>85.5</td> <td>93.2</td> <td>91.0</td> </tr> <tr> <td>Model || Hot Update</td> <td>49.6</td> <td>88.1</td> <td>91.4</td> <td>84.2</td> <td>84.9</td> <td>87.0</td> <td>87.1</td> <td>85.2</td> <td>92.9</td> <td>91.1</td> </tr> <tr> <td>Model || Zero Update</td> <td>-</td> <td>-</td> <td>90.9</td> <td>-</td> <td>-</td> <td>-</td> <td>86.7</td> <td>-</td> <td>-</td> <td>74.2</td> </tr> </tbody></table>
Table 4
table_4
D18-1484
7
emnlp2018
In Zero Update, we ignore the training set of C and just evaluate our model on the testing set. As Table 4 shows, Before Update denotes the model trained on the old tasks before the new tasks are involved, so only evaluations on the old tasks are conducted. Cold Update re-trains the model of Before Update with both the old tasks and the new tasks, thus achieving similar performances with the last line in Table 3. Different from Cold Update, Hot Update resumes training only on the new tasks, requires much less training time, while still obtaining competitive results for all tasks. The new tasks like IMDB and Kitchen benefit more from Hot Update than the old tasks, as the parameters are further tuned according to annotations from these new tasks. Zero Update provides inspiring possibilities for completely unannotated tasks. There are no more annotations for additional training from the new tasks, so we just apply the model of Before Update for evaluations on the testing sets of the new tasks. Zero Update achieves competitive performances in Case 1 (90.9 for IMDB) and Case 2 (86.7 for Kitchen), as tasks from these two cases all belong to sentiment datasets of different cardinalities or domains that contain rich semantic correlations with each other. However, the result for IMDB in Case 3 is only 74.2, as sentiment shares less relevance with topic and question type, thus leading to poor transferring performances.
[2, 1, 1, 1, 1, 2, 2, 1, 1]
['In Zero Update, we ignore the training set of C and just evaluate our model on the testing set.', 'As Table 4 shows, Before Update denotes the model trained on the old tasks before the new tasks are involved, so only evaluations on the old tasks are conducted.', 'Cold Update re-trains the model of Before Update with both the old tasks and the new tasks, thus achieving similar performances with the last line in Table 3.', 'Different from Cold Update, Hot Update resumes training only on the new tasks, requires much less training time, while still obtaining competitive results for all tasks.', 'The new tasks like IMDB and Kitchen benefit more from Hot Update than the old tasks, as the parameters are further tuned according to annotations from these new tasks.', 'Zero Update provides inspiring possibilities for completely unannotated tasks.', 'There are no more annotations for additional training from the new tasks, so we just apply the model of Before Update for evaluations on the testing sets of the new tasks.', 'Zero Update achieves competitive performances in Case 1 (90.9 for IMDB) and Case 2 (86.7 for Kitchen), as tasks from these two cases all belong to sentiment datasets of different cardinalities or domains that contain rich semantic correlations with each other.', 'However, the result for IMDB in Case 3 is only 74.2, as sentiment shares less relevance with topic and question type, thus leading to poor transferring performances.']
[None, ['Before Update'], ['Cold Update', 'Before Update'], ['Cold Update', 'Hot Update'], ['Hot Update', 'IMDB', 'K'], ['Zero Update'], ['Before Update'], ['Zero Update', 'Case 1', 'IMDB', 'Case 2', 'K'], ['Zero Update', 'Case 3', 'IMDB']]
1
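The description above for D18-1484table_4 defines the three update regimes in words. A schematic of the call pattern, with the trainer and task containers left abstract (hypothetical helpers for illustration, not the paper's code):

    def cold_update(model, old_tasks, new_tasks, train):
        # Re-train the existing model on the union of old and new tasks.
        return train(model, old_tasks + new_tasks)

    def hot_update(model, new_tasks, train):
        # Resume training on the new tasks only (much cheaper than cold update).
        return train(model, new_tasks)

    def zero_update(model):
        # Use the existing model as-is; the new tasks get no additional training.
        return model

    # Toy usage with a no-op trainer, just to show the call pattern.
    train = lambda model, tasks: model
    model = "model-trained-on-old-tasks"
    print(cold_update(model, ["SST-1", "SST-2"], ["IMDB"], train),
          hot_update(model, ["IMDB"], train),
          zero_update(model))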
D18-1484table_5
Comparisons of MTLE against state-of-the-art models
2
[['Model', 'NBOW'], ['Model', 'PV'], ['Model', 'CNN'], ['Model', 'MT-CNN'], ['Model', 'MT-DNN'], ['Model', 'MT-RNN'], ['Model', 'DSM'], ['Model', 'GRNN'], ['Model', 'Tree-LSTM'], ['Model', 'MTLE']]
1
[['SST-1'], ['SST-2'], ['IMDB'], ['Books'], ['DVDs'], ['Electronics'], ['Kitchen'], ['QC']]
[['42.4', '80.5', '83.6', '-', '-', '-', '-', '88.2'], ['44.6', '82.7', '91.7', '-', '-', '-', '-', '91.8'], ['48.0', '88.1', '-', '-', '-', '-', '-', '93.6'], ['-', '-', '-', '80.2', '81.0', '83.4', '83.0', '-'], ['-', '-', '-', '79.7', '80.5', '82.5', '82.8', '-'], ['49.6', '87.9', '91.3', '-', '-', '-', '-', '-'], ['49.5', '87.8', '91.2', '82.8', '83.0', '85.5', '84.0', '-'], ['47.5', '85.5', '-', '-', '-', '-', '-', '93.8'], ['50.6', '86.9', '-', '-', '-', '-', '-', '-'], ['49.8', '88.4', '91.3', '84.5', '85.2', '87.3', '86.9', '93.2']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['MTLE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SST-1</th> <th>SST-2</th> <th>IMDB</th> <th>Books</th> <th>DVDs</th> <th>Electronics</th> <th>Kitchen</th> <th>QC</th> </tr> </thead> <tbody> <tr> <td>Model || NBOW</td> <td>42.4</td> <td>80.5</td> <td>83.6</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>88.2</td> </tr> <tr> <td>Model || PV</td> <td>44.6</td> <td>82.7</td> <td>91.7</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>91.8</td> </tr> <tr> <td>Model || CNN</td> <td>48.0</td> <td>88.1</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>93.6</td> </tr> <tr> <td>Model || MT-CNN</td> <td>-</td> <td>-</td> <td>-</td> <td>80.2</td> <td>81.0</td> <td>83.4</td> <td>83.0</td> <td>-</td> </tr> <tr> <td>Model || MT-DNN</td> <td>-</td> <td>-</td> <td>-</td> <td>79.7</td> <td>80.5</td> <td>82.5</td> <td>82.8</td> <td>-</td> </tr> <tr> <td>Model || MT-RNN</td> <td>49.6</td> <td>87.9</td> <td>91.3</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || DSM</td> <td>49.5</td> <td>87.8</td> <td>91.2</td> <td>82.8</td> <td>83.0</td> <td>85.5</td> <td>84.0</td> <td>-</td> </tr> <tr> <td>Model || GRNN</td> <td>47.5</td> <td>85.5</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>93.8</td> </tr> <tr> <td>Model || Tree-LSTM</td> <td>50.6</td> <td>86.9</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || MTLE</td> <td>49.8</td> <td>88.4</td> <td>91.3</td> <td>84.5</td> <td>85.2</td> <td>87.3</td> <td>86.9</td> <td>93.2</td> </tr> </tbody></table>
Table 5
table_5
D18-1484
8
emnlp2018
As Table 5 shows, MTLE achieves competitive or better performances on most tasks except for QC, as it contains less correlation with other tasks. Tree-LSTM outperforms our model on SST-1 (50.6 against 49.8), but it requires an external parser to get the sentence topological structure and utilizes treebank annotations. PV slightly surpasses MTLE on IMDB (91.7 against 91.3), as sentences from IMDB are much longer than SST and MDSD, which require stronger abilities of long-term dependency learning.
[1, 1, 1]
['As Table 5 shows, MTLE achieves competitive or better performances on most tasks except for QC, as it contains less correlation with other tasks.', 'Tree-LSTM outperforms our model on SST-1 (50.6 against 49.8), but it requires an external parser to get the sentence topological structure and utilizes treebank annotations.', 'PV slightly surpasses MTLE on IMDB (91.7 against 91.3), as sentences from IMDB are much longer than SST and MDSD, which require stronger abilities of long-term dependency learning.']
[['MTLE', 'SST-1', 'SST-2', 'IMDB', 'Books', 'DVDs', 'Electronics', 'Kitchen'], ['Tree-LSTM', 'MTLE', 'SST-1'], ['PV', 'MTLE', 'IMDB']]
1
D18-1485table_5
Performance of the hierarchical model and our model on the RCV1-V2 test set. Hier refers to hierarchical model, and the subsequent number refers to the length of sentence (word) for sentence-level representations (p < 0.05).
2
[['Models', 'Hier-5'], ['Models', 'Hier-10'], ['Models', 'Hier-15'], ['Models', 'Hier-20'], ['Models', 'Our model']]
1
[['HL(-)'], ['P(+)'], ['R(+)'], ['F1(+)']]
[['0.0075', '0.887', '0.869', '0.878'], ['0.0077', '0.883', '0.873', '0.878'], ['0.0076', '0.879', '0.879', '0.879'], ['0.0076', '0.876', '0.881', '0.878'], ['0.0072', '0.891', '0.873', '0.882']]
column
['HL(-)', 'P(+)', 'R(+)', 'F1(+)']
['Our model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>HL(-)</th> <th>P(+)</th> <th>R(+)</th> <th>F1(+)</th> </tr> </thead> <tbody> <tr> <td>Models || Hier-5</td> <td>0.0075</td> <td>0.887</td> <td>0.869</td> <td>0.878</td> </tr> <tr> <td>Models || Hier-10</td> <td>0.0077</td> <td>0.883</td> <td>0.873</td> <td>0.878</td> </tr> <tr> <td>Models || Hier-15</td> <td>0.0076</td> <td>0.879</td> <td>0.879</td> <td>0.879</td> </tr> <tr> <td>Models || Hier-20</td> <td>0.0076</td> <td>0.876</td> <td>0.881</td> <td>0.878</td> </tr> <tr> <td>Models || Our model</td> <td>0.0072</td> <td>0.891</td> <td>0.873</td> <td>0.882</td> </tr> </tbody></table>
Table 5
table_5
D18-1485
8
emnlp2018
We present the results of the evaluation on Table 5, where it can be found that our model with fewer parameters still outperforms the hierarchical model with the deterministic setting of sentence or phrase. Moreover, in order to alleviate the influence of the deterministic sentence boundary, we compare the performance of different hierarchical models with different boundaries, which sets the boundaries at the end of every 5, 10, 15 and 20 words respectively. The results in Table 5 show that the hierarchical models achieve similar performances, which are all higher than the performances of the baselines. This shows that high-level representations can contribute to the performance of the Seq2Seq model on the multi-label text classification task. Furthermore, as these performances are no better than that of our proposed model, it can reflect that the learnable high-level representations can contribute more than deterministic sentence-level representations, as it can be more flexible to represent information of diverse levels, instead of fixed phrase or sentence level.
[1, 2, 1, 2, 2]
['We present the results of the evaluation on Table 5, where it can be found that our model with fewer parameters still outperforms the hierarchical model with the deterministic setting of sentence or phrase.', 'Moreover, in order to alleviate the influence of the deterministic sentence boundary, we compare the performance of different hierarchical models with different boundaries, which sets the boundaries at the end of every 5, 10, 15 and 20 words respectively.', 'The results in Table 5 show that the hierarchical models achieve similar performances, which are all higher than the performances of the baselines.', 'This shows that high-level representations can contribute to the performance of the Seq2Seq model on the multi-label text classification task.', 'Furthermore, as these performances are no better than that of our proposed model, it can reflect that the learnable high-level representations can contribute more than deterministic sentence-level representations, as it can be more flexible to represent information of diverse levels, instead of fixed phrase or sentence level.']
[['Our model'], ['Hier-5', 'Hier-10', 'Hier-15', 'Hier-20'], ['Hier-5', 'Hier-10', 'Hier-15', 'Hier-20'], None, None]
1
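The D18-1485table_5 record reports HL(-) together with P(+), R(+) and F1(+) for multi-label classification. Assuming HL denotes Hamming loss and the other columns are micro-averaged over all label slots (the excerpt above does not spell this out), a minimal sketch of the computation:

    import numpy as np

    def multilabel_metrics(y_true, y_pred):
        # y_true, y_pred: binary indicator matrices of shape (n_samples, n_labels).
        # Hamming loss: fraction of label slots predicted incorrectly (lower is better).
        hamming = float(np.mean(y_true != y_pred))
        # Micro-averaged precision / recall / F1 over all label slots.
        tp = np.sum((y_true == 1) & (y_pred == 1))
        fp = np.sum((y_true == 0) & (y_pred == 1))
        fn = np.sum((y_true == 1) & (y_pred == 0))
        precision = tp / max(tp + fp, 1)
        recall = tp / max(tp + fn, 1)
        f1 = 2 * precision * recall / max(precision + recall, 1e-12)
        return hamming, precision, recall, f1

    # Toy usage: 3 documents, 4 candidate labels.
    y_true = np.array([[1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1]])
    y_pred = np.array([[1, 0, 0, 0], [0, 1, 0, 1], [1, 1, 0, 1]])
    print(multilabel_metrics(y_true, y_pred))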