Datasets:

Each record carries the following fields (observed value ranges from the dataset viewer in parentheses):

bibtex_url: string (41 to 50 chars)
bibtext: string (693 to 2.88k chars)
abstract: string (0 to 2k chars)
authors: sequence (1 to 45 items)
title: string (21 to 199 chars)
id: string (7 to 16 chars)
type: string (2 classes)
arxiv_id: string (0 to 10 chars)
GitHub: sequence (1 item)
paper_page: string (0 to 40 chars)
n_linked_authors: int64 (-1 to 28)
upvotes: int64 (-1 to 255)
num_comments: int64 (-1 to 23)
n_authors: int64 (-1 to 35)
proceedings: string (38 to 47 chars)
Models: sequence (0 to 57 items)
Datasets: sequence (0 to 19 items)
Spaces: sequence (0 to 100 items)
paper_page_exists_pre_conf: int64 (0 or 1)
https://aclanthology.org/2024.privatenlp-1.6.bib
@inproceedings{platnick-etal-2024-preset, title = "Preset-Voice Matching for Privacy Regulated Speech-to-Speech Translation Systems", author = "Platnick, Daniel and Abdelnour, Bishoy and Earl, Eamon and Kumar, Rahul and Rezaei, Zahra and Tsangaris, Thomas and Lagum, Faraj", editor = "Habernal, Ivan and Ghanavati, Sepideh and Ravichander, Abhilasha and Jain, Vijayanta and Thaine, Patricia and Igamberdiev, Timour and Mireshghallah, Niloofar and Feyisetan, Oluwaseyi", booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.privatenlp-1.6", pages = "52--62", abstract = "In recent years, there has been increased demand for speech-to-speech translation (S2ST) systems in industry settings. Although successfully commercialized, cloning-based S2ST systems expose their distributors to liabilities when misused by individuals and can infringe on personality rights when exploited by media organizations. This work proposes a regulated S2ST framework called Preset-Voice Matching (PVM). PVM removes cross-lingual voice cloning in S2ST by first matching the input voice to a similar prior consenting speaker voice in the target-language. With this separation, PVM avoids cloning the input speaker, ensuring PVM systems comply with regulations and reduce risk of misuse. Our results demonstrate PVM can significantly improve S2ST system run-time in multi-speaker settings and the naturalness of S2ST synthesized speech. To our knowledge, PVM is the first explicitly regulated S2ST framework leveraging similarly-matched preset-voices for dynamic S2ST tasks.", }
[ "Platnick, Daniel", "Abdelnour, Bishoy", "Earl, Eamon", "Kumar, Rahul", "Rezaei, Zahra", "Tsangaris, Thomas", "Lagum, Faraj" ]
Preset-Voice Matching for Privacy Regulated Speech-to-Speech Translation Systems
privatenlp-1.6
Poster
2407.13153
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.privatenlp-1.6/
[]
[]
[]
0
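The core PVM step described in the abstract, matching an input speaker to the closest consenting preset voice, can be sketched with cosine similarity over speaker embeddings. This is a minimal illustration: the embedding values, preset names, and the choice of cosine similarity are assumptions, since the paper's actual speaker encoder and matching criterion are not specified in the abstract.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def match_preset_voice(input_emb, preset_voices):
    """Return the consenting preset voice most similar to the input speaker."""
    return max(preset_voices, key=lambda name: cosine(input_emb, preset_voices[name]))

# Hypothetical speaker embeddings; a real system would use a speaker encoder.
presets = {
    "preset_a": [0.9, 0.1, 0.0],
    "preset_b": [0.1, 0.8, 0.3],
}
best = match_preset_voice([0.85, 0.2, 0.05], presets)
```

Synthesis then proceeds with the matched preset voice in the target language, so the input speaker's voice is never cloned.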
https://aclanthology.org/2024.privatenlp-1.7.bib
@inproceedings{nakka-etal-2024-pii, title = "{PII}-Compass: Guiding {LLM} training data extraction prompts towards the target {PII} via grounding", author = "Nakka, Krishna and Frikha, Ahmed and Mendes, Ricardo and Jiang, Xue and Zhou, Xuebing", editor = "Habernal, Ivan and Ghanavati, Sepideh and Ravichander, Abhilasha and Jain, Vijayanta and Thaine, Patricia and Igamberdiev, Timour and Mireshghallah, Niloofar and Feyisetan, Oluwaseyi", booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.privatenlp-1.7", pages = "63--73", abstract = "The latest and most impactful advances in large models stem from their increased size. Unfortunately, this translates into an improved memorization capacity, raising data privacy concerns. Specifically, it has been shown that models can output personal identifiable information (PII) contained in their training data. However, reported PII extraction performance varies widely, and there is no consensus on the optimal methodology to evaluate this risk, resulting in underestimating realistic adversaries. In this work, we empirically demonstrate that it is possible to improve the extractability of PII by over ten-fold by grounding the prefix of the manually constructed extraction prompt with in-domain data. This approach achieves phone number extraction rates of 0.92{\%}, 3.9{\%}, and 6.86{\%} with 1, 128, and 2308 queries, respectively, i.e., the phone number of 1 person in 15 is extractable.", }
[ "Nakka, Krishna", "Frikha, Ahmed", "Mendes, Ricardo", "Jiang, Xue", "Zhou, Xuebing" ]
PII-Compass: Guiding LLM training data extraction prompts towards the target PII via grounding
privatenlp-1.7
Poster
2407.02943
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.privatenlp-1.7/
[]
[]
[]
0
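The grounding idea, prepending real in-domain text to a handcrafted extraction template, can be sketched as below. The template wording and example data are hypothetical; only the structure (in-domain prefix followed by a manual extraction prompt) follows the abstract.

```python
def build_extraction_prompt(target_name, in_domain_prefix=""):
    """Handcrafted PII-extraction template, optionally grounded by
    prepending in-domain text (the paper's key ingredient).
    The template wording here is hypothetical."""
    template = f"The phone number of {target_name} is"
    if in_domain_prefix:
        return in_domain_prefix.rstrip() + "\n" + template
    return template

ungrounded = build_extraction_prompt("Jane Doe")
grounded = build_extraction_prompt(
    "Jane Doe",
    in_domain_prefix="HR contact list:\nThe phone number of John Smith is 555-0100.",
)
```

The paper reports that grounding of this kind raises phone-number extraction rates by more than tenfold relative to the bare template.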
https://aclanthology.org/2024.privatenlp-1.8.bib
@inproceedings{pissarra-etal-2024-unlocking, title = "Unlocking the Potential of Large Language Models for Clinical Text Anonymization: A Comparative Study", author = "Pissarra, David and Curioso, Isabel and Alveira, Jo{\~a}o and Pereira, Duarte and Ribeiro, Bruno and Souper, Tom{\'a}s and Gomes, Vasco and Carreiro, Andr{\'e} and Rolla, Vitor", editor = "Habernal, Ivan and Ghanavati, Sepideh and Ravichander, Abhilasha and Jain, Vijayanta and Thaine, Patricia and Igamberdiev, Timour and Mireshghallah, Niloofar and Feyisetan, Oluwaseyi", booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.privatenlp-1.8", pages = "74--84", abstract = "Automated clinical text anonymization has the potential to unlock the widespread sharing of textual health data for secondary usage while assuring patient privacy. Despite the proposal of many complex and theoretically successful anonymization solutions in literature, these techniques remain flawed. As such, clinical institutions are still reluctant to apply them for open access to their data. Recent advances in developing Large Language Models (LLMs) pose a promising opportunity to further the field, given their capability to perform various tasks. This paper proposes six new evaluation metrics tailored to the challenges of generative anonymization with LLMs. Moreover, we present a comparative study of LLM-based methods, testing them against two baseline techniques. Our results establish LLM-based models as a reliable alternative to common approaches, paving the way toward trustworthy anonymization of clinical text.", }
[ "Pissarra, David", "Curioso, Isabel", "Alveira, Jo{\\~a}o", "Pereira, Duarte", "Ribeiro, Bruno", "Souper, Tom{\\'a}s", "Gomes, Vasco", "Carreiro, Andr{\\'e}", "Rolla, Vitor" ]
Unlocking the Potential of Large Language Models for Clinical Text Anonymization: A Comparative Study
privatenlp-1.8
Poster
2406.00062
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.privatenlp-1.8/
[]
[]
[]
0
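The abstract does not spell out the six proposed metrics, but a generic de-identification metric of this kind can be sketched as follows. `pii_removal_recall` is a hypothetical illustration of the metric family, not one of the paper's metrics.

```python
def pii_removal_recall(pii_spans, anonymized_text):
    """Fraction of known PII strings absent from the anonymized text.
    A generic de-identification metric for illustration only; not one
    of the six metrics proposed in the paper."""
    if not pii_spans:
        return 1.0
    leaked = sum(1 for span in pii_spans if span in anonymized_text)
    return 1 - leaked / len(pii_spans)

anonymized = "Patient [NAME] seen on [DATE] at Lisbon clinic."
recall = pii_removal_recall(["Maria Silva", "12/03/1980", "Lisbon"], anonymized)
```

Here the location "Lisbon" survived anonymization, so recall is 2/3 rather than 1.0.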
https://aclanthology.org/2024.privatenlp-1.9.bib
@inproceedings{alves-etal-2024-anonymization, title = "Anonymization Through Substitution: Words vs Sentences", author = "Alves, Vasco and Rolla, Vitor and Alveira, Jo{\~a}o and Pissarra, David and Pereira, Duarte and Curioso, Isabel and Carreiro, Andr{\'e} and Lopes Cardoso, Henrique", editor = "Habernal, Ivan and Ghanavati, Sepideh and Ravichander, Abhilasha and Jain, Vijayanta and Thaine, Patricia and Igamberdiev, Timour and Mireshghallah, Niloofar and Feyisetan, Oluwaseyi", booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.privatenlp-1.9", pages = "85--90", abstract = "Anonymization of clinical text is crucial to allow the sharing and disclosure of health records while safeguarding patient privacy. However, automated anonymization processes are still highly limited in healthcare practice, as these systems cannot assure the anonymization of all private information. This paper explores the application of a novel technique that guarantees the removal of all sensitive information through the usage of text embeddings obtained from a de-identified dataset, replacing every word or sentence of a clinical note. We analyze the performance of different embedding techniques and models by evaluating them using recently proposed evaluation metrics. The results demonstrate that sentence replacement is better at keeping relevant medical information untouched, while the word replacement strategy performs better in terms of anonymization sensitivity.", }
[ "Alves, Vasco", "Rolla, Vitor", "Alveira, Jo{\\~a}o", "Pissarra, David", "Pereira, Duarte", "Curioso, Isabel", "Carreiro, Andr{\\'e}", "Lopes Cardoso, Henrique" ]
Anonymization Through Substitution: Words vs Sentences
privatenlp-1.9
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.privatenlp-1.9/
[]
[]
[]
0
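The substitution idea can be sketched at the word level: every token is replaced by its nearest neighbour drawn only from a de-identified vocabulary, so no original (possibly identifying) token survives. The embedding values below are toy assumptions; the paper's sentence-level variant applies the same lookup to sentence embeddings.

```python
import math

def nearest_safe_word(word, safe_vocab, embed):
    """Pick the closest word from a de-identified vocabulary, so the
    output never contains the original token."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))
    return max(safe_vocab, key=lambda w: cos(embed[word], embed[w]))

# Toy embeddings; a real system derives them from a de-identified corpus.
embed = {
    "clinic": [1.0, 0.2], "hospital": [0.9, 0.3],
    "tuesday": [0.1, 1.0], "weekday": [0.2, 0.9],
}
anonymized = [nearest_safe_word(w, ["hospital", "weekday"], embed)
              for w in ["clinic", "tuesday"]]
```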
https://aclanthology.org/2024.privatenlp-1.10.bib
@inproceedings{peng-etal-2024-pocketllm, title = "{P}ocket{LLM}: Enabling On-Device Fine-Tuning for Personalized {LLM}s", author = "Peng, Dan and Fu, Zhihui and Wang, Jun", editor = "Habernal, Ivan and Ghanavati, Sepideh and Ravichander, Abhilasha and Jain, Vijayanta and Thaine, Patricia and Igamberdiev, Timour and Mireshghallah, Niloofar and Feyisetan, Oluwaseyi", booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.privatenlp-1.10", pages = "91--96", abstract = "Recent advancements in large language models (LLMs) have indeed showcased their impressive capabilities. On mobile devices, the wealth of valuable, non-public data generated daily holds great promise for locally fine-tuning personalized LLMs, while maintaining privacy through on-device processing. However, the constraints of mobile device resources pose challenges to direct on-device LLM fine-tuning, mainly due to the memory-intensive nature of derivative-based optimization required for saving gradients and optimizer states. To tackle this, we propose employing derivative-free optimization techniques to enable on-device fine-tuning of LLM, even on memory-limited mobile devices. Empirical results demonstrate that the RoBERTa-large model and OPT-1.3B can be fine-tuned locally on the OPPO Reno 6 smartphone using around 4GB and 6.5GB of memory respectively, using derivative-free optimization techniques. This highlights the feasibility of on-device LLM fine-tuning on mobile devices, paving the way for personalized LLMs on resource-constrained devices while safeguarding data privacy.", }
[ "Peng, Dan", "Fu, Zhihui", "Wang, Jun" ]
PocketLLM: Enabling On-Device Fine-Tuning for Personalized LLMs
privatenlp-1.10
Poster
2407.01031
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.privatenlp-1.10/
[]
[]
[]
0
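The memory saving comes from avoiding backpropagation entirely: a derivative-free optimizer needs only forward (loss) evaluations, so no gradients or optimizer states are stored. A minimal two-point random-direction estimator on a toy objective, not the paper's exact algorithm, looks like this:

```python
import random

def zeroth_order_step(params, loss_fn, eps=1e-3, lr=0.05):
    """One derivative-free update: estimate the directional derivative
    from two forward passes along a random direction, so no backward
    pass (and no stored gradients or optimizer state) is needed."""
    z = [random.gauss(0, 1) for _ in params]
    plus = loss_fn([p + eps * zi for p, zi in zip(params, z)])
    minus = loss_fn([p - eps * zi for p, zi in zip(params, z)])
    g = (plus - minus) / (2 * eps)  # projected gradient estimate
    return [p - lr * g * zi for p, zi in zip(params, z)]

def loss(w):  # toy stand-in for a fine-tuning objective
    return (w[0] - 3) ** 2 + (w[1] + 1) ** 2

random.seed(0)
w = [0.0, 0.0]
for _ in range(300):
    w = zeroth_order_step(w, loss)
```

Each step costs two forward passes and a vector update, which is what makes fine-tuning feasible within a phone's memory budget.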
https://aclanthology.org/2024.privatenlp-1.11.bib
@inproceedings{gutierrez-megias-etal-2024-smart, title = "Smart Lexical Search for Label Flipping Adversial Attack", author = "Guti{\'e}rrez-Meg{\'\i}as, Alberto and Jim{\'e}nez-Zafra, Salud Mar{\'\i}a and Ure{\~n}a, L. Alfonso and Mart{\'\i}nez-C{\'a}mara, Eugenio", editor = "Habernal, Ivan and Ghanavati, Sepideh and Ravichander, Abhilasha and Jain, Vijayanta and Thaine, Patricia and Igamberdiev, Timour and Mireshghallah, Niloofar and Feyisetan, Oluwaseyi", booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.privatenlp-1.11", pages = "97--106", abstract = "Language models are susceptible to vulnerability through adversarial attacks, using manipulations of the input data to disrupt their performance. Accordingly, it represents a cybersecurity leak. Data manipulations are intended to be unidentifiable by the learning model and by humans; small changes can disturb the final label of a classification task. Hence, we propose a novel attack built upon explainability methods to identify the salient lexical units to alter in order to flip the classification label. We assess our proposal on a disinformation dataset, and we show that our attack reaches a high balance between stealthiness and efficiency.", }
[ "Guti{\\'e}rrez-Meg{\\'\\i}as, Alberto", "Jim{\\'e}nez-Zafra, Salud Mar{\\'\\i}a", "Ure{\\~n}a, L. Alfonso", "Mart{\\'\\i}nez-C{\\'a}mara, Eugenio" ]
Smart Lexical Search for Label Flipping Adversial Attack
privatenlp-1.11
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.privatenlp-1.11/
[]
[]
[]
0
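The attack pipeline, ranking lexical units by an explainability signal and altering the most salient ones until the label flips, can be sketched with a toy linear classifier and occlusion saliency. The classifier, weights, and substitution lexicon below are all illustrative assumptions, not the paper's models.

```python
def score(tokens, weights):
    """Toy linear classifier: positive total means label 1."""
    return sum(weights.get(t, 0.0) for t in tokens)

def occlusion_saliency(tokens, weights):
    """Rank tokens by how much removing each one moves the score."""
    base = score(tokens, weights)
    return {i: abs(base - score(tokens[:i] + tokens[i + 1:], weights))
            for i in range(len(tokens))}

def flip_label(tokens, weights, substitutes):
    """Greedily swap the most salient substitutable words until the
    predicted label flips."""
    tokens = list(tokens)
    start = score(tokens, weights) > 0
    sal = occlusion_saliency(tokens, weights)
    for i in sorted(sal, key=sal.get, reverse=True):
        if tokens[i] in substitutes:
            tokens[i] = substitutes[tokens[i]]
            if (score(tokens, weights) > 0) != start:
                break
    return tokens

weights = {"genuine": 2.0, "confirmed": 1.5, "rumour": -2.5, "alleged": -2.0}
adv = flip_label(["genuine", "confirmed", "report"], weights,
                 substitutes={"genuine": "alleged"})
```

Only one highly salient word is changed here, which is what keeps such attacks inconspicuous.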
https://aclanthology.org/2024.privatenlp-1.12.bib
@inproceedings{hartmann-etal-2024-llms, title = "Can {LLM}s get help from other {LLM}s without revealing private information?", author = "Hartmann, Florian and Tran, Duc-Hieu and Kairouz, Peter and C{\u{a}}rbune, Victor and Aguera Y Arcas, Blaise", editor = "Habernal, Ivan and Ghanavati, Sepideh and Ravichander, Abhilasha and Jain, Vijayanta and Thaine, Patricia and Igamberdiev, Timour and Mireshghallah, Niloofar and Feyisetan, Oluwaseyi", booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.privatenlp-1.12", pages = "107--122", abstract = "Cascades are a common type of machine learning systems in which a large, remote model can be queried if a local model is not able to accurately label a user{'}s data by itself. Serving stacks for large language models (LLMs) increasingly use cascades due to their ability to preserve task performance while dramatically reducing inference costs. However, applying cascade systems in situations where the local model has access to sensitive data constitutes a significant privacy risk for users since such data could be forwarded to the remote model. In this work, we show the feasibility of applying cascade systems in such setups by equipping the local model with privacy-preserving techniques that reduce the risk of leaking private information when querying the remote model. To quantify information leakage in such setups, we introduce two privacy measures. We then propose a system that leverages the recently introduced social learning paradigm in which LLMs collaboratively learn from each other by exchanging natural language. Using this paradigm, we demonstrate on several datasets that our methods minimize the privacy loss while at the same time improving task performance compared to a non-cascade baseline.", }
[ "Hartmann, Florian", "Tran, Duc-Hieu", "Kairouz, Peter", "C{\\u{a}}rbune, Victor", "Aguera Y Arcas, Blaise" ]
Can LLMs get help from other LLMs without revealing private information?
privatenlp-1.12
Poster
2404.01041
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.privatenlp-1.12/
[]
[]
[]
0
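The cascade-with-privacy setup can be sketched as a confidence-gated escalation in which the query is redacted before it ever reaches the remote model. The threshold, toy models, and digit-masking redaction are assumptions for illustration, not the paper's social-learning method or privacy measures.

```python
import re

def cascade_answer(query, local_model, remote_model, redact, threshold=0.8):
    """Answer locally when confident; otherwise redact private spans
    before escalating the query to the remote model."""
    label, confidence = local_model(query)
    if confidence >= threshold:
        return label, "local"
    return remote_model(redact(query)), "remote"

local_model = lambda q: ("spam", 0.9) if "offer" in q else ("unknown", 0.3)
redact = lambda q: re.sub(r"\d", "#", q)  # crude stand-in for real redaction
remote_model = lambda q: "ham"            # remote model never sees raw digits
```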
https://aclanthology.org/2024.privatenlp-1.13.bib
@inproceedings{riabi-etal-2024-cloaked, title = "Cloaked Classifiers: Pseudonymization Strategies on Sensitive Classification Tasks", author = "Riabi, Arij and Mahamdi, Menel and Mouilleron, Virginie and Seddah, Djam{\'e}", editor = "Habernal, Ivan and Ghanavati, Sepideh and Ravichander, Abhilasha and Jain, Vijayanta and Thaine, Patricia and Igamberdiev, Timour and Mireshghallah, Niloofar and Feyisetan, Oluwaseyi", booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.privatenlp-1.13", pages = "123--136", abstract = "Protecting privacy is essential when sharing data, particularly in the case of an online radicalization dataset that may contain personal information. In this paper, we explore the balance between preserving data usefulness and ensuring robust privacy safeguards, since regulations like the European GDPR shape how personal information must be handled. We share our method for manually pseudonymizing a multilingual radicalization dataset, ensuring performance comparable to the original data. Furthermore, we highlight the importance of establishing comprehensive guidelines for processing sensitive NLP data by sharing our complete pseudonymization process, our guidelines, the challenges we encountered as well as the resulting dataset.", }
[ "Riabi, Arij", "Mahamdi, Menel", "Mouilleron, Virginie", "Seddah, Djam{\\'e}" ]
Cloaked Classifiers: Pseudonymization Strategies on Sensitive Classification Tasks
privatenlp-1.13
Poster
2406.17875
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.privatenlp-1.13/
[]
[]
[]
0
https://aclanthology.org/2024.privatenlp-1.14.bib
@inproceedings{kandula-etal-2024-improving, title = "Improving Authorship Privacy: Adaptive Obfuscation with the Dynamic Selection of Techniques", author = "Kandula, Hemanth and Karakos, Damianos and Qiu, Haoling and Ulicny, Brian", editor = "Habernal, Ivan and Ghanavati, Sepideh and Ravichander, Abhilasha and Jain, Vijayanta and Thaine, Patricia and Igamberdiev, Timour and Mireshghallah, Niloofar and Feyisetan, Oluwaseyi", booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.privatenlp-1.14", pages = "137--142", abstract = "Authorship obfuscation, the task of rewriting text to protect the original author{'}s identity, is becoming increasingly important due to the rise of advanced NLP tools for authorship attribution techniques. Traditional methods for authorship obfuscation face significant challenges in balancing content preservation, fluency, and style concealment. This paper introduces a novel approach, the Obfuscation Strategy Optimizer (OSO), which dynamically selects the optimal obfuscation technique based on a combination of metrics including embedding distance, meaning similarity, and fluency. By leveraging an ensemble of language models OSO achieves superior performance in preserving the original content{'}s meaning and grammatical fluency while effectively concealing the author{'}s unique writing style. Experimental results demonstrate that the OSO outperforms existing methods and approaches the performance of larger language models. Our evaluation framework incorporates adversarial testing against state-of-the-art attribution systems to validate the robustness of the obfuscation techniques. We release our code publicly at https://github.com/BBN-E/ObfuscationStrategyOptimizer", }
[ "K", "ula, Hemanth", "Karakos, Damianos", "Qiu, Haoling", "Ulicny, Brian" ]
Improving Authorship Privacy: Adaptive Obfuscation with the Dynamic Selection of Techniques
privatenlp-1.14
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.privatenlp-1.14/
[]
[]
[]
0
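The OSO selection step, scoring candidate rewrites with a weighted combination of metrics and keeping the best, can be sketched as follows. The two toy metrics and their weights are illustrative stand-ins for the paper's embedding-distance, meaning-similarity, and fluency measures.

```python
def choose_obfuscation(original, candidates, metrics, weights):
    """Score each candidate rewrite with a weighted metric combination
    and keep the best one."""
    def total(cand):
        return sum(weights[name] * fn(original, cand)
                   for name, fn in metrics.items())
    return max(candidates, key=total)

def word_overlap(a, b):  # toy meaning-preservation proxy
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def novelty(a, b):  # toy style-change proxy
    wb = b.split()
    return sum(1 for w in wb if w not in a.split()) / len(wb)

best = choose_obfuscation(
    "the quick brown fox",
    ["the fast brown fox", "a speedy animal"],
    metrics={"meaning": word_overlap, "novelty": novelty},
    weights={"meaning": 1.0, "novelty": 0.5},
)
```

Each candidate here would come from a different obfuscation technique; the optimizer picks per input rather than committing to one technique globally.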
https://aclanthology.org/2024.privatenlp-1.15.bib
@inproceedings{elmahdy-salem-2024-deconstructing, title = "Deconstructing Classifiers: Towards A Data Reconstruction Attack Against Text Classification Models", author = "Elmahdy, Adel and Salem, Ahmed", editor = "Habernal, Ivan and Ghanavati, Sepideh and Ravichander, Abhilasha and Jain, Vijayanta and Thaine, Patricia and Igamberdiev, Timour and Mireshghallah, Niloofar and Feyisetan, Oluwaseyi", booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.privatenlp-1.15", pages = "143--158", abstract = "Natural language processing (NLP) models have become increasingly popular in real-world applications, such as text classification. However, they are vulnerable to privacy attacks, including data reconstruction attacks that aim to extract the data used to train the model. Most previous studies on data reconstruction attacks have focused on LLM, while classification models were assumed to be more secure. In this work, we propose a new targeted data reconstruction attack called the Mix And Match attack, which takes advantage of the fact that most classification models are based on LLM. The Mix And Match attack uses the base model of the target model to generate candidate tokens and then prunes them using the classification head. We extensively demonstrate the effectiveness of the attack using both random and organic canaries. This work highlights the importance of considering the privacy risks associated with data reconstruction attacks in classification models and offers insights into possible leakages.", }
[ "Elmahdy, Adel", "Salem, Ahmed" ]
Deconstructing Classifiers: Towards A Data Reconstruction Attack Against Text Classification Models
privatenlp-1.15
Poster
2306.13789
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.privatenlp-1.15/
[]
[]
[]
0
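One step of the Mix And Match idea can be sketched as: the base model proposes candidate next tokens, and the target model's classification head prunes them, keeping only continuations the classifier scores highly. The candidate generator and scorer below are toy stand-ins for the actual base model and classification head.

```python
def mix_and_match_step(prefix, base_lm_candidates, classifier_score, k=2):
    """One reconstruction step: the base LM proposes next-token
    candidates and the classification head prunes them, keeping the
    k continuations the classifier finds most probable."""
    scored = [(classifier_score(prefix + [tok]), tok)
              for tok in base_lm_candidates(prefix)]
    scored.sort(reverse=True)
    return [tok for _, tok in scored[:k]]

# Toy stand-ins: a real attack queries the target's base model and head.
base_lm_candidates = lambda prefix: ["alice", "bob", "carol"]
classifier_score = lambda seq: {"alice": 0.9, "bob": 0.2, "carol": 0.5}[seq[-1]]
kept = mix_and_match_step([], base_lm_candidates, classifier_score)
```

Repeating this step token by token reconstructs likely training sequences, which is why fine-tuned classifiers are not as safe from extraction as previously assumed.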
https://aclanthology.org/2024.privatenlp-1.16.bib
@inproceedings{zoubi-etal-2024-privat5, title = "{P}riva{T}5: A Generative Language Model for Privacy Policies", author = "Zoubi, Mohammad and T.y.s.s, Santosh and Rosas, Edgar and Grabmair, Matthias", editor = "Habernal, Ivan and Ghanavati, Sepideh and Ravichander, Abhilasha and Jain, Vijayanta and Thaine, Patricia and Igamberdiev, Timour and Mireshghallah, Niloofar and Feyisetan, Oluwaseyi", booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.privatenlp-1.16", pages = "159--169", abstract = "In the era of of digital privacy, users often neglect to read privacy policies due to their complexity. To bridge this gap, NLP models have emerged to assist in understanding privacy policies. While recent generative language models like BART and T5 have shown prowess in text generation and discriminative tasks being framed as generative ones, their application to privacy policy domain tasks remains unexplored. To address that, we introduce PrivaT5, a T5-based model that is further pre-trained on privacy policy text. We evaluate PrivaT5 over a diverse privacy policy related tasks and notice its superior performance over T5, showing the utility of continued domain-specific pre-training. Our results also highlight challenges faced by these generative models in complex structured output label space, especially in sequence tagging tasks, where they fall short compared to lighter encoder-only models.", }
In the era of digital privacy, users often neglect to read privacy policies due to their complexity. To bridge this gap, NLP models have emerged to assist in understanding privacy policies. While recent generative language models like BART and T5 have shown prowess in text generation and in discriminative tasks framed as generative ones, their application to privacy policy domain tasks remains unexplored. To address this, we introduce PrivaT5, a T5-based model that is further pre-trained on privacy policy text. We evaluate PrivaT5 over a diverse set of privacy policy related tasks and observe its superior performance over T5, showing the utility of continued domain-specific pre-training. Our results also highlight challenges faced by these generative models in complex structured output label spaces, especially in sequence tagging tasks, where they fall short compared to lighter encoder-only models.
[ "Zoubi, Mohammad", "T.y.s.s, Santosh", "Rosas, Edgar", "Grabmair, Matthias" ]
PrivaT5: A Generative Language Model for Privacy Policies
privatenlp-1.16
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.privatenlp-1.16/
[]
[]
[]
0
https://aclanthology.org/2024.privatenlp-1.17.bib
@inproceedings{wang-etal-2024-reinforcement-learning, title = "Reinforcement Learning-Driven {LLM} Agent for Automated Attacks on {LLM}s", author = "Wang, Xiangwen and Peng, Jie and Xu, Kaidi and Yao, Huaxiu and Chen, Tianlong", editor = "Habernal, Ivan and Ghanavati, Sepideh and Ravichander, Abhilasha and Jain, Vijayanta and Thaine, Patricia and Igamberdiev, Timour and Mireshghallah, Niloofar and Feyisetan, Oluwaseyi", booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.privatenlp-1.17", pages = "170--177", abstract = "Recently, there has been a growing focus on conducting attacks on large language models (LLMs) to assess LLMs{'} safety. Yet, existing attack methods face challenges, including the need to access model weights or merely ensuring that LLMs output harmful information without controlling the specific content of their output. Exact control of the LLM output can produce more inconspicuous attacks, which could open a new chapter for LLM security. To achieve this, we propose RLTA: the Reinforcement Learning Targeted Attack, a framework designed for attacking large language models (LLMs) and adaptable to both white-box (weight accessible) and black-box (weight inaccessible) scenarios. It is capable of automatically generating malicious prompts that trigger target LLMs to produce specific outputs. We demonstrate RLTA in two different scenarios: LLM trojan detection and jailbreaking. The comprehensive experimental results show the potential of RLTA in enhancing the security measures surrounding contemporary LLMs.", }
Recently, there has been a growing focus on conducting attacks on large language models (LLMs) to assess LLMs{'} safety. Yet, existing attack methods face challenges, including the need to access model weights or merely ensuring that LLMs output harmful information without controlling the specific content of their output. Exact control of the LLM output can produce more inconspicuous attacks, which could open a new chapter for LLM security. To achieve this, we propose RLTA: the Reinforcement Learning Targeted Attack, a framework designed for attacking large language models (LLMs) and adaptable to both white-box (weight accessible) and black-box (weight inaccessible) scenarios. It is capable of automatically generating malicious prompts that trigger target LLMs to produce specific outputs. We demonstrate RLTA in two different scenarios: LLM trojan detection and jailbreaking. The comprehensive experimental results show the potential of RLTA in enhancing the security measures surrounding contemporary LLMs.
[ "Wang, Xiangwen", "Peng, Jie", "Xu, Kaidi", "Yao, Huaxiu", "Chen, Tianlong" ]
Reinforcement Learning-Driven LLM Agent for Automated Attacks on LLMs
privatenlp-1.17
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.privatenlp-1.17/
[]
[]
[]
0
https://aclanthology.org/2024.privatenlp-1.18.bib
@inproceedings{soni-demner-fushman-2024-privacy, title = "A Privacy-preserving Approach to Ingest Knowledge from Proprietary Web-based to Locally Run Models for Medical Progress Note Generation", author = "Soni, Sarvesh and Demner-Fushman, Dina", editor = "Habernal, Ivan and Ghanavati, Sepideh and Ravichander, Abhilasha and Jain, Vijayanta and Thaine, Patricia and Igamberdiev, Timour and Mireshghallah, Niloofar and Feyisetan, Oluwaseyi", booktitle = "Proceedings of the Fifth Workshop on Privacy in Natural Language Processing", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.privatenlp-1.18", pages = "178--183", abstract = "Clinical documentation is correlated with increasing clinician burden, leading to the rise of automated methods to generate medical notes. Due to the sensitive nature of patient electronic health records (EHRs), locally run models are preferred for a variety of reasons including privacy, bias, and cost. However, most open-source locally run models (including medical-specific) are much smaller with limited input context size compared to the more powerful closed-source large language models (LLMs) generally available through web APIs (Application Programming Interfaces). In this paper, we propose a framework to harness superior reasoning capabilities and medical knowledge from closed-source online LLMs in a privacy-preserving manner and seamlessly incorporate it into locally run models. Specifically, we leverage a web-based model to distill the vast patient information available in EHRs into a clinically relevant subset without sending sensitive patient health information online and use this distilled knowledge to generate progress notes by a locally run model. Our ablation results indicate that the proposed framework improves the performance of the Mixtral model on progress note generation by 4.6 points on ROUGE (a text-matching based metric) and 7.56 points on MEDCON F1 (a metric that measures the clinical concepts overlap).", }
Clinical documentation is correlated with increasing clinician burden, leading to the rise of automated methods to generate medical notes. Due to the sensitive nature of patient electronic health records (EHRs), locally run models are preferred for a variety of reasons including privacy, bias, and cost. However, most open-source locally run models (including medical-specific) are much smaller with limited input context size compared to the more powerful closed-source large language models (LLMs) generally available through web APIs (Application Programming Interfaces). In this paper, we propose a framework to harness superior reasoning capabilities and medical knowledge from closed-source online LLMs in a privacy-preserving manner and seamlessly incorporate it into locally run models. Specifically, we leverage a web-based model to distill the vast patient information available in EHRs into a clinically relevant subset without sending sensitive patient health information online and use this distilled knowledge to generate progress notes by a locally run model. Our ablation results indicate that the proposed framework improves the performance of the Mixtral model on progress note generation by 4.6 points on ROUGE (a text-matching based metric) and 7.56 points on MEDCON F1 (a metric that measures the clinical concepts overlap).
[ "Soni, Sarvesh", "Demner-Fushman, Dina" ]
A Privacy-preserving Approach to Ingest Knowledge from Proprietary Web-based to Locally Run Models for Medical Progress Note Generation
privatenlp-1.18
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.privatenlp-1.18/
[]
[]
[]
0
https://aclanthology.org/2024.repl4nlp-1.1.bib
@inproceedings{oda-etal-2024-learning, title = "Learning Contextualized Box Embeddings with Prototypical Networks", author = "Oda, Kohei and Shirai, Kiyoaki and Kertkeidkachorn, Natthawut", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.1", pages = "1--12", abstract = "This paper proposes ProtoBox, a novel method to learn contextualized box embeddings. Unlike an ordinary word embedding, which represents a word as a single vector, a box embedding represents the meaning of a word as a box in a high-dimensional space, which is suitable for representing semantic relations between words. In addition, our method aims to obtain a {``}contextualized{''} box embedding, which is an abstract representation of a word in a specific context. ProtoBox is based on Prototypical Networks, a robust method for classification problems, and focuses especially on learning the hypernym{--}hyponym relation between senses. ProtoBox is evaluated on three tasks: Word Sense Disambiguation (WSD), New Sense Classification (NSC), and Hypernym Identification (HI). Experimental results show that ProtoBox outperforms baselines for the HI task and is comparable for the WSD and NSC tasks.", }
This paper proposes ProtoBox, a novel method to learn contextualized box embeddings. Unlike an ordinary word embedding, which represents a word as a single vector, a box embedding represents the meaning of a word as a box in a high-dimensional space, which is suitable for representing semantic relations between words. In addition, our method aims to obtain a {``}contextualized{''} box embedding, which is an abstract representation of a word in a specific context. ProtoBox is based on Prototypical Networks, a robust method for classification problems, and focuses especially on learning the hypernym{--}hyponym relation between senses. ProtoBox is evaluated on three tasks: Word Sense Disambiguation (WSD), New Sense Classification (NSC), and Hypernym Identification (HI). Experimental results show that ProtoBox outperforms baselines for the HI task and is comparable for the WSD and NSC tasks.
[ "Oda, Kohei", "Shirai, Kiyoaki", "Kertkeidkachorn, Natthawut" ]
Learning Contextualized Box Embeddings with Prototypical Networks
repl4nlp-1.1
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.1/
[]
[]
[]
0
https://aclanthology.org/2024.repl4nlp-1.2.bib
@inproceedings{khandelwal-2024-domaininv, title = "{D}omain{I}nv: Domain Invariant Fine Tuning and Adversarial Label Correction For Unsupervised {QA} Domain Adaptation", author = "Khandelwal, Anant", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.2", pages = "13--25", abstract = "Existing Question Answering (QA) systems are limited in their ability to answer questions from unseen domains or any out-of-domain distributions, making them less reliable for deployment in real scenarios. Importantly, all existing QA domain adaptation methods are either based on generating synthetic data or pseudo-labeling the target domain data. Domain adaptation methods relying on synthetic data and pseudo-labeling suffer from either the need for extensive computational resources or an additional overhead of carefully selecting the confidence threshold to distinguish noisy examples from the training dataset. In this paper, we propose unsupervised domain adaptation for an unlabeled target domain by transferring the target representation close to the source domain without using supervision from the target domain. To achieve this, we introduce the idea of domain-invariant fine-tuning along with adversarial label correction (DomainInv) to identify target instances that are distant from the source domain. This involves learning a domain-invariant feature encoder to minimize the distance between such target instances and source instances in a class-wise manner. This eliminates the possibility of learning features of the target domain that are still close to the source support but are ambiguous. The evaluation of our QA domain adaptation method, namely DomainInv, on multiple target QA datasets reveals a performance improvement over the strongest baseline.", }
Existing Question Answering (QA) systems are limited in their ability to answer questions from unseen domains or any out-of-domain distributions, making them less reliable for deployment in real scenarios. Importantly, all existing QA domain adaptation methods are either based on generating synthetic data or pseudo-labeling the target domain data. Domain adaptation methods relying on synthetic data and pseudo-labeling suffer from either the need for extensive computational resources or an additional overhead of carefully selecting the confidence threshold to distinguish noisy examples from the training dataset. In this paper, we propose unsupervised domain adaptation for an unlabeled target domain by transferring the target representation close to the source domain without using supervision from the target domain. To achieve this, we introduce the idea of domain-invariant fine-tuning along with adversarial label correction (DomainInv) to identify target instances that are distant from the source domain. This involves learning a domain-invariant feature encoder to minimize the distance between such target instances and source instances in a class-wise manner. This eliminates the possibility of learning features of the target domain that are still close to the source support but are ambiguous. The evaluation of our QA domain adaptation method, namely DomainInv, on multiple target QA datasets reveals a performance improvement over the strongest baseline.
[ "Khandelwal, Anant" ]
DomainInv: Domain Invariant Fine Tuning and Adversarial Label Correction For Unsupervised QA Domain Adaptation
repl4nlp-1.2
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.2/
[]
[]
[]
0
https://aclanthology.org/2024.repl4nlp-1.3.bib
@inproceedings{ju-etal-2024-relevance, title = "Relevance-aware Diverse Query Generation for Out-of-domain Text Ranking", author = "Ju, Jia-Huei and Yang, Chao-Han and Fu, Szu-Wei and Tsai, Ming-Feng and Wang, Chuan-Ju", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.3", pages = "26--36", abstract = "Domain adaptation presents significant challenges for out-of-domain text ranking, especially when supervised data is limited. In this paper, we present ReadQG (Relevance-Aware Diverse Query Generation), a method to generate informative synthetic queries to facilitate the adaptation process of text ranking models. Unlike previous approaches focusing solely on relevant query generation, our ReadQG generates diverse queries with continuous relevance scores. Specifically, we propose leveraging soft-prompt tuning and diverse generation objectives to control query generation according to the given relevance. Our experiments show that integrating negative queries into the learning process enhances the effectiveness of text ranking models in out-of-domain information retrieval (IR) benchmarks. Furthermore, we measure the quality of query generation, highlighting the underlying beneficial characteristics of negative queries. Our empirical results and analysis also shed light on potential directions for more advanced data augmentation in IR. The data and code have been released.", }
Domain adaptation presents significant challenges for out-of-domain text ranking, especially when supervised data is limited. In this paper, we present ReadQG (Relevance-Aware Diverse Query Generation), a method to generate informative synthetic queries to facilitate the adaptation process of text ranking models. Unlike previous approaches focusing solely on relevant query generation, our ReadQG generates diverse queries with continuous relevance scores. Specifically, we propose leveraging soft-prompt tuning and diverse generation objectives to control query generation according to the given relevance. Our experiments show that integrating negative queries into the learning process enhances the effectiveness of text ranking models in out-of-domain information retrieval (IR) benchmarks. Furthermore, we measure the quality of query generation, highlighting the underlying beneficial characteristics of negative queries. Our empirical results and analysis also shed light on potential directions for more advanced data augmentation in IR. The data and code have been released.
[ "Ju, Jia-Huei", "Yang, Chao-Han", "Fu, Szu-Wei", "Tsai, Ming-Feng", "Wang, Chuan-Ju" ]
Relevance-aware Diverse Query Generation for Out-of-domain Text Ranking
repl4nlp-1.3
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.3/
[]
[]
[]
0
https://aclanthology.org/2024.repl4nlp-1.4.bib
@inproceedings{igbaria-belinkov-2024-learning, title = "Learning from Others: Similarity-based Regularization for Mitigating Dataset Bias.", author = "Igbaria, Reda and Belinkov, Yonatan", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.4", pages = "37--50", abstract = "Common methods for mitigating spurious correlations in natural language understanding (NLU) usually operate in the output space, encouraging a main model to behave differently from a bias model by down-weighting examples where the bias model is confident. While improving out-of-distribution (OOD) performance, it was recently observed that the internal representations of the presumably debiased models are actually more, rather than less, biased. We propose SimReg, a new method for debiasing internal model components via similarity-based regularization in representation space: we encourage the model to learn representations that are either similar to an unbiased model or different from a biased model. We experiment with three NLU tasks and different kinds of biases. We find that SimReg improves OOD performance, with little in-distribution degradation. Moreover, the representations learned by SimReg are less biased than in other methods.", }
Common methods for mitigating spurious correlations in natural language understanding (NLU) usually operate in the output space, encouraging a main model to behave differently from a bias model by down-weighting examples where the bias model is confident. While improving out-of-distribution (OOD) performance, it was recently observed that the internal representations of the presumably debiased models are actually more, rather than less, biased. We propose SimReg, a new method for debiasing internal model components via similarity-based regularization in representation space: we encourage the model to learn representations that are either similar to an unbiased model or different from a biased model. We experiment with three NLU tasks and different kinds of biases. We find that SimReg improves OOD performance, with little in-distribution degradation. Moreover, the representations learned by SimReg are less biased than in other methods.
[ "Igbaria, Reda", "Belinkov, Yonatan" ]
Learning from Others: Similarity-based Regularization for Mitigating Dataset Bias.
repl4nlp-1.4
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.4/
[]
[]
[]
0
https://aclanthology.org/2024.repl4nlp-1.5.bib
@inproceedings{pereira-etal-2024-prior, title = "Prior Knowledge-Guided Adversarial Training", author = "Pereira, Lis and Cheng, Fei and She, Wan Jou and Asahara, Masayuki and Kobayashi, Ichiro", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.5", pages = "51--57", abstract = "We introduce a simple yet effective Prior Knowledge-Guided ADVersarial Training (PKG-ADV) algorithm to improve adversarial training for natural language understanding. Our method simply utilizes task-specific label distribution to guide the training process. By prioritizing the use of prior knowledge of labels, we aim to generate more informative adversarial perturbations. We apply our model to several challenging temporal reasoning tasks. Our method enables a more reliable and controllable data training process than relying on randomized adversarial perturbation. Albeit simple, our method achieved significant improvements in these tasks. To facilitate further research, we will release the code and models.", }
We introduce a simple yet effective Prior Knowledge-Guided ADVersarial Training (PKG-ADV) algorithm to improve adversarial training for natural language understanding. Our method simply utilizes task-specific label distribution to guide the training process. By prioritizing the use of prior knowledge of labels, we aim to generate more informative adversarial perturbations. We apply our model to several challenging temporal reasoning tasks. Our method enables a more reliable and controllable data training process than relying on randomized adversarial perturbation. Albeit simple, our method achieved significant improvements in these tasks. To facilitate further research, we will release the code and models.
[ "Pereira, Lis", "Cheng, Fei", "She, Wan Jou", "Asahara, Masayuki", "Kobayashi, Ichiro" ]
Prior Knowledge-Guided Adversarial Training
repl4nlp-1.5
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.5/
[]
[]
[]
0
https://aclanthology.org/2024.repl4nlp-1.6.bib
@inproceedings{kim-kim-2024-tuning, title = "{IT}-Tuning : Parameter Efficient Information Token Tuning for Language Model", author = "Kim, Jungu and Kim, Hyeoncheol", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.6", pages = "58--68", abstract = "Recently, language models have demonstrated exceptional performance compared to their predecessors. In this context, attention mechanisms and pre-training significantly contribute to the enhanced performance of modern language models. Additionally, a continuously increasing number of parameters plays a crucial role in these advancements. However, an increase in the number of parameters significantly increases the GPU memory and training time required during fine-tuning of language models, making fine-tuning infeasible in environments with limited computing resources. Furthermore, after fine-tuning, the storage space required for deployment increases proportionally with the number of tasks, making deployment challenging on devices with limited storage capacities. In this study, we propose IT-Tuning, a Parameter Efficient Fine-Tuning method that introduces a new concept called information tokens to address these issues.", }
Recently, language models have demonstrated exceptional performance compared to their predecessors. In this context, attention mechanisms and pre-training significantly contribute to the enhanced performance of modern language models. Additionally, a continuously increasing number of parameters plays a crucial role in these advancements. However, an increase in the number of parameters significantly increases the GPU memory and training time required during fine-tuning of language models, making fine-tuning infeasible in environments with limited computing resources. Furthermore, after fine-tuning, the storage space required for deployment increases proportionally with the number of tasks, making deployment challenging on devices with limited storage capacities. In this study, we propose IT-Tuning, a Parameter Efficient Fine-Tuning method that introduces a new concept called information tokens to address these issues.
[ "Kim, Jungu", "Kim, Hyeoncheol" ]
IT-Tuning : Parameter Efficient Information Token Tuning for Language Model
repl4nlp-1.6
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.6/
[]
[]
[]
0
https://aclanthology.org/2024.repl4nlp-1.7.bib
@inproceedings{raj-etal-2024-bridging, title = "Bridging the Gap: Transfer Learning from {E}nglish {PLM}s to {M}alaysian {E}nglish", author = "Raj, Mohan and Soon, Lay-Ki and Ong, Huey Fang and Selvaretnam, Bhawani", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.7", pages = "69--77", abstract = "Malaysian English is a low-resource creole language that carries elements of the Malay, Chinese, and Tamil languages, in addition to Standard English. Named Entity Recognition (NER) models underperform when capturing entities from Malaysian English text due to its distinctive morphosyntactic adaptations, semantic features and code-switching (mixing English and Malay). Considering these gaps, we introduce MENmBERT and MENBERT, pre-trained language models with contextual understanding, specifically tailored for Malaysian English. We have fine-tuned MENmBERT and MENBERT using manually annotated entities and relations from the Malaysian English News Article (MEN) Dataset. This fine-tuning process allows the PLM to learn representations that capture the nuances of Malaysian English relevant for NER and RE tasks. MENmBERT achieved improvements of 1.52{\%} and 26.27{\%} on NER and RE tasks respectively, compared to the bert-base-multilingual-cased model. While the overall performance for NER does not improve significantly, our further analysis shows a significant improvement when evaluated across the 12 entity labels. These findings suggest that pre-training language models on language-specific and geographically-focused corpora can be a promising approach for improving NER performance in low-resource settings. The dataset and code published through this paper provide valuable resources for NLP research work focusing on Malaysian English.", }
Malaysian English is a low-resource creole language that carries elements of the Malay, Chinese, and Tamil languages, in addition to Standard English. Named Entity Recognition (NER) models underperform when capturing entities from Malaysian English text due to its distinctive morphosyntactic adaptations, semantic features and code-switching (mixing English and Malay). Considering these gaps, we introduce MENmBERT and MENBERT, pre-trained language models with contextual understanding, specifically tailored for Malaysian English. We have fine-tuned MENmBERT and MENBERT using manually annotated entities and relations from the Malaysian English News Article (MEN) Dataset. This fine-tuning process allows the PLM to learn representations that capture the nuances of Malaysian English relevant for NER and RE tasks. MENmBERT achieved improvements of 1.52{\%} and 26.27{\%} on NER and RE tasks respectively, compared to the bert-base-multilingual-cased model. While the overall performance for NER does not improve significantly, our further analysis shows a significant improvement when evaluated across the 12 entity labels. These findings suggest that pre-training language models on language-specific and geographically-focused corpora can be a promising approach for improving NER performance in low-resource settings. The dataset and code published through this paper provide valuable resources for NLP research work focusing on Malaysian English.
[ "Raj, Mohan", "Soon, Lay-Ki", "Ong, Huey Fang", "Selvaretnam, Bhawani" ]
Bridging the Gap: Transfer Learning from English PLMs to Malaysian English
repl4nlp-1.7
Poster
2407.01374
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.7/
[]
[]
[]
0
https://aclanthology.org/2024.repl4nlp-1.8.bib
@inproceedings{feng-etal-2024-unified, title = "Unified Interpretation of Smoothing Methods for Negative Sampling Loss Functions in Knowledge Graph Embedding", author = "Feng, Xincan and Kamigaito, Hidetaka and Hayashi, Katsuhiko and Watanabe, Taro", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.8", pages = "78--98", abstract = "Knowledge Graphs (KGs) are fundamental resources in knowledge-intensive tasks in NLP. Due to the limitation of manually creating KGs, KG Completion (KGC) has an important role in automatically completing KGs by scoring their links with KG Embedding (KGE). To handle many entities in training, KGE relies on Negative Sampling (NS) loss that can reduce the computational cost by sampling. Since the appearance frequencies for each link are at most one in KGs, sparsity is an essential and inevitable problem. The NS loss is no exception. As a solution, the NS loss in KGE relies on smoothing methods like Self-Adversarial Negative Sampling (SANS) and subsampling. However, it is uncertain what kind of smoothing method is suitable for this purpose due to the lack of theoretical understanding. This paper provides theoretical interpretations of the smoothing methods for the NS loss in KGE and induces a new NS loss, Triplet Adaptive Negative Sampling (TANS), that can cover the characteristics of the conventional smoothing methods. Experimental results of TransE, DistMult, ComplEx, RotatE, HAKE, and HousE on FB15k-237, WN18RR, and YAGO3-10 datasets and their sparser subsets show the soundness of our interpretation and performance improvement by our TANS.", }
Knowledge Graphs (KGs) are fundamental resources in knowledge-intensive tasks in NLP. Due to the limitation of manually creating KGs, KG Completion (KGC) has an important role in automatically completing KGs by scoring their links with KG Embedding (KGE). To handle many entities in training, KGE relies on Negative Sampling (NS) loss that can reduce the computational cost by sampling. Since the appearance frequencies for each link are at most one in KGs, sparsity is an essential and inevitable problem. The NS loss is no exception. As a solution, the NS loss in KGE relies on smoothing methods like Self-Adversarial Negative Sampling (SANS) and subsampling. However, it is uncertain what kind of smoothing method is suitable for this purpose due to the lack of theoretical understanding. This paper provides theoretical interpretations of the smoothing methods for the NS loss in KGE and induces a new NS loss, Triplet Adaptive Negative Sampling (TANS), that can cover the characteristics of the conventional smoothing methods. Experimental results of TransE, DistMult, ComplEx, RotatE, HAKE, and HousE on FB15k-237, WN18RR, and YAGO3-10 datasets and their sparser subsets show the soundness of our interpretation and performance improvement by our TANS.
[ "Feng, Xincan", "Kamigaito, Hidetaka", "Hayashi, Katsuhiko", "Watanabe, Taro" ]
Unified Interpretation of Smoothing Methods for Negative Sampling Loss Functions in Knowledge Graph Embedding
repl4nlp-1.8
Poster
2407.04251
[ "https://github.com/xincanfeng/ss_kge" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.8/
[]
[]
[]
0
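The Self-Adversarial Negative Sampling (SANS) smoothing discussed in the abstract above can be sketched in a few lines. This is a minimal numpy illustration with an arbitrary margin gamma=12 and temperature alpha=1; it is not the paper's proposed TANS loss, only the conventional SANS it generalises:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sans_loss(pos_score, neg_scores, gamma=12.0, alpha=1.0):
    """SANS loss for one positive triplet and its sampled negatives.

    Scores are distance-style (lower = more plausible), as in TransE.
    Negatives are reweighted by a softmax over their own plausibility
    with temperature alpha, so harder negatives contribute more.
    """
    logits = alpha * (gamma - neg_scores)
    weights = np.exp(logits - logits.max())
    weights = weights / weights.sum()          # softmax over negatives
    pos_term = -np.log(sigmoid(gamma - pos_score))
    neg_term = -np.sum(weights * np.log(sigmoid(neg_scores - gamma)))
    return pos_term + neg_term

# toy TransE-style distances: one positive, three sampled negatives
loss = sans_loss(pos_score=2.0, neg_scores=np.array([15.0, 11.0, 20.0]))
```

The negative with distance 11 (a hard negative, close to the margin) dominates the weighted sum, which is the smoothing effect the paper analyses.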
https://aclanthology.org/2024.repl4nlp-1.9.bib
@inproceedings{uppaal-etal-2024-useful, title = "How Useful is Continued Pre-Training for Generative Unsupervised Domain Adaptation?", author = "Uppaal, Rheeya and Li, Yixuan and Hu, Junjie", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.9", pages = "99--117", abstract = "Recent breakthroughs in scale have enabled the emergence of powerful generative language models, and the ability to fine-tune these models on various tasks by casting them into prompts or instructions. In this landscape, the problem of Unsupervised Domain Adaptation (UDA), or the problem of leveraging knowledge from a labeled source domain to an unlabeled target domain, has been left behind, with recent UDA methods still addressing discriminative classification. In particular, two popular UDA approaches, involving Continued Pre-Training (CPT) and learning domain invariant representations, have been under-explored in the generative setting, signaling a gap. In this work, we evaluate the utility of CPT for generative UDA. We first perform an empirical evaluation to measure the trade-offs between CPT and strong methods promoting domain invariance. We further evaluate how well the benefits of CPT extend to different architectures, tuning methods and data regimes. We then motivate the use of CPT by studying to what degree it benefits classification performance on the target domain. Finally, we attempt to understand the mechanism behind which CPT improves classification performance on the unlabeled target domain. Our findings suggest that CPT implicitly learns the downstream task while predicting masked words informative to that task. Our work connects the body of UDA research with that of instruction tuning, enabling an initial step towards a wider applicability of modern language models.", }
Recent breakthroughs in scale have enabled the emergence of powerful generative language models, and the ability to fine-tune these models on various tasks by casting them into prompts or instructions. In this landscape, the problem of Unsupervised Domain Adaptation (UDA), or the problem of leveraging knowledge from a labeled source domain to an unlabeled target domain, has been left behind, with recent UDA methods still addressing discriminative classification. In particular, two popular UDA approaches, involving Continued Pre-Training (CPT) and learning domain invariant representations, have been under-explored in the generative setting, signaling a gap. In this work, we evaluate the utility of CPT for generative UDA. We first perform an empirical evaluation to measure the trade-offs between CPT and strong methods promoting domain invariance. We further evaluate how well the benefits of CPT extend to different architectures, tuning methods and data regimes. We then motivate the use of CPT by studying to what degree it benefits classification performance on the target domain. Finally, we attempt to understand the mechanism behind which CPT improves classification performance on the unlabeled target domain. Our findings suggest that CPT implicitly learns the downstream task while predicting masked words informative to that task. Our work connects the body of UDA research with that of instruction tuning, enabling an initial step towards a wider applicability of modern language models.
[ "Uppaal, Rheeya", "Li, Yixuan", "Hu, Junjie" ]
How Useful is Continued Pre-Training for Generative Unsupervised Domain Adaptation?
repl4nlp-1.9
Poster
2401.17514
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.9/
[]
[]
[]
0
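The masked-word-prediction mechanism the abstract above points to can be illustrated by the data side of MLM-style continued pre-training on unlabeled target-domain text. The 15% rate and `[MASK]` symbol are the usual BERT conventions, assumed here for illustration rather than taken from the paper:

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", p=0.15, rng=None):
    """Masking step of MLM-style continued pre-training (CPT):
    each token is replaced by [MASK] with probability p, and the
    original token becomes the prediction target at that position."""
    rng = rng or random.Random(0)
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < p:
            inputs.append(mask_token)
            labels.append(tok)      # model must recover this token
        else:
            inputs.append(tok)
            labels.append(None)     # position not scored
    return inputs, labels
```

During CPT the model is trained to fill the masked positions; the paper's finding is that doing so on target-domain text implicitly teaches the downstream task.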
https://aclanthology.org/2024.repl4nlp-1.10.bib
@inproceedings{gow-smith-etal-2024-word, title = "Word Boundary Information Isn{'}t Useful for Encoder Language Models", author = "Gow-Smith, Edward and Phelps, Dylan and Tayyar Madabushi, Harish and Scarton, Carolina and Villavicencio, Aline", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.10", pages = "118--135", abstract = "All existing transformer-based approaches to NLP using subword tokenisation algorithms encode whitespace (word boundary information) through the use of special space symbols (such as {\#}{\#} or {\_}) forming part of tokens. These symbols have been shown to a) lead to reduced morphological validity of tokenisations, and b) give substantial vocabulary redundancy. As such, removing these symbols has been shown to have a beneficial effect on the processing of morphologically complex words for transformer encoders in the pretrain-finetune paradigm. In this work, we explore whether word boundary information is at all useful to such models. In particular, we train transformer encoders across four different training scales, and investigate several alternative approaches to including word boundary information, evaluating on two languages (English and Finnish) with a range of tasks across different domains and problem set-ups: sentence classification datasets, NER (for token-level classification), and two classification datasets involving complex words (Superbizarre and FLOTA). Overall, through an extensive experimental setup that includes the pre-training of 35 models, we find no substantial improvements from our alternative approaches, suggesting that modifying tokenisers to remove word boundary information isn{'}t leading to a loss of useful information.", }
All existing transformer-based approaches to NLP using subword tokenisation algorithms encode whitespace (word boundary information) through the use of special space symbols (such as {\#}{\#} or {\_}) forming part of tokens. These symbols have been shown to a) lead to reduced morphological validity of tokenisations, and b) give substantial vocabulary redundancy. As such, removing these symbols has been shown to have a beneficial effect on the processing of morphologically complex words for transformer encoders in the pretrain-finetune paradigm. In this work, we explore whether word boundary information is at all useful to such models. In particular, we train transformer encoders across four different training scales, and investigate several alternative approaches to including word boundary information, evaluating on two languages (English and Finnish) with a range of tasks across different domains and problem set-ups: sentence classification datasets, NER (for token-level classification), and two classification datasets involving complex words (Superbizarre and FLOTA). Overall, through an extensive experimental setup that includes the pre-training of 35 models, we find no substantial improvements from our alternative approaches, suggesting that modifying tokenisers to remove word boundary information isn{'}t leading to a loss of useful information.
[ "Gow-Smith, Edward", "Phelps, Dylan", "Tayyar Madabushi, Harish", "Scarton, Carolina", "Villavicencio, Aline" ]
Word Boundary Information Isn't Useful for Encoder Language Models
repl4nlp-1.10
Poster
2401.07923
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.10/
[]
[]
[]
0
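Removing the boundary symbols the abstract above discusses can be sketched directly on a subword sequence. The `##` continuation prefix is BERT's convention and `▁` (U+2581) is SentencePiece's word-start marker; this is an illustration, not the authors' tokeniser code:

```python
def strip_boundary_markers(tokens):
    """Remove whitespace/boundary symbols from a subword sequence
    ('##' continuation prefix, '\u2581' word-start prefix), yielding
    tokens that carry no word boundary information."""
    out = []
    for tok in tokens:
        if tok.startswith("##"):
            out.append(tok[2:])
        elif tok.startswith("\u2581"):
            out.append(tok[1:])
        else:
            out.append(tok)
    return out

# BERT-style: ["super", "##bizarre"] -> ["super", "bizarre"]
```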
https://aclanthology.org/2024.repl4nlp-1.11.bib
@inproceedings{ruffinelli-gemulla-2024-beyond, title = "Beyond Link Prediction: On Pre-Training Knowledge Graph Embeddings", author = "Ruffinelli, Daniel and Gemulla, Rainer", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.11", pages = "136--162", abstract = "Knowledge graph embeddings (KGEs) provide low-dimensional representations of the entities and relations in a knowledge graph (KG) in order to reason about the KG and to inject structured knowledge into various downstream applications. Most prior work, however, focuses almost exclusively on training and evaluating KGE models for the task of link prediction. In this work, we explore KGE models as general-purpose representations of KGs and study their suitability (i) for more generally capturing properties of the KG and (ii) for downstream tasks such as entity classification and regression. For (i), we designed a new set of graph-structure prediction tasks to assess whether models capture different structures in the graph. For (ii), we investigate whether models provide useful features for a variety of downstream tasks. We found that strong link prediction performance was neither an indication that models generally capture patterns in the graph, nor that they were more useful in downstream tasks. As a result, we included our proposed graph-structure prediction tasks as additional training objectives and found that models trained with this multi-task approach generally, but not always, performed better at both graph-structure prediction and downstream tasks. However, the most suitable choice of pre-training tasks varies across KGE models and types of downstream tasks, suggesting opportunities for more research into the relation between pre-training KGE models and their usability on downstream applications.", }
Knowledge graph embeddings (KGEs) provide low-dimensional representations of the entities and relations in a knowledge graph (KG) in order to reason about the KG and to inject structured knowledge into various downstream applications. Most prior work, however, focuses almost exclusively on training and evaluating KGE models for the task of link prediction. In this work, we explore KGE models as general-purpose representations of KGs and study their suitability (i) for more generally capturing properties of the KG and (ii) for downstream tasks such as entity classification and regression. For (i), we designed a new set of graph-structure prediction tasks to assess whether models capture different structures in the graph. For (ii), we investigate whether models provide useful features for a variety of downstream tasks. We found that strong link prediction performance was neither an indication that models generally capture patterns in the graph, nor that they were more useful in downstream tasks. As a result, we included our proposed graph-structure prediction tasks as additional training objectives and found that models trained with this multi-task approach generally, but not always, performed better at both graph-structure prediction and downstream tasks. However, the most suitable choice of pre-training tasks varies across KGE models and types of downstream tasks, suggesting opportunities for more research into the relation between pre-training KGE models and their usability on downstream applications.
[ "Ruffinelli, Daniel", "Gemulla, Rainer" ]
Beyond Link Prediction: On Pre-Training Knowledge Graph Embeddings
repl4nlp-1.11
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.11/
[]
[]
[]
0
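A graph-structure prediction probe of the kind the abstract above proposes can be sketched as a linear fit from entity embeddings to a structural quantity. Degree prediction is an illustrative stand-in here; the paper defines its own set of graph-structure tasks:

```python
import numpy as np

def degree_probe(embeddings, degrees):
    """Fit a linear probe predicting each entity's degree from its
    KGE vector, a simple stand-in for graph-structure prediction
    tasks.  Returns the R^2 of the fit on the training data."""
    X = np.hstack([embeddings, np.ones((len(embeddings), 1))])  # bias column
    w, *_ = np.linalg.lstsq(X, degrees, rcond=None)
    pred = X @ w
    ss_res = np.sum((degrees - pred) ** 2)
    ss_tot = np.sum((degrees - degrees.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

A high R^2 means the embeddings linearly encode the structural property; the paper's observation is that link prediction quality does not guarantee this.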
https://aclanthology.org/2024.repl4nlp-1.12.bib
@inproceedings{wang-etal-2024-learn, title = "Learn it or Leave it: Module Composition and Pruning for Continual Learning", author = {Wang, Mingyang and Adel, Heike and Lange, Lukas and Str{\"o}tgen, Jannik and Schuetze, Hinrich}, editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.12", pages = "163--176", abstract = "In real-world environments, continual learning is essential for machine learning models, as they need to acquire new knowledge incrementally without forgetting what they have already learned. While pretrained language models have shown impressive capabilities on various static tasks, applying them to continual learning poses significant challenges, including avoiding catastrophic forgetting, facilitating knowledge transfer, and maintaining parameter efficiency. In this paper, we introduce MoCL-P, a novel lightweight continual learning method that addresses these challenges simultaneously. Unlike traditional approaches that continuously expand parameters for newly arriving tasks, MoCL-P integrates task representation-guided module composition with adaptive pruning, effectively balancing knowledge integration and computational overhead. Our evaluation across three continual learning benchmarks with up to 176 tasks shows that MoCL-P achieves state-of-the-art performance and improves parameter efficiency by up to three times, demonstrating its potential for practical applications where resource requirements are constrained.", }
In real-world environments, continual learning is essential for machine learning models, as they need to acquire new knowledge incrementally without forgetting what they have already learned. While pretrained language models have shown impressive capabilities on various static tasks, applying them to continual learning poses significant challenges, including avoiding catastrophic forgetting, facilitating knowledge transfer, and maintaining parameter efficiency. In this paper, we introduce MoCL-P, a novel lightweight continual learning method that addresses these challenges simultaneously. Unlike traditional approaches that continuously expand parameters for newly arriving tasks, MoCL-P integrates task representation-guided module composition with adaptive pruning, effectively balancing knowledge integration and computational overhead. Our evaluation across three continual learning benchmarks with up to 176 tasks shows that MoCL-P achieves state-of-the-art performance and improves parameter efficiency by up to three times, demonstrating its potential for practical applications where resource requirements are constrained.
[ "Wang, Mingyang", "Adel, Heike", "Lange, Lukas", "Str{\\\"o}tgen, Jannik", "Schuetze, Hinrich" ]
Learn it or Leave it: Module Composition and Pruning for Continual Learning
repl4nlp-1.12
Poster
2406.18708
[ "" ]
https://huggingface.co/papers/2406.18708
0
1
0
5
https://aclanthology.org/2024.repl4nlp-1.12/
[]
[]
[]
1
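Task-representation-guided module composition with pruning, as described in the MoCL-P abstract above, can be sketched as follows. The softmax matching and the fixed pruning threshold are illustrative assumptions, not the paper's exact mechanism:

```python
import numpy as np

def compose_modules(task_repr, module_keys, module_outputs, prune_below=0.05):
    """Sketch of task-representation-guided module composition with
    pruning: modules whose key matches the current task representation
    get higher weight, and modules whose weight falls below a threshold
    are pruned (dropped entirely), capping parameter growth."""
    sims = module_keys @ task_repr                 # match scores per module
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()                       # softmax over modules
    keep = weights >= prune_below                  # prune weak modules
    weights = weights[keep] / weights[keep].sum()  # renormalise survivors
    return weights @ module_outputs[keep]
```

Pruning is what keeps the method lightweight: modules irrelevant to the incoming task contribute nothing and need not be stored or evaluated.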
https://aclanthology.org/2024.repl4nlp-1.13.bib
@inproceedings{stephan-etal-2024-text-guided, title = "Text-Guided Alternative Image Clustering", author = "Stephan, Andreas and Miklautz, Lukas and Leiber, Collin and Luz De Araujo, Pedro Henrique and R{\'e}p{\'a}s, Dominik and Plant, Claudia and Roth, Benjamin", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.13", pages = "177--190", abstract = "Traditional image clustering techniques only find a single grouping within visual data. In particular, they do not provide a possibility to explicitly define multiple types of clustering. This work explores the potential of large vision-language models to facilitate alternative image clustering. We propose Text-Guided Alternative Image Consensus Clustering (TGAICC), a novel approach that leverages user-specified interests via prompts to guide the discovery of diverse clusterings. To achieve this, it generates a clustering for each prompt, groups them using hierarchical clustering, and then aggregates them using consensus clustering. TGAICC outperforms image- and text-based baselines on four alternative image clustering benchmark datasets. Furthermore, using count-based word statistics, we are able to obtain text-based explanations of the alternative clusterings. In conclusion, our research illustrates how contemporary large vision-language models can transform explanatory data analysis, enabling the generation of insightful, customizable, and diverse image clusterings.", }
Traditional image clustering techniques only find a single grouping within visual data. In particular, they do not provide a possibility to explicitly define multiple types of clustering. This work explores the potential of large vision-language models to facilitate alternative image clustering. We propose Text-Guided Alternative Image Consensus Clustering (TGAICC), a novel approach that leverages user-specified interests via prompts to guide the discovery of diverse clusterings. To achieve this, it generates a clustering for each prompt, groups them using hierarchical clustering, and then aggregates them using consensus clustering. TGAICC outperforms image- and text-based baselines on four alternative image clustering benchmark datasets. Furthermore, using count-based word statistics, we are able to obtain text-based explanations of the alternative clusterings. In conclusion, our research illustrates how contemporary large vision-language models can transform explanatory data analysis, enabling the generation of insightful, customizable, and diverse image clusterings.
[ "Stephan, Andreas", "Miklautz, Lukas", "Leiber, Collin", "Luz De Araujo, Pedro Henrique", "R{\\'e}p{\\'a}s, Dominik", "Plant, Claudia", "Roth, Benjamin" ]
Text-Guided Alternative Image Clustering
repl4nlp-1.13
Poster
2406.18589
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.13/
[]
[]
[]
0
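The consensus-clustering aggregation step of TGAICC can be illustrated with a co-association matrix over several clusterings of the same items. The prompt-conditioned clustering itself requires a vision-language model and is not reproduced; the merge threshold here is an assumption:

```python
import numpy as np

def consensus_clusters(labelings, threshold=0.5):
    """Aggregate several clusterings of the same items into one
    consensus clustering: items are merged when they share a cluster
    in more than `threshold` of the input clusterings."""
    n = len(labelings[0])
    co = np.zeros((n, n))
    for labels in labelings:
        labels = np.asarray(labels)
        co += (labels[:, None] == labels[None, :]).astype(float)
    co /= len(labelings)                  # co-association frequencies

    # union-find merge over strongly co-associated pairs
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if co[i, j] > threshold:
                parent[find(i)] = find(j)
    roots = [find(i) for i in range(n)]
    relabel = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [relabel[r] for r in roots]
```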
https://aclanthology.org/2024.repl4nlp-1.14.bib
@inproceedings{laube-eliasmith-2024-qavsa, title = "{QAVSA}: Question Answering using Vector Symbolic Algebras", author = "Laube, Ryan and Eliasmith, Chris", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.14", pages = "191--202", abstract = "With the advancement of large pretrained language models (PLMs), many question answering (QA) benchmarks have been developed in order to evaluate the reasoning capabilities of these models. Augmenting PLMs with external knowledge in the form of Knowledge Graphs (KGs) has been a popular method to improve their reasoning capabilities, and a common method to reason over KGs is to use Graph Neural Networks (GNNs). As an alternative to GNNs to augment PLMs, we propose a novel graph reasoning module using Vector Symbolic Algebra (VSA) graph representations and a k-layer MLP. We demonstrate that our VSA-based model performs as well as QA-GNN, a model combining a PLM and a GNN-module, on 3 multiple-choice question answering (MCQA) datasets. Our model has a simpler architecture than QA-GNN and also converges 39{\%} faster during training.", }
With the advancement of large pretrained language models (PLMs), many question answering (QA) benchmarks have been developed in order to evaluate the reasoning capabilities of these models. Augmenting PLMs with external knowledge in the form of Knowledge Graphs (KGs) has been a popular method to improve their reasoning capabilities, and a common method to reason over KGs is to use Graph Neural Networks (GNNs). As an alternative to GNNs to augment PLMs, we propose a novel graph reasoning module using Vector Symbolic Algebra (VSA) graph representations and a k-layer MLP. We demonstrate that our VSA-based model performs as well as QA-GNN, a model combining a PLM and a GNN-module, on 3 multiple-choice question answering (MCQA) datasets. Our model has a simpler architecture than QA-GNN and also converges 39{\%} faster during training.
[ "Laube, Ryan", "Eliasmith, Chris" ]
QAVSA: Question Answering using Vector Symbolic Algebras
repl4nlp-1.14
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.14/
[]
[]
[]
0
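The Vector Symbolic Algebra representations the QAVSA abstract above builds on use a binding operation; in Holographic Reduced Representations (one VSA) binding is circular convolution, computable via the FFT. A minimal sketch of binding and approximate unbinding; the paper's full graph encoding is not reproduced:

```python
import numpy as np

def bind(a, b):
    """Circular convolution: the binding operation of Holographic
    Reduced Representations (one Vector Symbolic Algebra)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    """Approximate inverse of bind: circular correlation with a."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(a))))

def random_vec(d, rng):
    v = rng.normal(0.0, 1.0, d)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
d = 1024
rel, tail = random_vec(d, rng), random_vec(d, rng)
edge = bind(rel, tail)         # role-filler pair packed into one vector
recovered = unbind(edge, rel)  # noisy copy of tail
```

Recovery is approximate (the result correlates strongly with `tail` but is not exact), which is why VSA graph representations tolerate superposing many bound pairs in a single vector.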
https://aclanthology.org/2024.repl4nlp-1.15.bib
@inproceedings{nastase-merlo-2024-tracking, title = "Tracking linguistic information in transformer-based sentence embeddings through targeted sparsification", author = "Nastase, Vivi and Merlo, Paola", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.15", pages = "203--214", abstract = "Analyses of transformer-based models have shown that they encode a variety of linguistic information from their textual input. While these analyses have shed a light on the relation between linguistic information on one side, and internal architecture and parameters on the other, a question remains unanswered: how is this linguistic information reflected in sentence embeddings? Using datasets consisting of sentences with known structure, we test to what degree information about chunks (in particular noun, verb or prepositional phrases), such as grammatical number, or semantic role, can be localized in sentence embeddings. Our results show that such information is not distributed over the entire sentence embedding, but rather it is encoded in specific regions. Understanding how the information from an input text is compressed into sentence embeddings helps understand current transformer models and help build future explainable neural models.", }
Analyses of transformer-based models have shown that they encode a variety of linguistic information from their textual input. While these analyses have shed a light on the relation between linguistic information on one side, and internal architecture and parameters on the other, a question remains unanswered: how is this linguistic information reflected in sentence embeddings? Using datasets consisting of sentences with known structure, we test to what degree information about chunks (in particular noun, verb or prepositional phrases), such as grammatical number, or semantic role, can be localized in sentence embeddings. Our results show that such information is not distributed over the entire sentence embedding, but rather it is encoded in specific regions. Understanding how the information from an input text is compressed into sentence embeddings helps understand current transformer models and help build future explainable neural models.
[ "Nastase, Vivi", "Merlo, Paola" ]
Tracking linguistic information in transformer-based sentence embeddings through targeted sparsification
repl4nlp-1.15
Poster
2407.18119
[ "https://github.com/clcl-geneva/blm-snfdisentangling" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.15/
[]
[]
[]
0
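Localising information in regions of a sentence embedding, as the abstract above describes, can be illustrated by ablating contiguous regions and watching a probe's accuracy drop. The paper's targeted sparsification is learned, so the fixed-size regions here are a simplification:

```python
import numpy as np

def region_ablation_scores(embeddings, labels, probe_weights, region_size=64):
    """Zero out successive regions of a sentence embedding and measure
    how much a fixed linear probe's accuracy drops, localising where a
    (binary) linguistic property is encoded."""
    d = embeddings.shape[1]
    scores = []
    for start in range(0, d, region_size):
        ablated = embeddings.copy()
        ablated[:, start:start + region_size] = 0.0
        pred = (ablated @ probe_weights > 0).astype(int)
        scores.append((pred == labels).mean())
    return scores  # low score => that region carried the probed information
```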
https://aclanthology.org/2024.repl4nlp-1.16.bib
@inproceedings{singh-etal-2024-learning, title = "Learning New Tasks from a Few Examples with Soft-Label Prototypes", author = "Singh, Avyav and Shutova, Ekaterina and Yannakoudakis, Helen", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.16", pages = "215--236", abstract = "Existing approaches to few-shot learning in NLP rely on large language models (LLMs) and/or fine-tuning of these to generalise on out-of-distribution data. In this work, we propose a novel few-shot learning approach based on soft-label prototypes (SLPs) designed to collectively capture the distribution of different classes across the input domain space. We focus on learning previously unseen NLP tasks from very few examples (4, 8, 16) per class and experimentally demonstrate that our approach achieves superior performance on the majority of tested tasks in this data-lean setting while being highly parameter efficient. We also show that our few-shot adaptation method can be integrated into more generalised learning settings, primarily meta-learning, to yield superior performance against strong baselines.", }
Existing approaches to few-shot learning in NLP rely on large language models (LLMs) and/or fine-tuning of these to generalise on out-of-distribution data. In this work, we propose a novel few-shot learning approach based on soft-label prototypes (SLPs) designed to collectively capture the distribution of different classes across the input domain space. We focus on learning previously unseen NLP tasks from very few examples (4, 8, 16) per class and experimentally demonstrate that our approach achieves superior performance on the majority of tested tasks in this data-lean setting while being highly parameter efficient. We also show that our few-shot adaptation method can be integrated into more generalised learning settings, primarily meta-learning, to yield superior performance against strong baselines.
[ "Singh, Avyav", "Shutova, Ekaterina", "Yannakoudakis, Helen" ]
Learning New Tasks from a Few Examples with Soft-Label Prototypes
repl4nlp-1.16
Poster
2210.17437
[ "https://github.com/avyavkumar/few-shot-learning-notebooks" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.16/
[]
[]
[]
0
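The soft-label prototypes in the abstract above can be sketched as classification from prototypes that each carry a distribution over classes rather than a hard label. The inverse-distance weighting is an illustrative assumption, not the authors' fitting procedure:

```python
import numpy as np

def predict_soft_label(x, prototypes, soft_labels):
    """Classify a point from soft-label prototypes (SLPs): mix the
    class distributions of the prototypes, weighted by inverse
    distance to x, and return the most probable class."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    weights = 1.0 / (dists + 1e-9)
    weights /= weights.sum()
    class_probs = weights @ soft_labels  # (n_proto,) @ (n_proto, n_cls)
    return int(np.argmax(class_probs))
```

Because one prototype can express partial membership in several classes, a few prototypes can cover class distributions that hard-label prototypes would need many more points to represent.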
https://aclanthology.org/2024.repl4nlp-1.17.bib
@inproceedings{wennberg-henter-2024-learned, title = "Learned Transformer Position Embeddings Have a Low-Dimensional Structure", author = "Wennberg, Ulme and Henter, Gustav", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.17", pages = "237--244", abstract = "Position embeddings have long been essential for sequence-order encoding in transformer models, yet their structure is underexplored. This study uses principal component analysis (PCA) to quantitatively compare the dimensionality of absolute position and word embeddings in BERT and ALBERT. We find that, unlike word embeddings, position embeddings occupy a low-dimensional subspace, typically utilizing under 10{\%} of the dimensions available. Additionally, the principal vectors are dominated by a few low-frequency rotational components, a structure arising independently across models.", }
Position embeddings have long been essential for sequence-order encoding in transformer models, yet their structure is underexplored. This study uses principal component analysis (PCA) to quantitatively compare the dimensionality of absolute position and word embeddings in BERT and ALBERT. We find that, unlike word embeddings, position embeddings occupy a low-dimensional subspace, typically utilizing under 10{\%} of the dimensions available. Additionally, the principal vectors are dominated by a few low-frequency rotational components, a structure arising independently across models.
[ "Wennberg, Ulme", "Henter, Gustav" ]
Learned Transformer Position Embeddings Have a Low-Dimensional Structure
repl4nlp-1.17
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.17/
[]
[]
[]
0
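The PCA measurement the abstract above describes reduces to counting how many principal components are needed to reach a variance threshold, as a fraction of the available dimensions. A minimal numpy sketch; the 90% threshold is an illustrative choice:

```python
import numpy as np

def subspace_dimensionality(E, var_threshold=0.9):
    """Number of principal components needed to explain
    `var_threshold` of the variance of an embedding matrix E
    (rows = positions), plus that number as a fraction of the
    available dimensions."""
    X = E - E.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)  # singular values
    var = s ** 2                            # per-component variance
    ratio = np.cumsum(var) / var.sum()
    k = int(np.searchsorted(ratio, var_threshold) + 1)
    return k, k / E.shape[1]
```

Applied to learned position embedding matrices, the paper reports this fraction is typically under 10%, unlike for word embeddings.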
https://aclanthology.org/2024.repl4nlp-1.18.bib
@inproceedings{nishida-etal-2024-multi, title = "Multi-label Learning with Random Circular Vectors", author = "Nishida, Ken and Machi, Kojiro and Onishi, Kazuma and Hayashi, Katsuhiko and Kamigaito, Hidetaka", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.18", pages = "245--255", abstract = "The extreme multi-label classification (XMC) task involves learning a classifier that can predict from a large label set the most relevant subset of labels for a data instance. While deep neural networks (DNNs) have demonstrated remarkable success in XMC problems, the task is still challenging because it must deal with a large number of output labels, which make the DNN training computationally expensive. This paper addresses the issue by exploring the use of random circular vectors, where each vector component is represented as a complex amplitude. In our framework, we can develop an output layer and loss function of DNNs for XMC by representing the final output layer as a fully connected layer that directly predicts a low-dimensional circular vector encoding a set of labels for a data instance. We conducted experiments on synthetic datasets to verify that circular vectors have better label encoding capacity and retrieval ability than normal real-valued vectors. Then, we conducted experiments on actual XMC datasets and found that these appealing properties of circular vectors contribute to significant improvements in task performance compared with a previous model using random real-valued vectors, while reducing the size of the output layers by up to 99{\%}.", }
The extreme multi-label classification (XMC) task involves learning a classifier that can predict from a large label set the most relevant subset of labels for a data instance. While deep neural networks (DNNs) have demonstrated remarkable success in XMC problems, the task is still challenging because it must deal with a large number of output labels, which make the DNN training computationally expensive. This paper addresses the issue by exploring the use of random circular vectors, where each vector component is represented as a complex amplitude. In our framework, we can develop an output layer and loss function of DNNs for XMC by representing the final output layer as a fully connected layer that directly predicts a low-dimensional circular vector encoding a set of labels for a data instance. We conducted experiments on synthetic datasets to verify that circular vectors have better label encoding capacity and retrieval ability than normal real-valued vectors. Then, we conducted experiments on actual XMC datasets and found that these appealing properties of circular vectors contribute to significant improvements in task performance compared with a previous model using random real-valued vectors, while reducing the size of the output layers by up to 99{\%}.
[ "Nishida, Ken", "Machi, Kojiro", "Onishi, Kazuma", "Hayashi, Katsuhiko", "Kamigaito, Hidetaka" ]
Multi-label Learning with Random Circular Vectors
repl4nlp-1.18
Poster
2407.05656
[ "https://github.com/Nishiken1/Circular-HRR" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.18/
[]
[]
[]
0
https://aclanthology.org/2024.repl4nlp-1.19.bib
@inproceedings{ki-etal-2024-mitigating, title = "Mitigating Semantic Leakage in Cross-lingual Embeddings via Orthogonality Constraint", author = "Ki, Dayeon and Park, Cheonbok and Kim, Hyunjoong", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.19", pages = "256--273", abstract = "Accurately aligning contextual representations in cross-lingual sentence embeddings is key for effective parallel data mining. A common strategy for achieving this alignment involves disentangling semantics and language in sentence embeddings derived from multilingual pre-trained models. However, we discover that current disentangled representation learning methods suffer from semantic leakage{---}a term we introduce to describe when a substantial amount of language-specific information is unintentionally leaked into semantic representations. This hinders the effective disentanglement of semantic and language representations, making it difficult to retrieve embeddings that distinctively represent the meaning of the sentence. To address this challenge, we propose a novel training objective, ORthogonAlity Constraint LEarning (ORACLE), tailored to enforce orthogonality between semantic and language embeddings. ORACLE builds upon two components: intra-class clustering and inter-class separation. Through experiments on cross-lingual retrieval and semantic textual similarity tasks, we demonstrate that training with the ORACLE objective effectively reduces semantic leakage and enhances semantic alignment within the embedding space.", }
Accurately aligning contextual representations in cross-lingual sentence embeddings is key for effective parallel data mining. A common strategy for achieving this alignment involves disentangling semantics and language in sentence embeddings derived from multilingual pre-trained models. However, we discover that current disentangled representation learning methods suffer from semantic leakage{---}a term we introduce to describe when a substantial amount of language-specific information is unintentionally leaked into semantic representations. This hinders the effective disentanglement of semantic and language representations, making it difficult to retrieve embeddings that distinctively represent the meaning of the sentence. To address this challenge, we propose a novel training objective, ORthogonAlity Constraint LEarning (ORACLE), tailored to enforce orthogonality between semantic and language embeddings. ORACLE builds upon two components: intra-class clustering and inter-class separation. Through experiments on cross-lingual retrieval and semantic textual similarity tasks, we demonstrate that training with the ORACLE objective effectively reduces semantic leakage and enhances semantic alignment within the embedding space.
[ "Ki, Dayeon", "Park, Cheonbok", "Kim, Hyunjoong" ]
Mitigating Semantic Leakage in Cross-lingual Embeddings via Orthogonality Constraint
repl4nlp-1.19
Poster
2409.15664
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.19/
[]
[]
[]
0
https://aclanthology.org/2024.repl4nlp-1.20.bib
@inproceedings{frydenlund-2024-pathological, title = "On the Pathological Path-star Task for Language Models (Extended Abstract)", author = "Frydenlund, Arvid", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.20", pages = "274--284", abstract = "The recently introduced path-star task is a minimal toy task designed to exemplify limitations to the abilities of language models (Bachmann and Nagarajan, 2024). It involves a \textit{path-star} graph where multiple arms radiate from a single starting node and each node is unique. Then, given the start node and a specified target node which ends one of the arms, the task is to generate the arm containing that target node. This is straightforward for a human but surprisingly difficult for a language model, which they found failed to predict above chance. They hypothesized this is due to a deficiency in the teacher-forcing and next-token prediction paradigm. In this extended abstract, we demonstrate that the task is learnable using teacher-forcing in alternative settings and that the issue is (partially) due to representation. We analyze situations when the models fail to solve the task, which leads us to introduce a regularization technique where we pack each training batch with multiple instances of the same graph but with differing target nodes to prevent overfitting. Initial results indicate this helps in solving the task.", }
The recently introduced path-star task is a minimal toy task designed to exemplify limitations to the abilities of language models (Bachmann and Nagarajan, 2024). It involves a \textit{path-star} graph where multiple arms radiate from a single starting node and each node is unique. Then, given the start node and a specified target node which ends one of the arms, the task is to generate the arm containing that target node. This is straightforward for a human but surprisingly difficult for a language model, which they found failed to predict above chance. They hypothesized this is due to a deficiency in the teacher-forcing and next-token prediction paradigm. In this extended abstract, we demonstrate that the task is learnable using teacher-forcing in alternative settings and that the issue is (partially) due to representation. We analyze situations when the models fail to solve the task, which leads us to introduce a regularization technique where we pack each training batch with multiple instances of the same graph but with differing target nodes to prevent overfitting. Initial results indicate this helps in solving the task.
[ "Frydenlund, Arvid" ]
On the Pathological Path-star Task for Language Models (Extended Abstract)
repl4nlp-1.20
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.20/
[]
[]
[]
0
https://aclanthology.org/2024.repl4nlp-1.21.bib
@inproceedings{forooghi-etal-2024-whitening, title = "Whitening Not Recommended for Classification Tasks in {LLM}s", author = "Forooghi, Ali and Sadeghi, Shaghayegh and Lu, Jianguo", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.21", pages = "285--289", abstract = "Sentence embedding is a cornerstone in NLP. Whitening has been claimed to be an effective method to improve embeddings obtained from Large Language Models (LLMs) for sentence embedding. However, we find that the effectiveness of whitening is model-dependent and task-dependent. In particular, whitening degenerates embeddings for classification tasks. The conclusion is supported by extensive experiments. A by-product of our research is an embedding evaluation platform for LLMs called SentEval+.", }
Sentence embedding is a cornerstone in NLP. Whitening has been claimed to be an effective method to improve embeddings obtained from Large Language Models (LLMs) for sentence embedding. However, we find that the effectiveness of whitening is model-dependent and task-dependent. In particular, whitening degenerates embeddings for classification tasks. The conclusion is supported by extensive experiments. A by-product of our research is an embedding evaluation platform for LLMs called SentEval+.
[ "Forooghi, Ali", "Sadeghi, Shaghayegh", "Lu, Jianguo" ]
Whitening Not Recommended for Classification Tasks in LLMs
repl4nlp-1.21
Poster
2407.12886
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.repl4nlp-1.21/
[]
[]
[]
0
https://aclanthology.org/2024.repl4nlp-1.22.bib
@inproceedings{tigges-etal-2024-llm, title = "{LLM} Circuit Analyses Are Consistent Across Training and Scale", author = "Tigges, Curt and Hanna, Michael and Yu, Qinan and Biderman, Stella", editor = "Zhao, Chen and Mosbach, Marius and Atanasova, Pepa and Goldfarb-Tarrent, Seraphina and Hase, Peter and Hosseini, Arian and Elbayad, Maha and Pezzelle, Sandro and Mozes, Maximilian", booktitle = "Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.repl4nlp-1.22", pages = "290--303", abstract = "Most currently deployed large language models (LLMs) undergo continuous training or additional finetuning. By contrast, most research into LLMs{'} internal mechanisms focuses on models at one snapshot in time (the end of pre-training), raising the question of whether their results generalize to real-world settings. Existing studies of mechanisms over time focus on encoder-only or toy models, which differ significantly from most deployed models. In this study, we track how model mechanisms, operationalized as circuits, emerge and evolve across 300 billion tokens of training in decoder-only LLMs, in models ranging from 70 million to 2.8 billion parameters. We find that task abilities and the functional components that support them emerge consistently at similar token counts across scale. Moreover, although such components may be implemented by different attention heads over time, the overarching algorithm that they implement remains. Surprisingly, both these algorithms and the types of components involved therein tend to replicate across model scale. Finally, we find that circuit size correlates with model size and can fluctuate considerably over time even when the same algorithm is implemented. These results suggest that circuit analyses conducted on small models at the end of pre-training can provide insights that still apply after additional training and over model scale.", }
Most currently deployed large language models (LLMs) undergo continuous training or additional finetuning. By contrast, most research into LLMs{'} internal mechanisms focuses on models at one snapshot in time (the end of pre-training), raising the question of whether their results generalize to real-world settings. Existing studies of mechanisms over time focus on encoder-only or toy models, which differ significantly from most deployed models. In this study, we track how model mechanisms, operationalized as circuits, emerge and evolve across 300 billion tokens of training in decoder-only LLMs, in models ranging from 70 million to 2.8 billion parameters. We find that task abilities and the functional components that support them emerge consistently at similar token counts across scale. Moreover, although such components may be implemented by different attention heads over time, the overarching algorithm that they implement remains. Surprisingly, both these algorithms and the types of components involved therein tend to replicate across model scale. Finally, we find that circuit size correlates with model size and can fluctuate considerably over time even when the same algorithm is implemented. These results suggest that circuit analyses conducted on small models at the end of pre-training can provide insights that still apply after additional training and over model scale.
[ "Tigges, Curt", "Hanna, Michael", "Yu, Qinan", "Biderman, Stella" ]
LLM Circuit Analyses Are Consistent Across Training and Scale
repl4nlp-1.22
Poster
2407.10827
[ "" ]
https://huggingface.co/papers/2407.10827
1
4
2
4
https://aclanthology.org/2024.repl4nlp-1.22/
[]
[]
[]
1
https://aclanthology.org/2024.sdp-1.1.bib
@inproceedings{ghosal-etal-2024-overview, title = "Overview of the Fourth Workshop on Scholarly Document Processing", author = "Ghosal, Tirthankar and Singh, Amanpreet and De Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Zejiang and Qin, Yanxia", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.1", pages = "1--6", abstract = "The workshop on Scholarly Document Processing (SDP) started in 2020 to accelerate research, inform policy and educate the public on natural language processing for scientific text. The fourth iteration of the workshop, SDP24, was held at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL24) as a hybrid event. The SDP workshop saw a great increase in interest, with 57 submissions, of which 28 were accepted. The program consisted of a research track, four invited talks and two shared tasks: 1) DAGPap24: Detecting automatically generated scientific papers and 2) Context24: Multimodal Evidence and Grounding Context Identification for Scientific Claims. The program was geared towards NLP, information extraction, information retrieval, and data mining for scholarly documents, with an emphasis on identifying and providing solutions to open challenges.", }
The workshop on Scholarly Document Processing (SDP) started in 2020 to accelerate research, inform policy and educate the public on natural language processing for scientific text. The fourth iteration of the workshop, SDP24, was held at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL24) as a hybrid event. The SDP workshop saw a great increase in interest, with 57 submissions, of which 28 were accepted. The program consisted of a research track, four invited talks and two shared tasks: 1) DAGPap24: Detecting automatically generated scientific papers and 2) Context24: Multimodal Evidence and Grounding Context Identification for Scientific Claims. The program was geared towards NLP, information extraction, information retrieval, and data mining for scholarly documents, with an emphasis on identifying and providing solutions to open challenges.
[ "Ghosal, Tirthankar", "Singh, Amanpreet", "De Waard, Anita", "Mayr, Philipp", "Naik, Aakanksha", "Weller, Orion", "Lee, Yoonjoo", "Shen, Zejiang", "Qin, Yanxia" ]
Overview of the Fourth Workshop on Scholarly Document Processing
sdp-1.1
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.1/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.2.bib
@inproceedings{chamezopoulos-etal-2024-overview, title = "Overview of the {D}ag{P}ap24 Shared Task on Detecting Automatically Generated Scientific Paper", author = "Chamezopoulos, Savvas and Herrmannova, Drahomira and De Waard, Anita and Rosati, Domenic and Kashnitsky, Yury", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.2", pages = "7--11", abstract = "This paper provides an overview of the 2024 ACL Scholarly Document Processing workshop shared task on the detection of automatically generated scientific papers. Unlike our previous task, which focused on the binary classification of whether scientific passages were machine-generated or not, one likely use case for text generation technology in scientific writing is to intersperse human-written text with passages of machine-generated text. We frame the detection problem as a multiclass span classification task: given an excerpt of text, label token spans in the text as human-written or machine-generated. We shared a dataset containing excerpts from human-written papers as well as artificially generated content collected by Elsevier publishing and editorial teams. As a test set, the participants were provided with a corpus of openly accessible human-written as well as generated papers from the same scientific domains of documents. The shared task saw 457 submissions across 28 participating teams and resulted in three published technical reports. We discuss our findings from the shared task in this overview paper.", }
This paper provides an overview of the 2024 ACL Scholarly Document Processing workshop shared task on the detection of automatically generated scientific papers. Unlike our previous task, which focused on the binary classification of whether scientific passages were machine-generated or not, one likely use case for text generation technology in scientific writing is to intersperse human-written text with passages of machine-generated text. We frame the detection problem as a multiclass span classification task: given an excerpt of text, label token spans in the text as human-written or machine-generated. We shared a dataset containing excerpts from human-written papers as well as artificially generated content collected by Elsevier publishing and editorial teams. As a test set, the participants were provided with a corpus of openly accessible human-written as well as generated papers from the same scientific domains of documents. The shared task saw 457 submissions across 28 participating teams and resulted in three published technical reports. We discuss our findings from the shared task in this overview paper.
[ "Chamezopoulos, Savvas", "Herrmannova, Drahomira", "De Waard, Anita", "Rosati, Domenic", "Kashnitsky, Yury" ]
Overview of the DagPap24 Shared Task on Detecting Automatically Generated Scientific Paper
sdp-1.2
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.2/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.3.bib
@inproceedings{chan-etal-2024-overview, title = "Overview of the Context24 Shared Task on Contextualizing Scientific Claims", author = "Chan, Chu Sern Joel and Naik, Aakanksha and Akamatsu, Matthew and Bekele, Hanna and Bransom, Erin and Campbell, Ian and Sparks, Jenna", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.3", pages = "12--21", abstract = "To appropriately interpret and use scientific claims for sensemaking and decision-making, it is critical to contextualize them, not just with textual evidence that the claim was in fact asserted, but also with key supporting empirical evidence, such as a figure that describes a key result, and methodological details, such as the methods of data collection. Retrieving this contextual information when encountering claims in isolation, away from their source papers, is difficult and time-consuming for humans. Scholarly document processing models could help to contextualize scientific claims, but there is a lack of datasets designed for this task. Thus, we contribute a dataset of 585 scientific claims with gold annotations for supporting figures and tables, and gold text snippets of methodological details, that ground the key results behind each claim and run the Context24 shared task to encourage model development for this task. This report describes details of our dataset construction process, summarizes results from the shared task conducted at the 4th Workshop on Scholarly Document Processing (SDP), and discusses future research directions in this space. To support further research, we also publicly release the dataset on HuggingFace.", }
To appropriately interpret and use scientific claims for sensemaking and decision-making, it is critical to contextualize them, not just with textual evidence that the claim was in fact asserted, but also with key supporting empirical evidence, such as a figure that describes a key result, and methodological details, such as the methods of data collection. Retrieving this contextual information when encountering claims in isolation, away from their source papers, is difficult and time-consuming for humans. Scholarly document processing models could help to contextualize scientific claims, but there is a lack of datasets designed for this task. Thus, we contribute a dataset of 585 scientific claims with gold annotations for supporting figures and tables, and gold text snippets of methodological details, that ground the key results behind each claim and run the Context24 shared task to encourage model development for this task. This report describes details of our dataset construction process, summarizes results from the shared task conducted at the 4th Workshop on Scholarly Document Processing (SDP), and discusses future research directions in this space. To support further research, we also publicly release the dataset on HuggingFace.
[ "Chan, Chu Sern Joel", "Naik, Aakanksha", "Akamatsu, Matthew", "Bekele, Hanna", "Bransom, Erin", "Campbell, Ian", "Sparks, Jenna" ]
Overview of the Context24 Shared Task on Contextualizing Scientific Claims
sdp-1.3
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.3/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.4.bib
@inproceedings{gu-hahnloser-2024-controllable, title = "Controllable Citation Sentence Generation with Language Models", author = "Gu, Nianlong and Hahnloser, Richard", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.4", pages = "22--37", abstract = "Citation generation aims to generate a citation sentence that refers to a chosen paper in the context of a manuscript. However, a rigid citation generation process is at odds with an author{'}s desire to control specific attributes, such as 1) the citation intent, e.g., either introducing background information or comparing results, and 2) keywords that should appear in the citation text. To provide these degrees of controllability during citation generation, we propose to integrate the manuscript context, the context of the referenced paper, and the desired control attributes into a structured template and use it to fine-tune a language model (LM) via next-token prediction. We then utilize Proximal Policy Optimization to directly optimize the LM in favor of a high score of our proposed controllability metric. The proposed workflow harmoniously combines citation attribute suggestion and conditional citation generation into one LM, allowing for better user control.", }
Citation generation aims to generate a citation sentence that refers to a chosen paper in the context of a manuscript. However, a rigid citation generation process is at odds with an author{'}s desire to control specific attributes, such as 1) the citation intent, e.g., either introducing background information or comparing results, and 2) keywords that should appear in the citation text. To provide these degrees of controllability during citation generation, we propose to integrate the manuscript context, the context of the referenced paper, and the desired control attributes into a structured template and use it to fine-tune a language model (LM) via next-token prediction. We then utilize Proximal Policy Optimization to directly optimize the LM in favor of a high score of our proposed controllability metric. The proposed workflow harmoniously combines citation attribute suggestion and conditional citation generation into one LM, allowing for better user control.
[ "Gu, Nianlong", "Hahnloser, Richard" ]
Controllable Citation Sentence Generation with Language Models
sdp-1.4
Poster
2211.07066
[ "https://github.com/nianlonggu/lmcitegen" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.4/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.5.bib
@inproceedings{nishimura-etal-2024-toward, title = "Toward Related Work Generation with Structure and Novelty Statement", author = "Nishimura, Kazuya and Saito, Kuniaki and Hirasawa, Tosho and Ushiku, Yoshitaka", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.5", pages = "38--57", abstract = "To help readers understand the novelty and the research context, an excellent related work section is structured (\textit{i.e.,} the section consists of paragraphs determined by categorizing papers into several topics) and includes descriptions of novelty. However, previous studies viewed related work generation as multi-document summarization, and the structure and novelty statement are ignored in such studies. In this paper, we redefine the related work generation task as summarization with structure (\textit{i.e.,} multiple paragraphs with citation) and novelty statement. For this task, we propose a quality-oriented dataset and evaluation metrics. Experiments evaluated the state-of-the-art language models on our tasks, and we confirmed the issues with the current models and the validity of the evaluation indicators.", }
To help readers understand the novelty and the research context, an excellent related work section is structured (\textit{i.e.,} the section consists of paragraphs determined by categorizing papers into several topics) and includes descriptions of novelty. However, previous studies viewed related work generation as multi-document summarization, and the structure and novelty statement are ignored in such studies. In this paper, we redefine the related work generation task as summarization with structure (\textit{i.e.,} multiple paragraphs with citation) and novelty statement. For this task, we propose a quality-oriented dataset and evaluation metrics. Experiments evaluated the state-of-the-art language models on our tasks, and we confirmed the issues with the current models and the validity of the evaluation indicators.
[ "Nishimura, Kazuya", "Saito, Kuniaki", "Hirasawa, Tosho", "Ushiku, Yoshitaka" ]
Toward Related Work Generation with Structure and Novelty Statement
sdp-1.5
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.5/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.6.bib
@inproceedings{zhuang-kennington-2024-understanding, title = "Understanding Survey Paper Taxonomy about Large Language Models via Graph Representation Learning", author = "Zhuang, Jun and Kennington, Casey", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.6", pages = "58--69", abstract = "As new research on Large Language Models (LLMs) continues, it is difficult to keep up with new research and models. To help researchers synthesize the new research, many have written survey papers, but even those have become numerous. In this paper, we develop a method to automatically assign survey papers to a taxonomy. We collect the metadata of 144 LLM survey papers and explore three paradigms to classify papers within the taxonomy. Our work indicates that leveraging graph structure information on co-category graphs can significantly outperform the language models in two paradigms: pre-trained language models{'} fine-tuning and zero-shot/few-shot classifications using LLMs. We find that our model surpasses an average human recognition level and that fine-tuning LLMs using weak labels generated by a smaller model, such as the GCN in this study, can be more effective than using ground-truth labels, revealing the potential of weak-to-strong generalization in the taxonomy classification task.", }
As new research on Large Language Models (LLMs) continues, it is difficult to keep up with new research and models. To help researchers synthesize the new research, many have written survey papers, but even those have become numerous. In this paper, we develop a method to automatically assign survey papers to a taxonomy. We collect the metadata of 144 LLM survey papers and explore three paradigms to classify papers within the taxonomy. Our work indicates that leveraging graph structure information on co-category graphs can significantly outperform the language models in two paradigms: pre-trained language models{'} fine-tuning and zero-shot/few-shot classifications using LLMs. We find that our model surpasses an average human recognition level and that fine-tuning LLMs using weak labels generated by a smaller model, such as the GCN in this study, can be more effective than using ground-truth labels, revealing the potential of weak-to-strong generalization in the taxonomy classification task.
[ "Zhuang, Jun", "Kennington, Casey" ]
Understanding Survey Paper Taxonomy about Large Language Models via Graph Representation Learning
sdp-1.6
Poster
2402.10409
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.6/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.7.bib
@inproceedings{palit-etal-2024-beyond, title = "Beyond Retrieval: Topic-based Alignment of Scientific Papers to Research Proposal", author = "Palit, Rudra and Patwardhan, Manasi and Vig, Lovekesh and Shroff, Gautam", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.7", pages = "70--83", abstract = "The inception of a research agenda typically commences with the creation of a comprehensive research proposal. The efficacy of the proposal often hinges on its ability to connect with the existing scientific literature that supports its ideas. To effectively assess the relevance of existing articles to a research proposal, it is imperative to categorize these articles into high-level thematic groups, referred to as topics, that align with the proposal. This paper introduces a novel task of aligning scientific articles, relevant to a proposal, with researcher-provided proposal topics. Additionally, we construct a dataset to serve as a benchmark for this task. We establish human and Large Language Model (LLM) baselines and propose a novel three-stage approach to address this challenge. We synthesize and use pseudo-labels that map proposal topics to text spans from cited articles to train Language Models (LMs) for two purposes: (i) as a retriever, to extract relevant text spans from cited articles for each topic, and (ii) as a classifier, to categorize the articles into the proposal topics. Our retriever-classifier pipeline, which employs very small open-source LMs fine-tuned with our constructed dataset, achieves results comparable to a vanilla paid LLM-based classifier, demonstrating its efficacy. However, a notable gap of 23.57 F1 score between our approach and the human baseline highlights the complexity of this task and emphasizes the need for further research.", }
The inception of a research agenda typically commences with the creation of a comprehensive research proposal. The efficacy of the proposal often hinges on its ability to connect with the existing scientific literature that supports its ideas. To effectively assess the relevance of existing articles to a research proposal, it is imperative to categorize these articles into high-level thematic groups, referred to as topics, that align with the proposal. This paper introduces a novel task of aligning scientific articles, relevant to a proposal, with researcher-provided proposal topics. Additionally, we construct a dataset to serve as a benchmark for this task. We establish human and Large Language Model (LLM) baselines and propose a novel three-stage approach to address this challenge. We synthesize and use pseudo-labels that map proposal topics to text spans from cited articles to train Language Models (LMs) for two purposes: (i) as a retriever, to extract relevant text spans from cited articles for each topic, and (ii) as a classifier, to categorize the articles into the proposal topics. Our retriever-classifier pipeline, which employs very small open-source LMs fine-tuned with our constructed dataset, achieves results comparable to a vanilla paid LLM-based classifier, demonstrating its efficacy. However, a notable gap of 23.57 F1 score between our approach and the human baseline highlights the complexity of this task and emphasizes the need for further research.
[ "Palit, Rudra", "Patwardhan, Manasi", "Vig, Lovekesh", "Shroff, Gautam" ]
Beyond Retrieval: Topic-based Alignment of Scientific Papers to Research Proposal
sdp-1.7
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.7/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.8.bib
@inproceedings{munikoti-etal-2024-evaluating, title = "Evaluating the Effectiveness of Retrieval-Augmented Large Language Models in Scientific Document Reasoning", author = "Munikoti, Sai and Acharya, Anurag and Wagle, Sridevi and Horawalavithana, Sameera", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.8", pages = "84--89", abstract = "Despite the dramatic progress in Large Language Model (LLM) development, LLMs often provide seemingly plausible but not factual information, often referred to as hallucinations. Retrieval-augmented LLMs provide a non-parametric approach to solving these issues by retrieving relevant information from external data sources and augmenting the training process. These models help to trace evidence from an externally provided knowledge base, allowing the model predictions to be better interpreted and verified. In this work, we critically evaluate these models' ability to perform scientific document reasoning tasks. To this end, we tuned multiple such model variants with science-focused instructions and evaluated them on a scientific document reasoning benchmark for the usefulness of the retrieved document passages. Our findings suggest that models justify predictions in science tasks with fabricated evidence and that leveraging a scientific corpus as pretraining data does not alleviate the risk of evidence fabrication.", }
Despite the dramatic progress in Large Language Model (LLM) development, LLMs often provide seemingly plausible but not factual information, often referred to as hallucinations. Retrieval-augmented LLMs provide a non-parametric approach to solving these issues by retrieving relevant information from external data sources and augmenting the training process. These models help to trace evidence from an externally provided knowledge base, allowing the model predictions to be better interpreted and verified. In this work, we critically evaluate these models' ability to perform scientific document reasoning tasks. To this end, we tuned multiple such model variants with science-focused instructions and evaluated them on a scientific document reasoning benchmark for the usefulness of the retrieved document passages. Our findings suggest that models justify predictions in science tasks with fabricated evidence and that leveraging a scientific corpus as pretraining data does not alleviate the risk of evidence fabrication.
[ "Munikoti, Sai", "Acharya, Anurag", "Wagle, Sridevi", "Horawalavithana, Sameera" ]
Evaluating the Effectiveness of Retrieval-Augmented Large Language Models in Scientific Document Reasoning
sdp-1.8
Poster
2311.04348
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.8/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.9.bib
@inproceedings{li-etal-2024-cited, title = "Cited Text Spans for Scientific Citation Text Generation", author = "Li, Xiangci and Lee, Yi-Hui and Ouyang, Jessica", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.9", pages = "90--104", abstract = "An automatic citation generation system aims to concisely and accurately describe the relationship between two scientific articles. To do so, such a system must ground its outputs to the content of the cited paper to avoid non-factual hallucinations. Due to the length of scientific documents, existing abstractive approaches have conditioned only on cited paper \textit{abstracts}. We demonstrate empirically that the abstract is not always the most appropriate input for citation generation and that models trained in this way learn to hallucinate. We propose to condition instead on the \textit{cited text span} (CTS) as an alternative to the abstract. Because manual CTS annotation is extremely time- and labor-intensive, we experiment with distant labeling of candidate CTS sentences, achieving sufficiently strong performance to substitute for expensive human annotations in model training, and we propose a human-in-the-loop, keyword-based CTS retrieval approach that makes generating citation texts grounded in the full text of cited papers both promising and practical.", }
An automatic citation generation system aims to concisely and accurately describe the relationship between two scientific articles. To do so, such a system must ground its outputs to the content of the cited paper to avoid non-factual hallucinations. Due to the length of scientific documents, existing abstractive approaches have conditioned only on cited paper \textit{abstracts}. We demonstrate empirically that the abstract is not always the most appropriate input for citation generation and that models trained in this way learn to hallucinate. We propose to condition instead on the \textit{cited text span} (CTS) as an alternative to the abstract. Because manual CTS annotation is extremely time- and labor-intensive, we experiment with distant labeling of candidate CTS sentences, achieving sufficiently strong performance to substitute for expensive human annotations in model training, and we propose a human-in-the-loop, keyword-based CTS retrieval approach that makes generating citation texts grounded in the full text of cited papers both promising and practical.
[ "Li, Xiangci", "Lee, Yi-Hui", "Ouyang, Jessica" ]
Cited Text Spans for Scientific Citation Text Generation
sdp-1.9
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.9/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.10.bib
@inproceedings{kaesberg-etal-2024-citeassist, title = "{C}ite{A}ssist: A System for Automated Preprint Citation and {B}ib{T}e{X} Generation", author = "Kaesberg, Lars and Ruas, Terry and Wahle, Jan Philip and Gipp, Bela", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.10", pages = "105--119", abstract = "We present CiteAssist, a system to automate the generation of BibTeX entries for preprints, streamlining the process of bibliographic annotation. Our system extracts metadata, such as author names, titles, publication dates, and keywords, to create standardized annotations within the document. CiteAssist automatically attaches the BibTeX citation to the end of a PDF and links it on the first page of the document so other researchers gain immediate access to the correct citation of the article. This method promotes platform flexibility by ensuring that annotations remain accessible regardless of the repository used to publish or access the preprint. The annotations remain available even if the preprint is viewed outside of CiteAssist. Additionally, the system adds relevant related papers based on extracted keywords to the preprint, providing researchers with additional publications besides those in related work for further reading. Researchers can enhance their preprints' organization and reference management workflows through a free and publicly available web interface.", }
We present CiteAssist, a system to automate the generation of BibTeX entries for preprints, streamlining the process of bibliographic annotation. Our system extracts metadata, such as author names, titles, publication dates, and keywords, to create standardized annotations within the document. CiteAssist automatically attaches the BibTeX citation to the end of a PDF and links it on the first page of the document so other researchers gain immediate access to the correct citation of the article. This method promotes platform flexibility by ensuring that annotations remain accessible regardless of the repository used to publish or access the preprint. The annotations remain available even if the preprint is viewed outside of CiteAssist. Additionally, the system adds relevant related papers based on extracted keywords to the preprint, providing researchers with additional publications besides those in related work for further reading. Researchers can enhance their preprints' organization and reference management workflows through a free and publicly available web interface.
[ "Kaesberg, Lars", "Ruas, Terry", "Wahle, Jan Philip", "Gipp, Bela" ]
CiteAssist: A System for Automated Preprint Citation and BibTeX Generation
sdp-1.10
Poster
2407.03192
[ "https://github.com/gipplab/preprint_generator" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.10/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.11.bib
@inproceedings{lin-etal-2024-end, title = "An end-to-end entity recognition and disambiguation framework for identifying Author Affiliation from literature publications", author = "Lin, Lianghong and [email protected], [email protected] and [email protected], [email protected] and Hao, Tianyong", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.11", pages = "120--129", abstract = "Author affiliation information plays a key role in bibliometric analyses and is essential for evaluating studies. However, author affiliation information has not been standardized, which leads to difficulties such as synonym ambiguity and incomplete data during automated processing. To address this challenge, this paper proposes an end-to-end entity recognition and disambiguation framework for identifying author affiliation from literature publications. For entity disambiguation, an algorithm combining word embedding and spatial embedding is presented, considering that author affiliation texts often contain rich geographic information. The disambiguation algorithm utilizes both semantic and geographic information, which effectively enhances entity recognition and disambiguation performance. In addition, the proposed framework facilitates the effective utilization of the extensive literature in the PubMed database for comprehensive bibliometric analysis. The experimental results verify the robustness and effectiveness of the algorithm.", }
Author affiliation information plays a key role in bibliometric analyses and is essential for evaluating studies. However, author affiliation information has not been standardized, which leads to difficulties such as synonym ambiguity and incomplete data during automated processing. To address this challenge, this paper proposes an end-to-end entity recognition and disambiguation framework for identifying author affiliation from literature publications. For entity disambiguation, an algorithm combining word embedding and spatial embedding is presented, considering that author affiliation texts often contain rich geographic information. The disambiguation algorithm utilizes both semantic and geographic information, which effectively enhances entity recognition and disambiguation performance. In addition, the proposed framework facilitates the effective utilization of the extensive literature in the PubMed database for comprehensive bibliometric analysis. The experimental results verify the robustness and effectiveness of the algorithm.
[ "Lin, Lianghong", "[email protected], [email protected]", "[email protected], [email protected]", "Hao, Tianyong" ]
An end-to-end entity recognition and disambiguation framework for identifying Author Affiliation from literature publications
sdp-1.11
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.11/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.12.bib
@inproceedings{zhao-etal-2024-utilizing, title = "Utilizing an Ensemble Model with Anomalous Label Smoothing to Detect Generated Scientific Papers", author = "Zhao, Yuan and Gao, Junruo and Wang, Junlin and Luo, Gang and Tang, Liang", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.12", pages = "130--134", abstract = "Generative AI, as it becomes increasingly integrated into our lives, has brought convenience, though some concerns have arisen regarding its potential impact on the rigor and authenticity of scientific research. To encourage the development of robust and reliable automatically-generated scientific text detection systems, the {``}DAGPap24: Detecting Automatically Generated Scientific Papers{''} competition was held and shared the same task with the 4th Workshop on Scholarly Document Processing (SDP 2024) to be held at ACL 2024. In the DAGPap24 competition, participants were tasked with constructing a generative text detection model that could accurately distinguish between the human written fragment, the synonym replacement fragment, the ChatGPT rewrite fragment, and the generated summary fragment of a paper. In this competition, we first conducted a comprehensive analysis of the training set to build a generative paper detection model. Then we tried various language models, including SciBERT, ALBERT, DeBERTa, RoBERTa, etc. After that, we introduced an Anomalous Label Smoothing (ALS) method and a majority voting method to improve the final results. Finally, we achieved 0.9948 and 0.9944 F1 scores during the development and testing phases respectively, and we achieved second place in the competition.", }
Generative AI, as it becomes increasingly integrated into our lives, has brought convenience, though some concerns have arisen regarding its potential impact on the rigor and authenticity of scientific research. To encourage the development of robust and reliable automatically-generated scientific text detection systems, the {``}DAGPap24: Detecting Automatically Generated Scientific Papers{''} competition was held and shared the same task with the 4th Workshop on Scholarly Document Processing (SDP 2024) to be held at ACL 2024. In the DAGPap24 competition, participants were tasked with constructing a generative text detection model that could accurately distinguish between the human written fragment, the synonym replacement fragment, the ChatGPT rewrite fragment, and the generated summary fragment of a paper. In this competition, we first conducted a comprehensive analysis of the training set to build a generative paper detection model. Then we tried various language models, including SciBERT, ALBERT, DeBERTa, RoBERTa, etc. After that, we introduced an Anomalous Label Smoothing (ALS) method and a majority voting method to improve the final results. Finally, we achieved 0.9948 and 0.9944 F1 scores during the development and testing phases respectively, and we achieved second place in the competition.
[ "Zhao, Yuan", "Gao, Junruo", "Wang, Junlin", "Luo, Gang", "Tang, Liang" ]
Utilizing an Ensemble Model with Anomalous Label Smoothing to Detect Generated Scientific Papers
sdp-1.12
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.12/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.13.bib
@inproceedings{duran-silva-etal-2024-affilgood, title = "{A}ffil{G}ood: Building reliable institution name disambiguation tools to improve scientific literature analysis", author = "Duran-Silva, Nicolau and Accuosto, Pablo and Przyby{\l}a, Piotr and Saggion, Horacio", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.13", pages = "135--144", abstract = "The accurate attribution of scientific works to research organizations is hindered by the lack of openly available manually annotated data{--}in particular when multilingual and complex affiliation strings are considered. The AffilGood framework introduced in this paper addresses this gap. We identify three sub-tasks relevant for institution name disambiguation and make available annotated datasets and tools aimed at each of them, including i) a dataset annotated with affiliation spans in noisy automatically-extracted strings; ii) a dataset annotated with named entities for the identification of organizations and their locations; iii) seven datasets annotated with the Research Organization Registry (ROR) identifiers for the evaluation of entity-linking systems. In addition, we describe, evaluate and make available newly developed tools that use these datasets to provide solutions for each of the identified sub-tasks. Our results confirm the value of the developed resources and methods in addressing key challenges in institution name disambiguation.", }
The accurate attribution of scientific works to research organizations is hindered by the lack of openly available manually annotated data{--}in particular when multilingual and complex affiliation strings are considered. The AffilGood framework introduced in this paper addresses this gap. We identify three sub-tasks relevant for institution name disambiguation and make available annotated datasets and tools aimed at each of them, including i) a dataset annotated with affiliation spans in noisy automatically-extracted strings; ii) a dataset annotated with named entities for the identification of organizations and their locations; iii) seven datasets annotated with the Research Organization Registry (ROR) identifiers for the evaluation of entity-linking systems. In addition, we describe, evaluate and make available newly developed tools that use these datasets to provide solutions for each of the identified sub-tasks. Our results confirm the value of the developed resources and methods in addressing key challenges in institution name disambiguation.
[ "Duran-Silva, Nicolau", "Accuosto, Pablo", "Przyby{\\l}a, Piotr", "Saggion, Horacio" ]
AffilGood: Building reliable institution name disambiguation tools to improve scientific literature analysis
sdp-1.13
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.13/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.14.bib
@inproceedings{song-etal-2024-metadata, title = "Metadata Enhancement Using Large Language Models", author = "Song, Hyunju and Bethard, Steven and Thomer, Andrea", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.14", pages = "145--154", abstract = "In the natural sciences, a common form of scholarly document is a physical sample record, which provides categorical and textual metadata for specimens collected and analyzed for scientific research. Physical sample archives like museums and repositories publish these records in data repositories to support reproducible science and enable the discovery of physical samples. However, the success of resource discovery in such interfaces depends on the completeness of the sample records. We investigate approaches for automatically completing the scientific metadata fields of sample records. We apply large language models in zero- and few-shot settings and incorporate the hierarchical structure of the taxonomy. We show that a combination of record summarization, bottom-up taxonomy traversal, and few-shot prompting yields F1 as high as 0.928 on metadata completion in the Earth science domain.", }
In the natural sciences, a common form of scholarly document is a physical sample record, which provides categorical and textual metadata for specimens collected and analyzed for scientific research. Physical sample archives like museums and repositories publish these records in data repositories to support reproducible science and enable the discovery of physical samples. However, the success of resource discovery in such interfaces depends on the completeness of the sample records. We investigate approaches for automatically completing the scientific metadata fields of sample records. We apply large language models in zero- and few-shot settings and incorporate the hierarchical structure of the taxonomy. We show that a combination of record summarization, bottom-up taxonomy traversal, and few-shot prompting yields F1 as high as 0.928 on metadata completion in the Earth science domain.
[ "Song, Hyunju", "Bethard, Steven", "Thomer, Andrea" ]
Metadata Enhancement Using Large Language Models
sdp-1.14
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.14/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.15.bib
@inproceedings{taechoyotin-acuna-2024-misti, title = "{MISTI}: Metadata-Informed Scientific Text and Image Representation through Contrastive Learning", author = "Taechoyotin, Pawin and Acuna, Daniel", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.15", pages = "155--164", abstract = "In scientific publications, automatic representations of figures and their captions can be used in NLP, computer vision, and information retrieval tasks. Contrastive learning has proven effective for creating such joint representations for natural scenes, but its application to scientific imagery and descriptions remains under-explored. Recent open-access publication datasets provide an opportunity to understand the effectiveness of this technique as well as evaluate the usefulness of additional metadata, which are available only in the scientific context. Here, we introduce MISTI, a novel model that uses contrastive learning to simultaneously learn the representation of figures, captions, and metadata, such as a paper{'}s title, sections, and curated concepts from the PubMed Open Access Subset. We evaluate our model on multiple information retrieval tasks, showing substantial improvements over baseline models. Notably, incorporating metadata doubled retrieval performance, achieving a Recall@1 of 30{\%} on a 70K-item caption retrieval task. We qualitatively explore how metadata can be used to strategically retrieve distinctive representations of the same concept but for different sections, such as introduction and results. Additionally, we show that our model seamlessly handles out-of-domain tasks related to image segmentation. We share our dataset and methods (https://github.com/Khempawin/scientific-image-caption-pair/tree/section-attr) and outline future research directions.", }
In scientific publications, automatic representations of figures and their captions can be used in NLP, computer vision, and information retrieval tasks. Contrastive learning has proven effective for creating such joint representations for natural scenes, but its application to scientific imagery and descriptions remains under-explored. Recent open-access publication datasets provide an opportunity to understand the effectiveness of this technique as well as evaluate the usefulness of additional metadata, which are available only in the scientific context. Here, we introduce MISTI, a novel model that uses contrastive learning to simultaneously learn the representation of figures, captions, and metadata, such as a paper{'}s title, sections, and curated concepts from the PubMed Open Access Subset. We evaluate our model on multiple information retrieval tasks, showing substantial improvements over baseline models. Notably, incorporating metadata doubled retrieval performance, achieving a Recall@1 of 30{\%} on a 70K-item caption retrieval task. We qualitatively explore how metadata can be used to strategically retrieve distinctive representations of the same concept but for different sections, such as introduction and results. Additionally, we show that our model seamlessly handles out-of-domain tasks related to image segmentation. We share our dataset and methods (https://github.com/Khempawin/scientific-image-caption-pair/tree/section-attr) and outline future research directions.
[ "Taechoyotin, Pawin", "Acuna, Daniel" ]
MISTI: Metadata-Informed Scientific Text and Image Representation through Contrastive Learning
sdp-1.15
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.15/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.16.bib
@inproceedings{mishra-etal-2024-first, title = "First Steps in Building a Knowledge Base of Mathematical Results", author = "Mishra, Shrey and Brihmouche, Yacine and Delemazure, Th{\'e}o and Gauquier, Antoine and Senellart, Pierre", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.16", pages = "165--174", abstract = "This paper explores the initial steps towards extracting information about theorems and proofs from scholarly documents to build a knowledge base of interlinked results. Specifically, we consider two main tasks: extracting results and their proofs from the PDFs of scientific articles and establishing which results are used in the proofs of others across the scientific literature. We discuss the problem statement, methodologies, and preliminary findings employed in both phases of our approach, highlighting the challenges faced.", }
This paper explores the initial steps towards extracting information about theorems and proofs from scholarly documents to build a knowledge base of interlinked results. Specifically, we consider two main tasks: extracting results and their proofs from the PDFs of scientific articles and establishing which results are used in the proofs of others across the scientific literature. We discuss the problem statement, methodologies, and preliminary findings employed in both phases of our approach, highlighting the challenges faced.
[ "Mishra, Shrey", "Brihmouche, Yacine", "Delemazure, Th{\\'e}o", "Gauquier, Antoine", "Senellart, Pierre" ]
First Steps in Building a Knowledge Base of Mathematical Results
sdp-1.16
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.16/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.17.bib
@inproceedings{chitnis-etal-2024-tt, title = "AutoRef: Generating Refinements of Reviews Given Guidelines", author = "Chitnis, Soham and Patwardhan, Manasi and [email protected], [email protected] and [email protected], [email protected] and Vig, Lovekesh and Shroff, Gautam", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.17", pages = "175--190", abstract = "When examining reviews of research papers, we can distinguish between two hypothetical referees: the maximally lenient referee who accepts any paper with a vacuous review and the maximally strict one who rejects any paper with an overly pedantic review. Clearly, both are of no practical value. Our interest is in a referee who makes a balanced judgement and provides a review abiding by the guidelines. In this paper, we present a case study of automatic correction of an existing machine-generated or human-written review. The ${\tt{AutoRef}\ }$ system implements an iterative approach that progressively {``}refines{''} a review by attempting to make it more compliant with pre-defined requirements of a {``}good{''} review. It implements the following steps: (1) Translate the review requirements into a specification in natural language, of {``}yes/no{''} questions; (2) Given a $(paper,review)$ pair, extract answers to the questions; (3) Use the results in (2) to generate a new review; and (4) Return to Step (2) with the paper and the new review. Here, (2) and (3) are implemented by large language model (LLM) based agents. We present a case study using papers and reviews made available for the International Conference on Learning Representations (ICLR). Our initial empirical results suggest that ${\tt{AutoRef}\ }$ progressively improves the compliance of the generated reviews with the specification. The current specification makes ${\tt{AutoRef}\ }$ progressively generate stricter reviews, making the decisions more inclined towards {``}rejections{''}. This demonstrates the applicability of ${\tt{AutoRef}\ }$ for: (1) The progressive correction of overly lenient reviews, being useful for referees and meta-reviewers; and (2) The generation of progressively stricter reviews for a paper, starting from a vacuous review ({``}Great paper. Accept.{''}), facilitating authors in assessing weaknesses in their papers.", }
When examining reviews of research papers, we can distinguish between two hypothetical referees: the maximally lenient referee who accepts any paper with a vacuous review and the maximally strict one who rejects any paper with an overly pedantic review. Clearly, both are of no practical value. Our interest is in a referee who makes a balanced judgement and provides a review abiding by the guidelines. In this paper, we present a case study of automatic correction of an existing machine-generated or human review. The ${\tt{AutoRef}\ }$ system implements an iterative approach that progressively {``}refines{''} a review by attempting to make it more compliant with pre-defined requirements of a {``}good{''} review. It implements the following steps: (1) Translate the review requirements into a specification in natural language, of {``}yes/no{''} questions; (2) Given a $(paper,review)$ pair, extract answers to the questions; (3) Use the results in (2) to generate a new review; and (4) Return to Step (2) with the paper and the new review. Here, (2) and (3) are implemented by large language model (LLM) based agents. We present a case study using papers and reviews made available for the International Conference on Learning Representations (ICLR). Our initial empirical results suggest that ${\tt{AutoRef}\ }$ progressively improves the compliance of the generated reviews to the specification. Currently designed specification makes ${\tt{AutoRef}\ }$ progressively generate reviews which are stricter, making the decisions more inclined towards {``}rejections{''}. This demonstrates the applicability of ${\tt{AutoRef}\ }$ for: (1) The progressive correction of overly lenient reviews, being useful for referees and meta-reviewers; and (2) The generation of progressively stricter reviews for a paper, starting from a vacuous review ({``}Great paper. Accept.{''}), facilitating authors when trying to assess weaknesses in their papers.
[ "Chitnis, Soham", "Patwardhan, Manasi", "[email protected], [email protected]", "[email protected], [email protected]", "Vig, Lovekesh", "Shroff, Gautam" ]
AutoRef: Generating Refinements of Reviews Given Guidelines
sdp-1.17
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.17/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.18.bib
@inproceedings{sakhrani-etal-2024-artificial, title = "Artificial Intuition: Efficient Classification of Scientific Abstracts", author = "Sakhrani, Harsh and Pervez, Naseela and Ravikumar, Anirudh and Morstatter, Fred and Graddy-Reed, Alexandra and Belz, Andrea", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.18", pages = "191--201", abstract = "It is desirable to coarsely classify short scientific texts, such as grant or publication abstracts, for strategic insight or research portfolio management. These texts efficiently transmit dense information to experts possessing a rich body of knowledge to aid interpretation. Yet this task is remarkably difficult to automate because of brevity and the absence of context. To address this gap, we have developed a novel approach to generate and appropriately assign coarse domain-specific labels. We show that a Large Language Model (LLM) can provide metadata essential to the task, in a process akin to the augmentation of supplemental knowledge representing human intuition, and propose a workflow. As a pilot study, we use a corpus of award abstracts from the National Aeronautics and Space Administration (NASA). We develop new assessment tools in concert with established performance metrics.", }
It is desirable to coarsely classify short scientific texts, such as grant or publication abstracts, for strategic insight or research portfolio management. These texts efficiently transmit dense information to experts possessing a rich body of knowledge to aid interpretation. Yet this task is remarkably difficult to automate because of brevity and the absence of context. To address this gap, we have developed a novel approach to generate and appropriately assign coarse domain-specific labels. We show that a Large Language Model (LLM) can provide metadata essential to the task, in a process akin to the augmentation of supplemental knowledge representing human intuition, and propose a workflow. As a pilot study, we use a corpus of award abstracts from the National Aeronautics and Space Administration (NASA). We develop new assessment tools in concert with established performance metrics.
[ "Sakhrani, Harsh", "Pervez, Naseela", "Ravikumar, Anirudh", "Morstatter, Fred", "Graddy-Reed, Alexandra", "Belz, Andrea" ]
Artificial Intuition: Efficient Classification of Scientific Abstracts
sdp-1.18
Poster
2407.06093
[ "" ]
https://huggingface.co/papers/2407.06093
1
0
0
6
https://aclanthology.org/2024.sdp-1.18/
[]
[]
[]
1
https://aclanthology.org/2024.sdp-1.19.bib
@inproceedings{oshima-etal-2024-synthetic, title = "Synthetic Context with {LLM} for Entity Linking from Scientific Tables", author = "Oshima, Yuji and Shindo, Hiroyuki and Teranishi, Hiroki and Ouchi, Hiroki and Watanabe, Taro", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.19", pages = "202--214", abstract = "Tables in scientific papers contain crucial information, such as experimental results. Entity Linking (EL) is a promising technology that analyses tables and associates them with a knowledge base. EL for table cells requires identifying the referent concept of each cell while understanding the context relevant to each cell in the paper. However, extracting the relevant context from the paper is challenging because the relevant parts are scattered in the main text and captions. This study defines a rule-based method for extracting broad context from the main text, including table captions and sentences that mention the table. Furthermore, we propose synthetic context as a more refined context generated by large language models (LLMs). In a synthetic context, contexts from the entire paper are refined by summarizing, injecting supplemental knowledge, and clarifying the referent concept. We observe this approach improves accuracy for EL by more than 10 points on the S2abEL dataset, and our qualitative analysis suggests potential future works.", }
Tables in scientific papers contain crucial information, such as experimental results. Entity Linking (EL) is a promising technology that analyses tables and associates them with a knowledge base. EL for table cells requires identifying the referent concept of each cell while understanding the context relevant to each cell in the paper. However, extracting the relevant context from the paper is challenging because the relevant parts are scattered in the main text and captions. This study defines a rule-based method for extracting broad context from the main text, including table captions and sentences that mention the table. Furthermore, we propose synthetic context as a more refined context generated by large language models (LLMs). In a synthetic context, contexts from the entire paper are refined by summarizing, injecting supplemental knowledge, and clarifying the referent concept. We observe this approach improves accuracy for EL by more than 10 points on the S2abEL dataset, and our qualitative analysis suggests potential future works.
[ "Oshima, Yuji", "Shindo, Hiroyuki", "Teranishi, Hiroki", "Ouchi, Hiroki", "Watanabe, Taro" ]
Synthetic Context with LLM for Entity Linking from Scientific Tables
sdp-1.19
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.19/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.20.bib
@inproceedings{andreev-etal-2024-papilusion, title = "Papilusion at {DAGP}ap24: Paper or Illusion? Detecting {AI}-generated Scientific Papers", author = "Andreev, Nikita and Shirnin, Alexander and Mikhailov, Vladislav and Artemova, Ekaterina", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.20", pages = "215--219", abstract = "This paper presents Papilusion, an AI-generated scientific text detector developed within the DAGPap24 shared task on detecting automatically generated scientific papers. We propose an ensemble-based approach and conduct ablation studies to analyze the effect of the detector configurations on the performance. Papilusion is ranked 6th on the leaderboard, and we improve our performance after the competition ended, achieving 99.46 (+9.63) of the F1-score on the official test set.", }
This paper presents Papilusion, an AI-generated scientific text detector developed within the DAGPap24 shared task on detecting automatically generated scientific papers. We propose an ensemble-based approach and conduct ablation studies to analyze the effect of the detector configurations on the performance. Papilusion is ranked 6th on the leaderboard, and we improve our performance after the competition ended, achieving 99.46 (+9.63) of the F1-score on the official test set.
[ "Andreev, Nikita", "Shirnin, Alexander", "Mikhailov, Vladislav", "Artemova, Ekaterina" ]
Papilusion at DAGPap24: Paper or Illusion? Detecting AI-generated Scientific Papers
sdp-1.20
Poster
2407.17629
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.20/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.21.bib
@inproceedings{gritsai-etal-2024-multi, title = "Multi-head Span-based Detector for {AI}-generated Fragments in Scientific Papers", author = "Gritsai, German and Khabutdinov, Ildar and Grabovoy, Andrey", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.21", pages = "220--225", abstract = "This paper describes a system designed to distinguish between AI-generated and human-written scientific excerpts in the DAGPap24 competition hosted within the Fourth Workshop on Scientific Document Processing. In this competition the task is to find artificially generated token-level text fragments in documents of a scientific domain. Our work focuses on the use of a multi-task learning architecture with two heads. The application of this approach is justified by the specificity of the task, where class spans are continuous over several hundred characters. We considered different encoder variations to obtain a state vector for each token in the sequence, as well as a variation in splitting fragments into tokens to further feed into the input of a transform-based encoder. This approach allows us to achieve a 9{\%} quality improvement relative to the baseline solution score on the development set (from 0.86 to 0.95) using the average macro $F_{1}$-score, as well as a score of 0.96 on a closed test part of the dataset from the competition.", }
This paper describes a system designed to distinguish between AI-generated and human-written scientific excerpts in the DAGPap24 competition hosted within the Fourth Workshop on Scientific Document Processing. In this competition the task is to find artificially generated token-level text fragments in documents of a scientific domain. Our work focuses on the use of a multi-task learning architecture with two heads. The application of this approach is justified by the specificity of the task, where class spans are continuous over several hundred characters. We considered different encoder variations to obtain a state vector for each token in the sequence, as well as a variation in splitting fragments into tokens to further feed into the input of a transform-based encoder. This approach allows us to achieve a 9{\%} quality improvement relative to the baseline solution score on the development set (from 0.86 to 0.95) using the average macro $F_{1}$-score, as well as a score of 0.96 on a closed test part of the dataset from the competition.
[ "Gritsai, German", "Khabutdinov, Ildar", "Grabovoy, Andrey" ]
Multi-head Span-based Detector for AI-generated Fragments in Scientific Papers
sdp-1.21
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.21/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.22.bib
@inproceedings{chang-etal-2024-guiding, title = "Guiding Large Language Models via External Attention Prompting for Scientific Extreme Summarization", author = "Chang, Yuan and Li, Ziyue and Le, Xiaoqiu", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.22", pages = "226--242", abstract = "Scientific extreme summarization, the task of generating concise one-sentence summaries (TLDRs) for scientific papers, presents significant challenges due to the need for deep domain-specific understanding and the ability to distill salient information. This study identifies the critical role of titles and keywords in enhancing TLDR generation through quantitative analysis. We propose a novel method, External Attention Prompting (EAP), which leverages LLMs by guiding them to focus on the most critical parts of the source text through varying degrees of attention signals. Our method employs Markdown emphasis syntax to annotate attention levels, enabling LLMs to prioritize salient information effectively. Extensive experiments demonstrate that EAP significantly outperforms baseline methods across various LLMs and metrics in both zero-shot and few-shot settings. Further evaluations by GPT-4 demonstrate that EAP can enable LLMs to generate TLDRs of higher human-aligned quality.", }
Scientific extreme summarization, the task of generating concise one-sentence summaries (TLDRs) for scientific papers, presents significant challenges due to the need for deep domain-specific understanding and the ability to distill salient information. This study identifies the critical role of titles and keywords in enhancing TLDR generation through quantitative analysis. We propose a novel method, External Attention Prompting (EAP), which leverages LLMs by guiding them to focus on the most critical parts of the source text through varying degrees of attention signals. Our method employs Markdown emphasis syntax to annotate attention levels, enabling LLMs to prioritize salient information effectively. Extensive experiments demonstrate that EAP significantly outperforms baseline methods across various LLMs and metrics in both zero-shot and few-shot settings. Further evaluations by GPT-4 demonstrate that EAP can enable LLMs to generate TLDRs of higher human-aligned quality.
[ "Chang, Yuan", "Li, Ziyue", "Le, Xiaoqiu" ]
Guiding Large Language Models via External Attention Prompting for Scientific Extreme Summarization
sdp-1.22
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.22/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.23.bib
@inproceedings{li-etal-2024-simulating, title = "Simulating Expert Discussions with Multi-agent for Enhanced Scientific Problem Solving", author = "Li, Ziyue and Chang, Yuan and Le, Xiaoqiu", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.23", pages = "243--256", abstract = "Large Language Models (LLMs) have shown remarkable potential across various domains, yet their application in addressing complex scientific problems remains a formidable challenge. This paper presents a novel methodology to augment the problem-solving capabilities of LLMs by assigning them roles as domain-specific experts. By simulating a panel of experts, each LLM is tasked with delivering professional and cautious responses to scientific inquiries. Our approach involves querying multiple LLMs and assessing the consistency of their responses. High agreement among the LLMs suggests greater confidence in the proposed solution, whereas discrepancies prompt a collaborative discussion among the LLMs to reach a consensus. This method emulates real-world scientific problem-solving processes, fostering a more reliable and robust mechanism for LLMs to tackle scientific questions. Our experimental results show that assigning roles to multiple LLMs as domain-specific experts significantly improves their accuracy and reliability in solving scientific problems. This framework has the potential to advance the application of AI in scientific research, enhancing its effectiveness and trustworthiness.", }
Large Language Models (LLMs) have shown remarkable potential across various domains, yet their application in addressing complex scientific problems remains a formidable challenge. This paper presents a novel methodology to augment the problem-solving capabilities of LLMs by assigning them roles as domain-specific experts. By simulating a panel of experts, each LLM is tasked with delivering professional and cautious responses to scientific inquiries. Our approach involves querying multiple LLMs and assessing the consistency of their responses. High agreement among the LLMs suggests greater confidence in the proposed solution, whereas discrepancies prompt a collaborative discussion among the LLMs to reach a consensus. This method emulates real-world scientific problem-solving processes, fostering a more reliable and robust mechanism for LLMs to tackle scientific questions. Our experimental results show that assigning roles to multiple LLMs as domain-specific experts significantly improves their accuracy and reliability in solving scientific problems. This framework has the potential to advance the application of AI in scientific research, enhancing its effectiveness and trustworthiness.
[ "Li, Ziyue", "Chang, Yuan", "Le, Xiaoqiu" ]
Simulating Expert Discussions with Multi-agent for Enhanced Scientific Problem Solving
sdp-1.23
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.23/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.24.bib
@inproceedings{staudinger-etal-2024-analysis, title = "An Analysis of Tasks and Datasets in Peer Reviewing", author = "Staudinger, Moritz and Kusa, Wojciech and Piroi, Florina and Hanbury, Allan", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.24", pages = "257--268", abstract = "Taking note of the current challenges of the peer review system, this paper inventories the research tasks for analysing and possibly automating parts of the reviewing process, like matching submissions with a reviewer{'}s domain of expertise. For each of these tasks we list their associated datasets, analysing their quality in terms of available documentation of creation and use. Building up on this, we give a set of recommendations to take into account when collecting and releasing data.", }
Taking note of the current challenges of the peer review system, this paper inventories the research tasks for analysing and possibly automating parts of the reviewing process, like matching submissions with a reviewer{'}s domain of expertise. For each of these tasks we list their associated datasets, analysing their quality in terms of available documentation of creation and use. Building up on this, we give a set of recommendations to take into account when collecting and releasing data.
[ "Staudinger, Moritz", "Kusa, Wojciech", "Piroi, Florina", "Hanbury, Allan" ]
An Analysis of Tasks and Datasets in Peer Reviewing
sdp-1.24
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.24/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.25.bib
@inproceedings{alvarez-etal-2024-zero, title = "Zero-shot Scientific Claim Verification Using {LLM}s and Citation Text", author = "Alvarez, Carlos and Bennett, Maxwell and Wang, Lucy", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.25", pages = "269--276", abstract = "Due to rapidly changing and advancing science, it is important to check the veracity of scientific claims and whether they are supported by research evidence. Previous versions of this task depended on supervised training, where labeled datasets were constructed through manual claim writing and evidence identification, sometimes coupled with mining citation relationships in papers. In this work, we investigate whether zero-shot scientific claim verification could be enabled using large language models (LLMs) and distant supervision examples taken directly from citation texts. We derive an in-context learning (ICL) dataset, SCitance, consisting of citation sentences ({``}citances{''}), LLM-generated negations, evidence documents, and veracity labels, and find that prompting GPT-4 with ICL examples from this dataset yields comparable performance (within 1 point F1) to previous finetuned models trained on manually curated claim-evidence pairs. Our results suggest that prompting LLMs with citance-evidence pairs directly poses a viable alternative to finetuning scientific claim verification models with manually-curated data.", }
Due to rapidly changing and advancing science, it is important to check the veracity of scientific claims and whether they are supported by research evidence. Previous versions of this task depended on supervised training, where labeled datasets were constructed through manual claim writing and evidence identification, sometimes coupled with mining citation relationships in papers. In this work, we investigate whether zero-shot scientific claim verification could be enabled using large language models (LLMs) and distant supervision examples taken directly from citation texts. We derive an in-context learning (ICL) dataset, SCitance, consisting of citation sentences ({``}citances{''}), LLM-generated negations, evidence documents, and veracity labels, and find that prompting GPT-4 with ICL examples from this dataset yields comparable performance (within 1 point F1) to previous finetuned models trained on manually curated claim-evidence pairs. Our results suggest that prompting LLMs with citance-evidence pairs directly poses a viable alternative to finetuning scientific claim verification models with manually-curated data.
[ "Alvarez, Carlos", "Bennett, Maxwell", "Wang, Lucy" ]
Zero-shot Scientific Claim Verification Using LLMs and Citation Text
sdp-1.25
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.25/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.26.bib
@inproceedings{nagao-katsurai-2024-researcher, title = "Researcher Representations Based on Aggregating Embeddings of Publication Titles: A Case Study in a {J}apanese Academic Database", author = "Nagao, Hiroyoshi and Katsurai, Marie", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.26", pages = "277--282", abstract = "Constructing researcher representations is crucial for search and recommendation in academic databases. While recent studies presented methods based on knowledge graph embeddings, obtaining a complete graph of academic entities might be sometimes challenging due to the lack of linked data. By contrast, the textual list of publications of each researcher, which represents their research interests and expertise, is usually easy to obtain. Therefore, this study focuses on creating researcher representations based on textual embeddings of their publication titles and assesses their practicality. We aggregate embeddings of each researcher{'}s multiple publications into a single vector and apply it to research field classification and similar researcher search tasks. We experimented with multiple language models and embedding aggregation methods to compare their performance. From the model perspective, we confirmed the effectiveness of using sentence embedding models and a simple averaging approach.", }
Constructing researcher representations is crucial for search and recommendation in academic databases. While recent studies presented methods based on knowledge graph embeddings, obtaining a complete graph of academic entities might be sometimes challenging due to the lack of linked data. By contrast, the textual list of publications of each researcher, which represents their research interests and expertise, is usually easy to obtain. Therefore, this study focuses on creating researcher representations based on textual embeddings of their publication titles and assesses their practicality. We aggregate embeddings of each researcher{'}s multiple publications into a single vector and apply it to research field classification and similar researcher search tasks. We experimented with multiple language models and embedding aggregation methods to compare their performance. From the model perspective, we confirmed the effectiveness of using sentence embedding models and a simple averaging approach.
[ "Nagao, Hiroyoshi", "Katsurai, Marie" ]
Researcher Representations Based on Aggregating Embeddings of Publication Titles: A Case Study in a Japanese Academic Database
sdp-1.26
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.26/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.27.bib
@inproceedings{singh-singh-2024-cosaemb, title = "{C}o{SAE}mb: Contrastive Section-aware Aspect Embeddings for Scientific Articles", author = "Singh, Shruti and Singh, Mayank", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.27", pages = "283--292", abstract = "Research papers are long documents that contain information about various aspects such as background, prior work, methodology, and results. Existing works on scientific document representation learning only leverage the title and abstract of the paper. We present CoSAEmb, a model that learns representations from the full text of 97402 scientific papers from the S2ORC dataset. We present a novel supervised contrastive training framework for long documents using triplet loss and margin gradation. Our framework can be used to learn representations of long documents with any existing encoder-only transformer model without retraining it from scratch. CoSAEmb shows improved performance on information retrieval from the paper{'}s full text in comparison to models trained only on paper titles and abstracts. We also evaluate CoSAEmb on SciRepEval and CSFCube benchmarks, showing comparable performance with existing state-of-the-art models.", }
Research papers are long documents that contain information about various aspects such as background, prior work, methodology, and results. Existing works on scientific document representation learning only leverage the title and abstract of the paper. We present CoSAEmb, a model that learns representations from the full text of 97402 scientific papers from the S2ORC dataset. We present a novel supervised contrastive training framework for long documents using triplet loss and margin gradation. Our framework can be used to learn representations of long documents with any existing encoder-only transformer model without retraining it from scratch. CoSAEmb shows improved performance on information retrieval from the paper{'}s full text in comparison to models trained only on paper titles and abstracts. We also evaluate CoSAEmb on SciRepEval and CSFCube benchmarks, showing comparable performance with existing state-of-the-art models.
[ "Singh, Shruti", "Singh, Mayank" ]
CoSAEmb: Contrastive Section-aware Aspect Embeddings for Scientific Articles
sdp-1.27
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.27/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.28.bib
@inproceedings{korkmaz-del-rio-chanona-2024-integrating, title = "Integrating Table Representations into Large Language Models for Improved Scholarly Document Comprehension", author = "Korkmaz, Buse Sibel and Del Rio Chanona, Antonio", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.28", pages = "293--306", abstract = "We address the challenge of interpreting and reasoning over scientific tables with Large Language Models (LLMs), a crucial aspect of scholarly documents. Despite significant progress in natural language processing, the integration of tabular data into scientific LLMs remains limited. We propose an innovative approach leveraging intermediate task pre-training on table question-answering datasets, followed by model adaptation to comprehend tables in computer science literature. Our findings reveal that incorporating table understanding substantially improves the performance of LLMs on scientific literature understanding tasks, which we showcase in peer-review score prediction. This improvement underscores the importance of utilizing tabular data in the training of scientific language models. The code and models are publicly available at [this link](https://github.com/buseskorkmaz/Integrating-Table-Representations-into-LLMs).", }
We address the challenge of interpreting and reasoning over scientific tables with Large Language Models (LLMs), a crucial aspect of scholarly documents. Despite significant progress in natural language processing, the integration of tabular data into scientific LLMs remains limited. We propose an innovative approach leveraging intermediate task pre-training on table question-answering datasets, followed by model adaptation to comprehend tables in computer science literature. Our findings reveal that incorporating table understanding substantially improves the performance of LLMs on scientific literature understanding tasks, which we showcase in peer-review score prediction. This improvement underscores the importance of utilizing tabular data in the training of scientific language models. The code and models are publicly available at [this link](https://github.com/buseskorkmaz/Integrating-Table-Representations-into-LLMs).
[ "Korkmaz, Buse Sibel", "Del Rio Chanona, Antonio" ]
Integrating Table Representations into Large Language Models for Improved Scholarly Document Comprehension
sdp-1.28
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.28/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.29.bib
@inproceedings{kumar-wang-2024-harnessing, title = "Harnessing {CLIP} for Evidence Identification in Scientific Literature: A Multimodal Approach to Context24 Shared Task", author = "Kumar, Anukriti and Wang, Lucy", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.29", pages = "307--313", abstract = "Knowing whether scientific claims are supported by evidence is fundamental to scholarly communication and evidence-based decision-making. We present our approach to Task 1 of the Context24 Shared Task{---}Contextualizing Scientific Figures and Tables (SDP@ACL2024), which focuses on identifying multimodal evidence from scientific publications that support claims. We finetune CLIP, a state-of-the-art model for image-text similarity tasks, to identify and rank figures and tables in papers that substantiate specific claims. Our methods focus on text and image preprocessing techniques and augmenting the organizer-provided training data with labeled examples from the SciMMIR and MedICaT datasets. Our best-performing model achieved NDCG@5 and NDCG@10 values of 0.26 and 0.30, respectively, on the Context24 test split. Our findings underscore the effectiveness of data augmentation and preprocessing in improving the model{'}s ability in evidence matching.", }
Knowing whether scientific claims are supported by evidence is fundamental to scholarly communication and evidence-based decision-making. We present our approach to Task 1 of the Context24 Shared Task{---}Contextualizing Scientific Figures and Tables (SDP@ACL2024), which focuses on identifying multimodal evidence from scientific publications that support claims. We finetune CLIP, a state-of-the-art model for image-text similarity tasks, to identify and rank figures and tables in papers that substantiate specific claims. Our methods focus on text and image preprocessing techniques and augmenting the organizer-provided training data with labeled examples from the SciMMIR and MedICaT datasets. Our best-performing model achieved NDCG@5 and NDCG@10 values of 0.26 and 0.30, respectively, on the Context24 test split. Our findings underscore the effectiveness of data augmentation and preprocessing in improving the model{'}s ability in evidence matching.
[ "Kumar, Anukriti", "Wang, Lucy" ]
Harnessing CLIP for Evidence Identification in Scientific Literature: A Multimodal Approach to Context24 Shared Task
sdp-1.29
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.29/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.30.bib
@inproceedings{bolucu-etal-2024-csiro, title = "{CSIRO} at Context24: Contextualising Scientific Figures and Tables in Scientific Literature", author = {B{\"o}l{\"u}c{\"u}, Necva and Nguyen, Vincent and Timmer, Roelien and Yang, Huichen and Rybinski, Maciej and Wan, Stephen and Karimi, Sarvnaz}, editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.30", pages = "314--323", abstract = "Finding evidence for claims from content presented in experimental results of scientific articles is difficult. The evidence is often presented in the form of tables and figures, and correctly matching it to scientific claims presents automation challenges. The Context24 shared task is launched to support the development of systems able to verify claims by extracting supporting evidence from articles. We explore different facets of this shared task modelled as a search problem and as an information extraction task. We experiment with a range of methods in each of these categories for the two sub-tasks of evidence identification and grounding context identification in the Context24 shared task.", }
Finding evidence for claims from content presented in experimental results of scientific articles is difficult. The evidence is often presented in the form of tables and figures, and correctly matching it to scientific claims presents automation challenges. The Context24 shared task is launched to support the development of systems able to verify claims by extracting supporting evidence from articles. We explore different facets of this shared task modelled as a search problem and as an information extraction task. We experiment with a range of methods in each of these categories for the two sub-tasks of evidence identification and grounding context identification in the Context24 shared task.
[ "B{\\\"o}l{\\\"u}c{\\\"u}, Necva", "Nguyen, Vincent", "Timmer, Roelien", "Yang, Huichen", "Rybinski, Maciej", "Wan, Stephen", "Karimi, Sarvnaz" ]
CSIRO at Context24: Contextualising Scientific Figures and Tables in Scientific Literature
sdp-1.30
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.30/
[]
[]
[]
0
https://aclanthology.org/2024.sdp-1.31.bib
@inproceedings{hirasawa-2024-osx, title = "{OSX} at Context24: How Well Can {GPT} Tackle Contexualizing Scientific Figures and Tables", author = "Hirasawa, Tosho", editor = "Ghosal, Tirthankar and Singh, Amanpreet and Waard, Anita and Mayr, Philipp and Naik, Aakanksha and Weller, Orion and Lee, Yoonjoo and Shen, Shannon and Qin, Yanxia", booktitle = "Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sdp-1.31", pages = "324--331", abstract = "Identifying the alignment between different parts of a scientific paper is fundamental to scholarly document processing. In the Context24 shared task, participants are given a scientific claim and asked to identify (1) key figures or tables that support the claim and (2) methodological details. While employing a supervised approach to train models on task-specific data is a prevailing strategy for both subtasks, such an approach is not feasible for low-resource domains. Therefore, this paper introduces data-free systems supported by Large Language Models. We propose systems based on GPT-4o and GPT-4-turbo for each task. The experimental results reveal the zero-shot capabilities of GPT-4* in both tasks.", }
Identifying the alignment between different parts of a scientific paper is fundamental to scholarly document processing. In the Context24 shared task, participants are given a scientific claim and asked to identify (1) key figures or tables that support the claim and (2) methodological details. While employing a supervised approach to train models on task-specific data is a prevailing strategy for both subtasks, such an approach is not feasible for low-resource domains. Therefore, this paper introduces data-free systems supported by Large Language Models. We propose systems based on GPT-4o and GPT-4-turbo for each task. The experimental results reveal the zero-shot capabilities of GPT-4* in both tasks.
[ "Hirasawa, Tosho" ]
OSX at Context24: How Well Can GPT Tackle Contexualizing Scientific Figures and Tables
sdp-1.31
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sdp-1.31/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.1.bib
@inproceedings{yang-wang-2024-automatic, title = "Automatic Quote Attribution in {C}hinese Literary Works", author = "Yang, Xingxing and Wang, Yu", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.1", pages = "1--9", abstract = "Quote attribution in fiction refers to the extraction of dialogues and the identification of their speakers, which can be divided into two steps: quotation annotation and speaker annotation. We use a pipeline for quote attribution, which involves classification, extractive QA, multi-choice QA, and coreference resolution. We also evaluated our model{'}s performance by predicting explicit and implicit speakers using a combination of different models.", }
Quote attribution in fiction refers to the extraction of dialogues and the identification of their speakers, which can be divided into two steps: quotation annotation and speaker annotation. We use a pipeline for quote attribution, which involves classification, extractive QA, multi-choice QA, and coreference resolution. We also evaluated our model{'}s performance by predicting explicit and implicit speakers using a combination of different models.
[ "Yang, Xingxing", "Wang, Yu" ]
Automatic Quote Attribution in Chinese Literary Works
sighan-1.1
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.1/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.2.bib
@inproceedings{wang-etal-2024-telechat, title = "{T}ele{C}hat: An Open-source Billingual Large Language Model", author = "Wang, Zihan and [email protected], [email protected] and [email protected], [email protected] and Yao, Yitong and [email protected], [email protected] and Mengxiang, Li and He, Zhongjiang and [email protected], [email protected] and [email protected], [email protected] and [email protected], [email protected] and Wang, Chao and Song, Shuangyong", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.2", pages = "10--20", abstract = "In this paper, we present \textbf{TeleChat}, a collection of large language models (LLMs) with 7 billion and 12 billion parameters. TeleChat is initially pretrained on an extensive corpus containing a diverse collection of texts from both English and Chinese languages, encompassing trillions of tokens. Subsequently, the model undergoes fine-tuning to align with human preferences, following a detailed methodology that we describe. We evaluate the performance of TeleChat on various tasks, including general dialogue generation, language understanding, mathematics, reasoning, code generation, and knowledge-based question answering. Our findings indicate that TeleChat achieves state-of-the-art performance compared to other open-source models of similar size across a wide range of public benchmarks. To support future research and applications utilizing LLMs, we release the fine-tuned model checkpoints of TeleChat-7B and TeleChat-12B, along with code and a portion of our filtered high-quality pretraining data, to the public community.", }
In this paper, we present \textbf{TeleChat}, a collection of large language models (LLMs) with 7 billion and 12 billion parameters. TeleChat is initially pretrained on an extensive corpus containing a diverse collection of texts from both English and Chinese languages, encompassing trillions of tokens. Subsequently, the model undergoes fine-tuning to align with human preferences, following a detailed methodology that we describe. We evaluate the performance of TeleChat on various tasks, including general dialogue generation, language understanding, mathematics, reasoning, code generation, and knowledge-based question answering. Our findings indicate that TeleChat achieves state-of-the-art performance compared to other open-source models of similar size across a wide range of public benchmarks. To support future research and applications utilizing LLMs, we release the fine-tuned model checkpoints of TeleChat-7B and TeleChat-12B, along with code and a portion of our filtered high-quality pretraining data, to the public community.
TeleChat: An Open-source Billingual Large Language Model
sighan-1.2
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.2/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.3.bib
@inproceedings{poon-etal-2024-shot, title = "Few-shot Question Generation for Reading Comprehension", author = "Poon, Yin and Lee, John Sie Yuen and [email protected], [email protected] and [email protected], [email protected] and [email protected], [email protected] and [email protected], [email protected]", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.3", pages = "21--27", abstract = "According to the internationally recognized PIRLS (Progress in International Reading Literacy Study) assessment standards, reading comprehension questions should require not only information retrieval, but also higher-order processes such as inferencing, interpreting and evaluation. However, these kinds of questions are often not available in large quantities for training question generation models. This paper investigates whether pre-trained Large Language Models (LLMs) can produce higher-order questions. Human assessment on a Chinese dataset shows that few-shot LLM prompting generates more usable and higher-order questions than two competitive neural baselines.", }
According to the internationally recognized PIRLS (Progress in International Reading Literacy Study) assessment standards, reading comprehension questions should require not only information retrieval, but also higher-order processes such as inferencing, interpreting and evaluation. However, these kinds of questions are often not available in large quantities for training question generation models. This paper investigates whether pre-trained Large Language Models (LLMs) can produce higher-order questions. Human assessment on a Chinese dataset shows that few-shot LLM prompting generates more usable and higher-order questions than two competitive neural baselines.
Few-shot Question Generation for Reading Comprehension
sighan-1.3
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.3/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.4.bib
@inproceedings{wang-etal-2024-adversarial, title = "Adversarial Learning for Multi-Lingual Entity Linking", author = "Wang, Bingbing and Liang, Bin and Bai, Zhixin and Ma, Yongzhuo", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.4", pages = "28--35", abstract = "Entity linking aims to identify mentions from the text and link them to a knowledge base. Further, Multi-lingual Entity Linking (MEL) is a more challenging task, where the language-specific mentions need to be linked to a multi-lingual knowledge base. To tackle the MEL task, we propose a novel model that employs the merit of adversarial learning and few-shot learning to generalize the learning ability across languages. Specifically, we first randomly select a fraction of language-agnostic unlabeled data as the language signal to construct the language discriminator. Based on it, we devise a simple and effective adversarial learning framework with two characteristic branches, including an entity classifier and a language discriminator with adversarial training. Experimental results on two benchmark datasets indicate the excellent performance in few-shot learning and the effectiveness of the proposed adversarial learning framework.", }
Entity linking aims to identify mentions from the text and link them to a knowledge base. Further, Multi-lingual Entity Linking (MEL) is a more challenging task, where the language-specific mentions need to be linked to a multi-lingual knowledge base. To tackle the MEL task, we propose a novel model that employs the merit of adversarial learning and few-shot learning to generalize the learning ability across languages. Specifically, we first randomly select a fraction of language-agnostic unlabeled data as the language signal to construct the language discriminator. Based on it, we devise a simple and effective adversarial learning framework with two characteristic branches, including an entity classifier and a language discriminator with adversarial training. Experimental results on two benchmark datasets indicate the excellent performance in few-shot learning and the effectiveness of the proposed adversarial learning framework.
[ "Wang, Bingbing", "Liang, Bin", "Bai, Zhixin", "Ma, Yongzhuo" ]
Adversarial Learning for Multi-Lingual Entity Linking
sighan-1.4
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.4/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.5.bib
@inproceedings{zhang-etal-2024-incremental, title = "Incremental pre-training from smaller language models", author = "Zhang, Han and Wang, Hui and Xu, Ruifeng", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.5", pages = "36--44", abstract = "Large language models have recently become a new learning paradigm and led to state-of-the-art performance across a range of tasks. As open-source pre-trained models proliferate, it is worth investigating how to better utilize existing models. We propose a simple yet effective method, Incr-Pretrain, for incrementally pre-training language models from smaller well-trained source models. Different layer-wise transfer strategies were introduced for model augmentation, including parameter copying, initial value padding, and model distillation. Experiments on multiple zero-shot learning tasks demonstrate satisfactory inference performance after transfer and promising training efficiency during continued pre-training. Compared to training from scratch, Incr-Pretrain can save up to half the training time to reach a similar testing loss.", }
Large language models have recently become a new learning paradigm and led to state-of-the-art performance across a range of tasks. As open-source pre-trained models proliferate, it is worth investigating how to better utilize existing models. We propose a simple yet effective method, Incr-Pretrain, for incrementally pre-training language models from smaller well-trained source models. Different layer-wise transfer strategies were introduced for model augmentation, including parameter copying, initial value padding, and model distillation. Experiments on multiple zero-shot learning tasks demonstrate satisfactory inference performance after transfer and promising training efficiency during continued pre-training. Compared to training from scratch, Incr-Pretrain can save up to half the training time to reach a similar testing loss.
[ "Zhang, Han", "Wang, Hui", "Xu, Ruifeng" ]
Incremental pre-training from smaller language models
sighan-1.5
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.5/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.6.bib
@inproceedings{deng-etal-2024-holistic, title = "Holistic Exploration on Universal Decompositional Semantic Parsing: Architecture, Data Augmentation, and {LLM} Paradigm", author = "Deng, Hexuan and Zhang, Xin and Zhang, Meishan and Liu, Xuebo and Zhang, Min", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.6", pages = "45--57", abstract = "In this paper, we conduct a holistic exploration of Universal Decompositional Semantic (UDS) parsing, aiming to provide a more efficient and effective solution for semantic parsing and to envision the development prospects after the emergence of large language models (LLMs). To achieve this, we first introduce a cascade model for UDS parsing that decomposes the complex task into semantically appropriate subtasks. Our approach outperforms prior models while significantly reducing inference time. Furthermore, to further exploit the hierarchical and automated annotation process of UDS, we explore the use of syntactic information and pseudo-labels, both of which enhance UDS parsing. Lastly, we investigate ChatGPT{'}s efficacy in handling the UDS task, highlighting its proficiency in attribute parsing but its struggles in relation parsing, revealing that small parsing models still hold research significance. Our code is available at https://github.com/hexuandeng/HExp4UDS.", }
In this paper, we conduct a holistic exploration of Universal Decompositional Semantic (UDS) parsing, aiming to provide a more efficient and effective solution for semantic parsing and to envision the development prospects after the emergence of large language models (LLMs). To achieve this, we first introduce a cascade model for UDS parsing that decomposes the complex task into semantically appropriate subtasks. Our approach outperforms prior models while significantly reducing inference time. Furthermore, to further exploit the hierarchical and automated annotation process of UDS, we explore the use of syntactic information and pseudo-labels, both of which enhance UDS parsing. Lastly, we investigate ChatGPT{'}s efficacy in handling the UDS task, highlighting its proficiency in attribute parsing but its struggles in relation parsing, revealing that small parsing models still hold research significance. Our code is available at https://github.com/hexuandeng/HExp4UDS.
[ "Deng, Hexuan", "Zhang, Xin", "Zhang, Meishan", "Liu, Xuebo", "Zhang, Min" ]
Holistic Exploration on Universal Decompositional Semantic Parsing: Architecture, Data Augmentation, and LLM Paradigm
sighan-1.6
Poster
2307.13424
[ "https://github.com/hexuandeng/hexp4uds" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.6/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.7.bib
@inproceedings{ji-etal-2024-responded, title = "Who Responded to Whom: The Joint Effects of Latent Topics and Discourse in Conversation Structure", author = "Ji, Lu and Chen, Lei and Li, Jing and Wei, Zhongyu and Zhang, Qi and Huang, Xuanjing", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.7", pages = "58--68", abstract = "Vast amounts of online conversations are produced on a daily basis, resulting in a pressing need for automatic conversation understanding. As a basis to structure a discussion, we identify the responding relations in the conversation discourse, which link response utterances to their initiations. To figure out who responded to whom, here we explore how the consistency of topic contents and dependency of discourse roles indicate such interactions, whereas most prior work ignores the effects of latent factors underlying word occurrences. We propose a neural model to learn latent topics and discourse in word distributions, and predict pairwise initiation-response links via exploiting topic consistency and discourse dependency. Experimental results on both English and Chinese conversations show that our model significantly outperforms the previous state of the art.", }
Vast amounts of online conversations are produced on a daily basis, resulting in a pressing need for automatic conversation understanding. As a basis to structure a discussion, we identify the responding relations in the conversation discourse, which link response utterances to their initiations. To figure out who responded to whom, here we explore how the consistency of topic contents and dependency of discourse roles indicate such interactions, whereas most prior work ignores the effects of latent factors underlying word occurrences. We propose a neural model to learn latent topics and discourse in word distributions, and predict pairwise initiation-response links via exploiting topic consistency and discourse dependency. Experimental results on both English and Chinese conversations show that our model significantly outperforms the previous state of the art.
[ "Ji, Lu", "Chen, Lei", "Li, Jing", "Wei, Zhongyu", "Zhang, Qi", "Huang, Xuanjing" ]
Who Responded to Whom: The Joint Effects of Latent Topics and Discourse in Conversation Structure
sighan-1.7
Poster
2104.08601
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.7/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.8.bib
@inproceedings{xiang-etal-2024-cantonese, title = "{C}antonese Natural Language Processing in the Transformers Era", author = "Xiang, Rong and Liao, Ming and Li, Jing", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.8", pages = "69--79", abstract = "Despite being spoken by a large population of speakers worldwide, Cantonese is under-resourced in terms of the data scale and diversity compared to other major languages. This limitation has excluded it from the current {``}pre-training and fine-tuning{''} paradigm that is dominated by Transformer architectures. In this paper, we provide a comprehensive review on the existing resources and methodologies for Cantonese Natural Language Processing, covering the recent progress in language understanding, text generation and development of language models. We finally discuss two aspects of the Cantonese language that could make it potentially challenging even for state-of-the-art architectures: \textit{colloquialism} and \textit{multilinguality}.", }
Despite being spoken by a large population of speakers worldwide, Cantonese is under-resourced in terms of the data scale and diversity compared to other major languages. This limitation has excluded it from the current {``}pre-training and fine-tuning{''} paradigm that is dominated by Transformer architectures. In this paper, we provide a comprehensive review on the existing resources and methodologies for Cantonese Natural Language Processing, covering the recent progress in language understanding, text generation and development of language models. We finally discuss two aspects of the Cantonese language that could make it potentially challenging even for state-of-the-art architectures: \textit{colloquialism} and \textit{multilinguality}.
[ "Xiang, Rong", "Liao, Ming", "Li, Jing" ]
Cantonese Natural Language Processing in the Transformers Era
sighan-1.8
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.8/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.9.bib
@inproceedings{bai-etal-2024-auto, title = "Auto-{ACE}: An Automatic Answer Correctness Evaluation Method for Conversational Question Answering", author = "Bai, Zhixin and Wang, Bingbing and Liang, Bin and Xu, Ruifeng", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.9", pages = "80--87", abstract = "Conversational question answering aims to respond to questions based on relevant contexts and previous question-answer history. Existing studies typically use ground-truth answers in history, leading to inconsistency between the training and inference phases. However, in real-world scenarios, progress in question answering can only be made using predicted answers. Since not all predicted answers are correct, indiscriminately using all predicted answers for training introduces noise into the model. To tackle these challenges, we propose an automatic answer correctness evaluation method named **Auto-ACE**. Specifically, we first construct an Att-BERT model, which applies attention weights to the BERT model so as to bridge the relation between the current question and the question-answer pairs in history. Furthermore, to reduce the interference of irrelevant information in the predicted answer, an answer scorer, A-Scorer, is designed to evaluate the confidence of the predicted answer. We conduct a series of experiments on the QuAC and CoQA datasets, and the results demonstrate the effectiveness and practicality of our proposed Auto-ACE framework.", }
Conversational question answering aims to respond to questions based on relevant contexts and previous question-answer history. Existing studies typically use ground-truth answers in history, leading to inconsistency between the training and inference phases. However, in real-world scenarios, progress in question answering can only be made using predicted answers. Since not all predicted answers are correct, indiscriminately using all predicted answers for training introduces noise into the model. To tackle these challenges, we propose an automatic answer correctness evaluation method named **Auto-ACE**. Specifically, we first construct an Att-BERT model, which applies attention weights to the BERT model so as to bridge the relation between the current question and the question-answer pairs in history. Furthermore, to reduce the interference of irrelevant information in the predicted answer, an answer scorer, A-Scorer, is designed to evaluate the confidence of the predicted answer. We conduct a series of experiments on the QuAC and CoQA datasets, and the results demonstrate the effectiveness and practicality of our proposed Auto-ACE framework.
[ "Bai, Zhixin", "Wang, Bingbing", "Liang, Bin", "Xu, Ruifeng" ]
Auto-ACE: An Automatic Answer Correctness Evaluation Method for Conversational Question Answering
sighan-1.9
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.9/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.10.bib
@inproceedings{kang-etal-2024-tmak, title = "{TMAK}-Plus at {SIGHAN}-2024 dim{ABSA} Task: Multi-Agent Collaboration for Transparent and Rational Sentiment Analysis", author = "Kang, Xin and Zhang, Zhifei and 周嘉政, 周嘉政 and [email protected], [email protected] and [email protected], [email protected] and Matsumoto, Kazuyuki", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.10", pages = "88--95", abstract = "The TMAK-Plus team proposes a Multi-Agent Collaboration (MAC) model for the dimensional Aspect-Based Sentiment Analysis (dimABSA) task at SIGHAN-2024. The MAC model leverages Neuro-Symbolic AI to solve dimABSA transparently and rationally through symbolic message exchanges among generative AI agents. These agents collaborate on aspect detection, opinion detection, aspect classification, and intensity estimation. We created 8 sentiment intensity agents with distinct character traits to mimic diverse sentiment perceptions and averaged their outputs. The AI agents received clear instructions and 20 training examples to ensure task understanding. Our results suggest that the MAC model is effective in solving the dimABSA task and offers a transparent and rational approach to understanding the solution process.", }
The TMAK-Plus team proposes a Multi-Agent Collaboration (MAC) model for the dimensional Aspect-Based Sentiment Analysis (dimABSA) task at SIGHAN-2024. The MAC model leverages Neuro-Symbolic AI to solve dimABSA transparently and rationally through symbolic message exchanges among generative AI agents. These agents collaborate on aspect detection, opinion detection, aspect classification, and intensity estimation. We created 8 sentiment intensity agents with distinct character traits to mimic diverse sentiment perceptions and averaged their outputs. The AI agents received clear instructions and 20 training examples to ensure task understanding. Our results suggest that the MAC model is effective in solving the dimABSA task and offers a transparent and rational approach to understanding the solution process.
[ "Kang, Xin", "Zhang, Zhifei", "周嘉政, 周嘉政", "[email protected], [email protected]", "[email protected], [email protected]", "Matsumoto, Kazuyuki" ]
TMAK-Plus at SIGHAN-2024 dimABSA Task: Multi-Agent Collaboration for Transparent and Rational Sentiment Analysis
sighan-1.10
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.10/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.11.bib
@inproceedings{wangzehui-stu-ynu-edu-cn-etal-2024-ynu, title = "{YNU}-{HPCC} at {SIGHAN}-2024 dim{ABSA} Task: Using {PLM}s with a Joint Learning Strategy for Dimensional Intensity Prediction", author = "[email protected], [email protected] and Zhang, You and Wang, Jin and Xu, Dan and Zhang, Xuejie", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.11", pages = "96--101", abstract = "The dimensional approach can represent more fine-grained emotional information than discrete affective states. In this paper, a pretrained language model (PLM) with a joint learning strategy is proposed for the SIGHAN-2024 shared task on Chinese dimensional aspect-based sentiment analysis (dimABSA), which requires submitted models to provide fine-grained multi-dimensional (Valence and Arousal) intensity predictions for given aspects of a review. The proposed model consists of three parts: an input layer that concatenates both given aspect terms and input sentences; a Chinese PLM encoder that generates aspect-specific review representations; and separate linear predictors that jointly predict Valence and Arousal sentiment intensities. Moreover, we merge simplified and traditional Chinese training data for data augmentation. Our system ranked 2nd out of 5 participants in subtask 1 (intensity prediction). The code is publicly available at https://github.com/WZH5127/2024{\_}subtask1{\_}intensity{\_}prediction.", }
The dimensional approach can represent more fine-grained emotional information than discrete affective states. In this paper, a pretrained language model (PLM) with a joint learning strategy is proposed for the SIGHAN-2024 shared task on Chinese dimensional aspect-based sentiment analysis (dimABSA), which requires submitted models to provide fine-grained multi-dimensional (Valence and Arousal) intensity predictions for given aspects of a review. The proposed model consists of three parts: an input layer that concatenates both given aspect terms and input sentences; a Chinese PLM encoder that generates aspect-specific review representations; and separate linear predictors that jointly predict Valence and Arousal sentiment intensities. Moreover, we merge simplified and traditional Chinese training data for data augmentation. Our system ranked 2nd out of 5 participants in subtask 1 (intensity prediction). The code is publicly available at https://github.com/WZH5127/2024{\_}subtask1{\_}intensity{\_}prediction.
[ "[email protected], [email protected]", "Zhang, You", "Wang, Jin", "Xu, Dan", "Zhang, Xuejie" ]
YNU-HPCC at SIGHAN-2024 dimABSA Task: Using PLMs with a Joint Learning Strategy for Dimensional Intensity Prediction
sighan-1.11
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.11/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.12.bib
@inproceedings{tong-wei-2024-cciiplab, title = "{CCIIPL}ab at {SIGHAN}-2024 dim{ABSA} Task: Contrastive Learning-Enhanced Span-based Framework for {C}hinese Dimensional Aspect-Based Sentiment Analysis", author = "Tong, Zeliang and Wei, Wei", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.12", pages = "102--111", abstract = "This paper describes our system and findings for the SIGHAN-2024 Shared Task on Chinese Dimensional Aspect-Based Sentiment Analysis (dimABSA). Our team CCIIPLab proposes a Contrastive Learning-Enhanced Span-based (CL-Span) framework to boost the performance of extracting triplets/quadruples and predicting sentiment intensity. We first employ a span-based framework that integrates contextual representations and incorporates rotary position embedding. This approach fully considers the relational information of entire aspect and opinion terms, enhancing the model{'}s understanding of the associations between tokens. Additionally, we utilize contrastive learning to predict sentiment intensities in the valence-arousal dimensions with greater precision. To improve the generalization ability of the model, additional datasets are used to assist training. Experiments have validated the effectiveness of our approach. In the official test results, our system ranked 2nd in each of the three subtasks.", }
This paper describes our system and findings for the SIGHAN-2024 Shared Task on Chinese Dimensional Aspect-Based Sentiment Analysis (dimABSA). Our team CCIIPLab proposes a Contrastive Learning-Enhanced Span-based (CL-Span) framework to boost the performance of extracting triplets/quadruples and predicting sentiment intensity. We first employ a span-based framework that integrates contextual representations and incorporates rotary position embedding. This approach fully considers the relational information of entire aspect and opinion terms, enhancing the model{'}s understanding of the associations between tokens. Additionally, we utilize contrastive learning to predict sentiment intensities in the valence-arousal dimensions with greater precision. To improve the generalization ability of the model, additional datasets are used to assist training. Experiments have validated the effectiveness of our approach. In the official test results, our system ranked 2nd in each of the three subtasks.
[ "Tong, Zeliang", "Wei, Wei" ]
CCIIPLab at SIGHAN-2024 dimABSA Task: Contrastive Learning-Enhanced Span-based Framework for Chinese Dimensional Aspect-Based Sentiment Analysis
sighan-1.12
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.12/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.13.bib
@inproceedings{zhu-etal-2024-zzu, title = "{ZZU}-{NLP} at {SIGHAN}-2024 dim{ABSA} Task: Aspect-Based Sentiment Analysis with Coarse-to-Fine In-context Learning", author = "Zhu, Senbin and Zhao, Hanjie and Wxr, Wxr and [email protected], [email protected] and Jia, Yuxiang and Zan, Hongying", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.13", pages = "112--120", abstract = "The DimABSA task requires fine-grained sentiment intensity prediction for restaurant reviews, including scores for the Valence and Arousal dimensions for each Aspect Term. In this study, we propose a Coarse-to-Fine In-context Learning (CFICL) method based on the Baichuan2-7B model for the DimABSA task in the SIGHAN 2024 workshop. Our method improves prediction accuracy through a two-stage optimization process. In the first stage, we use fixed in-context examples and prompt templates to enhance the model{'}s sentiment recognition capability and provide initial predictions for the test data. In the second stage, we encode the Opinion field using BERT and select the most similar training data as new in-context examples based on similarity. These examples include the Opinion field and its scores, as well as related opinion words and their average scores. By filtering for sentiment polarity, we ensure that the examples are consistent with the test data. Our method significantly improves prediction accuracy and consistency by effectively utilizing training data and optimizing in-context examples, as validated by experimental results.", }
The DimABSA task requires fine-grained sentiment intensity prediction for restaurant reviews, including scores for the Valence and Arousal dimensions for each Aspect Term. In this study, we propose a Coarse-to-Fine In-context Learning (CFICL) method based on the Baichuan2-7B model for the DimABSA task in the SIGHAN 2024 workshop. Our method improves prediction accuracy through a two-stage optimization process. In the first stage, we use fixed in-context examples and prompt templates to enhance the model{'}s sentiment recognition capability and provide initial predictions for the test data. In the second stage, we encode the Opinion field using BERT and select the most similar training data as new in-context examples based on similarity. These examples include the Opinion field and its scores, as well as related opinion words and their average scores. By filtering for sentiment polarity, we ensure that the examples are consistent with the test data. Our method significantly improves prediction accuracy and consistency by effectively utilizing training data and optimizing in-context examples, as validated by experimental results.
[ "Zhu, Senbin", "Zhao, Hanjie", "Wxr, Wxr", "[email protected], [email protected]", "Jia, Yuxiang", "Zan, Hongying" ]
ZZU-NLP at SIGHAN-2024 dimABSA Task: Aspect-Based Sentiment Analysis with Coarse-to-Fine In-context Learning
sighan-1.13
Poster
2407.15341
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.13/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.14.bib
@inproceedings{jiang-etal-2024-jn, title = "{JN}-{NLP} at {SIGHAN}-2024 dim{ABSA} Task: Extraction of Sentiment Intensity Quadruples Based on Paraphrase Generation", author = "Jiang, Yunfan and [email protected], [email protected] and Lu, Heng-yang", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.14", pages = "121--126", abstract = "Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis task, which aims to extract multiple specific sentiment elements from text. The current aspect-based sentiment analysis task mainly involves four basic elements: aspect term, aspect category, opinion term, and sentiment polarity. With the development of ABSA, methods for predicting the four sentiment elements are gradually increasing. However, traditional ABSA usually only distinguishes between {``}positive{''}, {``}negative{''}, or {``}neutral{''} attitudes when judging sentiment polarity, and this simplified classification method makes it difficult to highlight the sentiment intensity of different reviews. SIGHAN 2024 provides a more challenging evaluation task, the Chinese dimensional ABSA shared task (dimABSA), which replaces the traditional sentiment polarity judgment task with a dataset in a multidimensional space with continuous sentiment intensity scores, including valence and arousal. Continuous sentiment intensity scores can capture more detailed emotional information. In this task, we propose a new paraphrase generation paradigm that uses generative questioning in an end-to-end manner to predict sentiment intensity quadruples, which can fully utilize semantic information and reduce propagation errors in the pipeline approach.", }
Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis task, which aims to extract multiple specific sentiment elements from text. The current aspect-based sentiment analysis task mainly involves four basic elements: aspect term, aspect category, opinion term, and sentiment polarity. With the development of ABSA, methods for predicting the four sentiment elements are gradually increasing. However, traditional ABSA usually only distinguishes between {``}positive{''}, {``}negative{''}, or {``}neutral{''} attitudes when judging sentiment polarity, and this simplified classification method makes it difficult to highlight the sentiment intensity of different reviews. SIGHAN 2024 provides a more challenging evaluation task, the Chinese dimensional ABSA shared task (dimABSA), which replaces the traditional sentiment polarity judgment task with a dataset in a multidimensional space with continuous sentiment intensity scores, including valence and arousal. Continuous sentiment intensity scores can capture more detailed emotional information. In this task, we propose a new paraphrase generation paradigm that uses generative questioning in an end-to-end manner to predict sentiment intensity quadruples, which can fully utilize semantic information and reduce propagation errors in the pipeline approach.
[ "Jiang, Yunfan", "[email protected], [email protected]", "Lu, Heng-yang" ]
JN-NLP at SIGHAN-2024 dimABSA Task: Extraction of Sentiment Intensity Quadruples Based on Paraphrase Generation
sighan-1.14
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.14/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.15.bib
@inproceedings{meng-etal-2024-ds, title = "{DS}-Group at {SIGHAN}-2024 dim{ABSA} Task: Constructing In-context Learning Structure for Dimensional Aspect-Based Sentiment Analysis", author = "Meng, Ling-ang and Zhao, Tianyu and Song, Dawei", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.15", pages = "127--132", abstract = "Aspect-Based Sentiment Analysis (ABSA) is an important subtask in Natural Language Processing (NLP). More recent research within ABSA has consistently focused on conducting more precise sentiment analysis on aspects, i.e., dimensional Aspect-Based Sentiment Analysis (dimABSA). However, previous approaches have not systematically explored the use of Large Language Models (LLMs) in dimABSA. To fill the gap, we propose a novel In-Context Learning (ICL) structure with a novel aspect-aware ICL example selection method, to enhance the performance of LLMs in dimABSA. Experiments show that our proposed ICL structure significantly improves the fine-grained sentiment analysis abilities of LLMs.", }
Aspect-Based Sentiment Analysis (ABSA) is an important subtask in Natural Language Processing (NLP). More recent research within ABSA has consistently focused on conducting more precise sentiment analysis on aspects, i.e., dimensional Aspect-Based Sentiment Analysis (dimABSA). However, previous approaches have not systematically explored the use of Large Language Models (LLMs) in dimABSA. To fill the gap, we propose a novel In-Context Learning (ICL) structure with a novel aspect-aware ICL example selection method, to enhance the performance of LLMs in dimABSA. Experiments show that our proposed ICL structure significantly improves the fine-grained sentiment analysis abilities of LLMs.
[ "Meng, Ling-ang", "Zhao, Tianyu", "Song, Dawei" ]
DS-Group at SIGHAN-2024 dimABSA Task: Constructing In-context Learning Structure for Dimensional Aspect-Based Sentiment Analysis
sighan-1.15
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.15/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.16.bib
@inproceedings{wang-etal-2024-fine, title = "Fine-tuning after Prompting: an Explainable Way for Classification", author = "Wang, Zezhong and Ye, Luyao and Wang, Hongru and Xue, Boyang and Du, Yiming and Liang, Bin and Wong, Kam-Fai", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.16", pages = "133--142", abstract = "Prompting is an alternative approach for utilizing pre-trained language models (PLMs) in classification tasks. In contrast to fine-tuning, prompting is more understandable for humans because it utilizes natural language to interact with the PLM, but it often falls short in terms of accuracy. While current research primarily focuses on enhancing the performance of prompting methods to compete with fine-tuning, we believe that these two approaches are not mutually exclusive, each having its strengths and weaknesses. In our study, we depart from the competitive view of prompting versus fine-tuning and instead combine them, introducing a novel method called F{\&}P. This approach enables us to harness the advantages of \textbf{F}ine-tuning for accuracy and the explainability of \textbf{P}rompting simultaneously. Specifically, we reformulate the sample into a prompt and subsequently fine-tune a linear classifier on top of the PLM. Following this, we extract verbalizers according to the weights of this classifier. During the inference phase, we reformulate the sample in the same way and query the PLM. The PLM generates a word, which is then subject to a dictionary lookup by the verbalizer to obtain the prediction. Experiments show that keeping only 30 keywords for each class can achieve performance comparable to fine-tuning. On the other hand, both the prompt and verbalizers are constructed in natural language, making them fully understandable to humans. Hence, the F{\&}P method offers an effective and transparent way to employ a PLM for classification tasks.", }
Prompting is an alternative approach for utilizing pre-trained language models (PLMs) in classification tasks. In contrast to fine-tuning, prompting is more understandable for humans because it utilizes natural language to interact with the PLM, but it often falls short in terms of accuracy. While current research primarily focuses on enhancing the performance of prompting methods to compete with fine-tuning, we believe that these two approaches are not mutually exclusive, each having its strengths and weaknesses. In our study, we depart from the competitive view of prompting versus fine-tuning and instead combine them, introducing a novel method called F{\&}P. This approach enables us to harness the advantages of \textbf{F}ine-tuning for accuracy and the explainability of \textbf{P}rompting simultaneously. Specifically, we reformulate the sample into a prompt and subsequently fine-tune a linear classifier on top of the PLM. Following this, we extract verbalizers according to the weights of this classifier. During the inference phase, we reformulate the sample in the same way and query the PLM. The PLM generates a word, which is then subject to a dictionary lookup by the verbalizer to obtain the prediction. Experiments show that keeping only 30 keywords for each class can achieve performance comparable to fine-tuning. On the other hand, both the prompt and verbalizers are constructed in natural language, making them fully understandable to humans. Hence, the F{\&}P method offers an effective and transparent way to employ a PLM for classification tasks.
[ "Wang, Zezhong", "Ye, Luyao", "Wang, Hongru", "Xue, Boyang", "Du, Yiming", "Liang, Bin", "Wong, Kam-Fai" ]
Fine-tuning after Prompting: an Explainable Way for Classification
sighan-1.16
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.16/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.17.bib
@inproceedings{wang-2024-causalbench, title = "{C}ausal{B}ench: A Comprehensive Benchmark for Evaluating Causal Reasoning Capabilities of Large Language Models", author = "Wang, Zeyu", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.17", pages = "143--151", abstract = "Causal reasoning, a core aspect of human cognition, is essential for advancing large language models (LLMs) towards artificial general intelligence (AGI) and reducing their propensity for generating hallucinations. However, existing datasets for evaluating causal reasoning in LLMs are limited by narrow domain coverage and a focus on cause-to-effect reasoning through textual problems, which does not comprehensively assess whether LLMs truly grasp causal relationships or merely guess correct answers. To address these shortcomings, we introduce a novel benchmark that spans textual, mathematical, and coding problem domains. Each problem is crafted to probe causal understanding from four perspectives: cause-to-effect, effect-to-cause, cause-to-effect with intervention, and effect-to-cause with intervention. This multi-dimensional evaluation method ensures that LLMs must exhibit a genuine understanding of causal structures by correctly answering questions across all four dimensions, mitigating the possibility of correct responses by chance. Furthermore, our benchmark explores the relationship between an LLM{'}s causal reasoning performance and its tendency to produce hallucinations. We present evaluations of state-of-the-art LLMs using our benchmark, providing valuable insights into their current causal reasoning capabilities across diverse domains. The dataset is publicly available for download at https://huggingface.co/datasets/CCLV/CausalBench", }
Causal reasoning, a core aspect of human cognition, is essential for advancing large language models (LLMs) towards artificial general intelligence (AGI) and reducing their propensity for generating hallucinations. However, existing datasets for evaluating causal reasoning in LLMs are limited by narrow domain coverage and a focus on cause-to-effect reasoning through textual problems, which does not comprehensively assess whether LLMs truly grasp causal relationships or merely guess correct answers. To address these shortcomings, we introduce a novel benchmark that spans textual, mathematical, and coding problem domains. Each problem is crafted to probe causal understanding from four perspectives: cause-to-effect, effect-to-cause, cause-to-effect with intervention, and effect-to-cause with intervention. This multi-dimensional evaluation method ensures that LLMs must exhibit a genuine understanding of causal structures by correctly answering questions across all four dimensions, mitigating the possibility of correct responses by chance. Furthermore, our benchmark explores the relationship between an LLM{'}s causal reasoning performance and its tendency to produce hallucinations. We present evaluations of state-of-the-art LLMs using our benchmark, providing valuable insights into their current causal reasoning capabilities across diverse domains. The dataset is publicly available for download at https://huggingface.co/datasets/CCLV/CausalBench
[ "Wang, Zeyu" ]
CausalBench: A Comprehensive Benchmark for Evaluating Causal Reasoning Capabilities of Large Language Models
sighan-1.17
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.17/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.18.bib
@inproceedings{du-etal-2024-perltqa, title = "{P}er{LTQA}: A Personal Long-Term Memory Dataset for Memory Classification, Retrieval, and Fusion in Question Answering", author = "Du, Yiming and Wang, Hongru and Zhao, Zhengyi and Liang, Bin and Wang, Baojun and Zhong, Wanjun and Wang, Zezhong and Wong, Kam-Fai", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.18", pages = "152--164", abstract = "In conversational AI, effectively employing long-term memory improves personalized and consistent response generation. Existing work has concentrated only on a single type of long-term memory, such as preferences, dialogue history, or social relationships, overlooking their interaction in real-world contexts. To this end, inspired by the concepts of semantic memory and episodic memory from cognitive psychology, we create a new and more comprehensive Chinese dataset, coined as PerLTQA, in which world knowledge, profiles, social relationships, events, and dialogues are considered to leverage the interaction between different types of long-term memory for question answering (QA) in conversation. Further, based on PerLTQA, we propose a novel framework for memory integration in QA, consisting of three subtasks: \textbf{Memory Classification}, \textbf{Memory Retrieval}, and \textbf{Memory Fusion}, which provides a comprehensive paradigm for memory modeling, enabling consistent and personalized memory utilization. This essentially allows the exploitation of more accurate memory information for better responses in QA. We evaluate this framework using five LLMs and three retrievers. Experimental results demonstrate the importance of personal long-term memory in the QA task.", }
In conversational AI, effectively employing long-term memory improves personalized and consistent response generation. Existing work has concentrated only on a single type of long-term memory, such as preferences, dialogue history, or social relationships, overlooking their interaction in real-world contexts. To this end, inspired by the concepts of semantic memory and episodic memory from cognitive psychology, we create a new and more comprehensive Chinese dataset, coined as PerLTQA, in which world knowledge, profiles, social relationships, events, and dialogues are considered to leverage the interaction between different types of long-term memory for question answering (QA) in conversation. Further, based on PerLTQA, we propose a novel framework for memory integration in QA, consisting of three subtasks: \textbf{Memory Classification}, \textbf{Memory Retrieval}, and \textbf{Memory Fusion}, which provides a comprehensive paradigm for memory modeling, enabling consistent and personalized memory utilization. This essentially allows the exploitation of more accurate memory information for better responses in QA. We evaluate this framework using five LLMs and three retrievers. Experimental results demonstrate the importance of personal long-term memory in the QA task.
[ "Du, Yiming", "Wang, Hongru", "Zhao, Zhengyi", "Liang, Bin", "Wang, Baojun", "Zhong, Wanjun", "Wang, Zezhong", "Wong, Kam-Fai" ]
PerLTQA: A Personal Long-Term Memory Dataset for Memory Classification, Retrieval, and Fusion in Question Answering
sighan-1.18
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.18/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.19.bib
@inproceedings{lee-etal-2024-overview-sighan, title = "Overview of the {SIGHAN} 2024 shared task for {C}hinese dimensional aspect-based sentiment analysis", author = "Lee, Lung-Hao and Yu, Liang-Chih and Wang, Suge and Liao, Jian", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.19", pages = "165--174", abstract = "This paper describes the SIGHAN-2024 shared task for Chinese dimensional aspect-based sentiment analysis (ABSA), including task description, data preparation, performance metrics, and evaluation results. Compared to representing affective states as several discrete classes (i.e., sentiment polarity), the dimensional approach represents affective states as continuous numerical values (called sentiment intensity) in the valence-arousal space, providing more fine-grained affective states. Therefore, we organized a dimensional ABSA (dimABSA for short) shared task, comprising three subtasks: 1) intensity prediction, 2) triplet extraction, and 3) quadruple extraction, receiving a total of 214 submissions from 61 registered participants during the evaluation phase. A total of eleven teams provided selected submissions for each subtask and seven teams submitted technical reports for the subtasks. This shared task demonstrates current NLP techniques for dealing with Chinese dimensional ABSA. All data sets with gold standards and evaluation scripts used in this shared task are publicly available for future research.", }
This paper describes the SIGHAN-2024 shared task for Chinese dimensional aspect-based sentiment analysis (ABSA), including task description, data preparation, performance metrics, and evaluation results. Compared to representing affective states as several discrete classes (i.e., sentiment polarity), the dimensional approach represents affective states as continuous numerical values (called sentiment intensity) in the valence-arousal space, providing more fine-grained affective states. Therefore, we organized a dimensional ABSA (shortened dimABSA) shared task, comprising three subtasks: 1) intensity prediction, 2) triplet extraction, and 3) quadruple extraction, receiving a total of 214 submissions from 61 registered participants during the evaluation phase. A total of eleven teams provided selected submissions for each subtask and seven teams submitted technical reports for the subtasks. This shared task demonstrates current NLP techniques for dealing with Chinese dimensional ABSA. All data sets with gold standards and evaluation scripts used in this shared task are publicly available for future research.
[ "Lee, Lung-Hao", "Yu, Liang-Chih", "Wang, Suge", "Liao, Jian" ]
Overview of the SIGHAN 2024 shared task for Chinese dimensional aspect-based sentiment analysis
sighan-1.19
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.19/
[]
[]
[]
0
https://aclanthology.org/2024.sighan-1.20.bib
@inproceedings{xu-etal-2024-hitsz, title = "{HITSZ}-{HLT} at {SIGHAN}-2024 dim{ABSA} Task: Integrating {BERT} and {LLM} for {C}hinese Dimensional Aspect-Based Sentiment Analysis", author = "Xu, Hongling and Zhang, Delong and Zhang, Yice and Xu, Ruifeng", editor = "Wong, Kam-Fai and Zhang, Min and Xu, Ruifeng and Li, Jing and Wei, Zhongyu and Gui, Lin and Liang, Bin and Zhao, Runcong", booktitle = "Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sighan-1.20", pages = "175--185", abstract = "This paper presents the winning system participating in the ACL 2024 workshop SIGHAN-10 shared task: Chinese dimensional aspect-based sentiment analysis (dimABSA). This task aims to identify four sentiment elements in restaurant reviews: aspect, category, opinion, and sentiment intensity evaluated in valence-arousal dimensions, providing a concise yet fine-grained sentiment description for user opinions. To tackle this task, we introduce a system that integrates BERT and large language models (LLM) to leverage their strengths. First, we explore their performance in entity extraction, relation classification, and intensity prediction. Based on preliminary experiments, we develop an integrated approach to fully utilize their advantages in different scenarios. Our system achieves first place in all subtasks and obtains a 41.7{\%} F1-score in quadruple extraction.", }
This paper presents the winning system participating in the ACL 2024 workshop SIGHAN-10 shared task: Chinese dimensional aspect-based sentiment analysis (dimABSA). This task aims to identify four sentiment elements in restaurant reviews: aspect, category, opinion, and sentiment intensity evaluated in valence-arousal dimensions, providing a concise yet fine-grained sentiment description for user opinions. To tackle this task, we introduce a system that integrates BERT and large language models (LLM) to leverage their strengths. First, we explore their performance in entity extraction, relation classification, and intensity prediction. Based on preliminary experiments, we develop an integrated approach to fully utilize their advantages in different scenarios. Our system achieves first place in all subtasks and obtains a 41.7{\%} F1-score in quadruple extraction.
[ "Xu, Hongling", "Zhang, Delong", "Zhang, Yice", "Xu, Ruifeng" ]
HITSZ-HLT at SIGHAN-2024 dimABSA Task: Integrating BERT and LLM for Chinese Dimensional Aspect-Based Sentiment Analysis
sighan-1.20
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sighan-1.20/
[]
[]
[]
0
https://aclanthology.org/2024.sigturk-1.1.bib
@inproceedings{kural-yuret-2024-unsupervised, title = "Unsupervised Learning of {T}urkish Morphology with Multiple Codebook {VQ}-{VAE}", author = {Kural, M{\"u}ge and Yuret, Deniz}, editor = {Ataman, Duygu and Derin, Mehmet Oguz and Ivanova, Sardana and K{\"o}ksal, Abdullatif and S{\"a}lev{\"a}, Jonne and Zeyrek, Deniz}, booktitle = "Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)", month = aug, year = "2024", address = "Bangkok, Thailand and Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigturk-1.1", pages = "1--17", abstract = "This paper presents an interpretable unsupervised morphological learning model, showing comparable performance to supervised models in learning complex morphological rules of Turkish as evidenced by its application to the problem of morphological inflection within the SIGMORPHON Shared Tasks. The significance of our unsupervised approach lies in its alignment with how humans naturally acquire rules from raw data without supervision. To achieve this, we construct a model with multiple codebooks of VQ-VAE employing continuous and discrete latent variables during word generation. We evaluate the model{'}s performance under high and low-resource scenarios, and use probing techniques to examine encoded information in latent representations. We also evaluate its generalization capabilities by testing unseen suffixation scenarios within the SIGMORPHON-UniMorph 2022 Shared Task 0. Our results demonstrate our model{'}s ability to distinguish word structures into lemmas and suffixes, with each codebook specialized for different morphological features, contributing to the interpretability of our model and effectively performing morphological inflection on both seen and unseen morphological features.", }
This paper presents an interpretable unsupervised morphological learning model, showing comparable performance to supervised models in learning complex morphological rules of Turkish as evidenced by its application to the problem of morphological inflection within the SIGMORPHON Shared Tasks. The significance of our unsupervised approach lies in its alignment with how humans naturally acquire rules from raw data without supervision. To achieve this, we construct a model with multiple codebooks of VQ-VAE employing continuous and discrete latent variables during word generation. We evaluate the model{'}s performance under high and low-resource scenarios, and use probing techniques to examine encoded information in latent representations. We also evaluate its generalization capabilities by testing unseen suffixation scenarios within the SIGMORPHON-UniMorph 2022 Shared Task 0. Our results demonstrate our model{'}s ability to distinguish word structures into lemmas and suffixes, with each codebook specialized for different morphological features, contributing to the interpretability of our model and effectively performing morphological inflection on both seen and unseen morphological features.
[ "Kural, M{\\\"u}ge", "Yuret, Deniz" ]
Unsupervised Learning of Turkish Morphology with Multiple Codebook VQ-VAE
sigturk-1.1
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sigturk-1.1/
[]
[]
[]
0
https://aclanthology.org/2024.sigturk-1.2.bib
@inproceedings{isbarov-etal-2024-open, title = "Open foundation models for {A}zerbaijani language", author = "Isbarov, Jafar and Huseynova, Kavsar and Mammadov, Elvin and Hajili, Mammad", editor = {Ataman, Duygu and Derin, Mehmet Oguz and Ivanova, Sardana and K{\"o}ksal, Abdullatif and S{\"a}lev{\"a}, Jonne and Zeyrek, Deniz}, booktitle = "Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)", month = aug, year = "2024", address = "Bangkok, Thailand and Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigturk-1.2", pages = "18--28", abstract = "The emergence of multilingual large language models has enabled the development of language understanding and generation systems in Azerbaijani. However, most of the production-grade systems rely on cloud solutions, such as GPT-4. While there have been several attempts to develop open foundation models for Azerbaijani, these works have not found their way into common use due to a lack of systemic benchmarking. This paper encompasses several lines of work that promote open-source foundation models for Azerbaijani. We introduce (1) a large text corpus for Azerbaijani, (2) a family of encoder-only language models trained on this dataset, (3) labeled datasets for evaluating these models, and (4) extensive evaluation that covers all major open-source models with Azerbaijani support.", }
The emergence of multilingual large language models has enabled the development of language understanding and generation systems in Azerbaijani. However, most of the production-grade systems rely on cloud solutions, such as GPT-4. While there have been several attempts to develop open foundation models for Azerbaijani, these works have not found their way into common use due to a lack of systemic benchmarking. This paper encompasses several lines of work that promote open-source foundation models for Azerbaijani. We introduce (1) a large text corpus for Azerbaijani, (2) a family of encoder-only language models trained on this dataset, (3) labeled datasets for evaluating these models, and (4) extensive evaluation that covers all major open-source models with Azerbaijani support.
[ "Isbarov, Jafar", "Huseynova, Kavsar", "Mammadov, Elvin", "Hajili, Mammad" ]
Open foundation models for Azerbaijani language
sigturk-1.2
Poster
2407.02337
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sigturk-1.2/
[]
[]
[]
0
https://aclanthology.org/2024.sigturk-1.3.bib
@inproceedings{halat-atlamaz-2024-implicatr, title = "{I}mplica{TR}: A Granular Dataset for Natural Language Inference and Pragmatic Reasoning in {T}urkish", author = {Halat, Mustafa and Atlamaz, {\"U}mit}, editor = {Ataman, Duygu and Derin, Mehmet Oguz and Ivanova, Sardana and K{\"o}ksal, Abdullatif and S{\"a}lev{\"a}, Jonne and Zeyrek, Deniz}, booktitle = "Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)", month = aug, year = "2024", address = "Bangkok, Thailand and Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigturk-1.3", pages = "29--41", abstract = "We introduce ImplicaTR, a linguistically informed diagnostic dataset designed to evaluate semantic and pragmatic reasoning capabilities of Natural Language Inference (NLI) models in Turkish. Existing Turkish NLI datasets treat NLI as determining whether a sentence pair represents $\textit{entailment}$, $\textit{contradiction}$, or a $\textit{neutral}$ relation. Such datasets do not distinguish between $\textit{semantic entailment}$ and $\textit{pragmatic implicature}$, which linguists have long recognized as separate inference types. ImplicaTR addresses this by testing NLI models{'} ability to differentiate between $\textit{entailment}$ and $\textit{implicature}$, thus assessing their pragmatic reasoning skills. The dataset consists of 19,350 semi-automatically generated sentence pairs covering $\textit{implicature, entailment, contradiction,}$ and $\textit{neutral}$ relations. We evaluated various models (BERT, Gemma, Llama-2, and Mistral) on ImplicaTR and found out that these models can reach up to 98{\%} accuracy on semantic and pragmatic reasoning. We also fine-tuned various models on subsets of ImplicaTR to test the abilities of NLI models to generalize across unseen implicature contexts. Our results indicate that model performance is highly dependent on the diversity of linguistic expressions within each subset, highlighting a weakness in the abstract generalization capabilities of large language models regarding pragmatic reasoning. We share all the code, models, and the dataset.", }
We introduce ImplicaTR, a linguistically informed diagnostic dataset designed to evaluate semantic and pragmatic reasoning capabilities of Natural Language Inference (NLI) models in Turkish. Existing Turkish NLI datasets treat NLI as determining whether a sentence pair represents $\textit{entailment}$, $\textit{contradiction}$, or a $\textit{neutral}$ relation. Such datasets do not distinguish between $\textit{semantic entailment}$ and $\textit{pragmatic implicature}$, which linguists have long recognized as separate inference types. ImplicaTR addresses this by testing NLI models{'} ability to differentiate between $\textit{entailment}$ and $\textit{implicature}$, thus assessing their pragmatic reasoning skills. The dataset consists of 19,350 semi-automatically generated sentence pairs covering $\textit{implicature, entailment, contradiction,}$ and $\textit{neutral}$ relations. We evaluated various models (BERT, Gemma, Llama-2, and Mistral) on ImplicaTR and found out that these models can reach up to 98{\%} accuracy on semantic and pragmatic reasoning. We also fine-tuned various models on subsets of ImplicaTR to test the abilities of NLI models to generalize across unseen implicature contexts. Our results indicate that model performance is highly dependent on the diversity of linguistic expressions within each subset, highlighting a weakness in the abstract generalization capabilities of large language models regarding pragmatic reasoning. We share all the code, models, and the dataset.
[ "Halat, Mustafa", "Atlamaz, {\\\"U}mit" ]
ImplicaTR: A Granular Dataset for Natural Language Inference and Pragmatic Reasoning in Turkish
sigturk-1.3
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sigturk-1.3/
[]
[]
[]
0
https://aclanthology.org/2024.sigturk-1.4.bib
@inproceedings{buyuktekin-ozge-2024-coreference, title = "A coreference corpus of {T}urkish situated dialogs", author = {B{\"u}y{\"u}ktekin, Faruk and {\"O}zge, Umut}, editor = {Ataman, Duygu and Derin, Mehmet Oguz and Ivanova, Sardana and K{\"o}ksal, Abdullatif and S{\"a}lev{\"a}, Jonne and Zeyrek, Deniz}, booktitle = "Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)", month = aug, year = "2024", address = "Bangkok, Thailand and Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigturk-1.4", pages = "42--52", abstract = "The paper introduces a publicly available corpus of Turkish situated dialogs annotated for coreference. We developed an annotation scheme for coreference annotation in Turkish, a language with pro-drop and rich agglutinating morphology. The annotation scheme is tailored for these aspects of the language, making it potentially applicable to similar languages. The corpus comprises 60 dialogs containing in total 3900 sentences, 18360 words, and 6120 mentions.", }
The paper introduces a publicly available corpus of Turkish situated dialogs annotated for coreference. We developed an annotation scheme for coreference annotation in Turkish, a language with pro-drop and rich agglutinating morphology. The annotation scheme is tailored for these aspects of the language, making it potentially applicable to similar languages. The corpus comprises 60 dialogs containing in total 3900 sentences, 18360 words, and 6120 mentions.
[ "B{\\\"u}y{\\\"u}ktekin, Faruk", "{\\\"O}zge, Umut" ]
A coreference corpus of Turkish situated dialogs
sigturk-1.4
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sigturk-1.4/
[]
[]
[]
0
https://aclanthology.org/2024.sigturk-1.5.bib
@inproceedings{oguz-etal-2024-llms, title = "Do {LLM}s Recognize me, When {I} is not me: Assessment of {LLM}s Understanding of {T}urkish Indexical Pronouns in Indexical Shift Contexts", author = "O{\u{g}}uz, Metehan and Ciftci, Yusuf and Bakman, Yavuz Faruk", editor = {Ataman, Duygu and Derin, Mehmet Oguz and Ivanova, Sardana and K{\"o}ksal, Abdullatif and S{\"a}lev{\"a}, Jonne and Zeyrek, Deniz}, booktitle = "Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)", month = aug, year = "2024", address = "Bangkok, Thailand and Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigturk-1.5", pages = "53--61", abstract = "Large language models (LLMs) have shown impressive capabilities in tasks such as machine translation, text summarization, question answering, and solving complex mathematical problems. However, their primary training on data-rich languages like English limits their performance in low-resource languages. This study addresses this gap by focusing on the Indexical Shift problem in Turkish. The Indexical Shift problem involves resolving pronouns in indexical shift contexts, a grammatical challenge not present in high-resource languages like English. We present the first study examining indexical shift in any language, releasing a Turkish dataset specifically designed for this purpose. Our Indexical Shift Dataset consists of 156 multiple-choice questions, each annotated with necessary linguistic details, to evaluate LLMs in a few-shot setting. We evaluate recent multilingual LLMs, including GPT-4, GPT-3.5, Cohere-AYA, Trendyol-LLM, and Turkcell-LLM, using this dataset. Our analysis reveals that even advanced models like GPT-4 struggle with the grammatical nuances of indexical shift in Turkish, achieving only moderate performance. These findings underscore the need for focused research on the grammatical challenges posed by low-resource languages. We released the dataset and code here.", }
Large language models (LLMs) have shown impressive capabilities in tasks such as machine translation, text summarization, question answering, and solving complex mathematical problems. However, their primary training on data-rich languages like English limits their performance in low-resource languages. This study addresses this gap by focusing on the Indexical Shift problem in Turkish. The Indexical Shift problem involves resolving pronouns in indexical shift contexts, a grammatical challenge not present in high-resource languages like English. We present the first study examining indexical shift in any language, releasing a Turkish dataset specifically designed for this purpose. Our Indexical Shift Dataset consists of 156 multiple-choice questions, each annotated with necessary linguistic details, to evaluate LLMs in a few-shot setting. We evaluate recent multilingual LLMs, including GPT-4, GPT-3.5, Cohere-AYA, Trendyol-LLM, and Turkcell-LLM, using this dataset. Our analysis reveals that even advanced models like GPT-4 struggle with the grammatical nuances of indexical shift in Turkish, achieving only moderate performance. These findings underscore the need for focused research on the grammatical challenges posed by low-resource languages. We released the dataset and code here.
[ "O{\\u{g}}uz, Metehan", "Ciftci, Yusuf", "Bakman, Yavuz Faruk" ]
Do LLMs Recognize me, When I is not me: Assessment of LLMs Understanding of Turkish Indexical Pronouns in Indexical Shift Contexts
sigturk-1.5
Poster
2406.05569
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sigturk-1.5/
[]
[]
[]
0
https://aclanthology.org/2024.sigturk-1.6.bib
@inproceedings{karagoz-etal-2024-towards, title = "Towards a Clean Text Corpus for {O}ttoman {T}urkish", author = {Karag{\"o}z, Fatih and Do{\u{g}}an, Berat and {\"O}zate{\c{s}}, {\c{S}}aziye Betül}, editor = {Ataman, Duygu and Derin, Mehmet Oguz and Ivanova, Sardana and K{\"o}ksal, Abdullatif and S{\"a}lev{\"a}, Jonne and Zeyrek, Deniz}, booktitle = "Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)", month = aug, year = "2024", address = "Bangkok, Thailand and Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigturk-1.6", pages = "62--70", abstract = "Ottoman Turkish, as a historical variant of modern Turkish, suffers from a scarcity of available corpora and NLP models. This paper outlines our pioneering endeavors to address this gap by constructing a clean text corpus of Ottoman Turkish materials. We detail the challenges encountered in this process and offer potential solutions. Additionally, we present a case study wherein the created corpus is employed in continual pre-training of BERTurk, followed by evaluation of the model{'}s performance on the named entity recognition task for Ottoman Turkish. Preliminary experimental results suggest the effectiveness of our corpus in adapting existing models developed for modern Turkish to historical Turkish.", }
Ottoman Turkish, as a historical variant of modern Turkish, suffers from a scarcity of available corpora and NLP models. This paper outlines our pioneering endeavors to address this gap by constructing a clean text corpus of Ottoman Turkish materials. We detail the challenges encountered in this process and offer potential solutions. Additionally, we present a case study wherein the created corpus is employed in continual pre-training of BERTurk, followed by evaluation of the model{'}s performance on the named entity recognition task for Ottoman Turkish. Preliminary experimental results suggest the effectiveness of our corpus in adapting existing models developed for modern Turkish to historical Turkish.
[ "Karag{\\\"o}z, Fatih", "Do{\\u{g}}an, Berat", "{\\\"O}zate{\\c{s}}, {\\c{S}}aziye Betül" ]
Towards a Clean Text Corpus for Ottoman Turkish
sigturk-1.6
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sigturk-1.6/
[]
[]
[]
0
https://aclanthology.org/2024.sigturk-1.7.bib
@inproceedings{biyik-etal-2024-turkish, title = "{T}urkish Delights: a Dataset on {T}urkish Euphemisms", author = "Biyik, Hasan and Lee, Patrick and Feldman, Anna", editor = {Ataman, Duygu and Derin, Mehmet Oguz and Ivanova, Sardana and K{\"o}ksal, Abdullatif and S{\"a}lev{\"a}, Jonne and Zeyrek, Deniz}, booktitle = "Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)", month = aug, year = "2024", address = "Bangkok, Thailand and Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigturk-1.7", pages = "71--80", abstract = "Euphemisms are a form of figurative language relatively understudied in natural language processing. This research extends the current computational work on potentially euphemistic terms (PETs) to Turkish. We introduce the Turkish PET dataset, the first available of its kind in the field. By creating a list of euphemisms in Turkish, collecting example contexts, and annotating them, we provide both euphemistic and non-euphemistic examples of PETs in Turkish. We describe the dataset and methodologies, and also experiment with transformer-based models on Turkish euphemism detection by using our dataset for binary classification. We compare performances across models using F1, accuracy, and precision as evaluation metrics.", }
Euphemisms are a form of figurative language relatively understudied in natural language processing. This research extends the current computational work on potentially euphemistic terms (PETs) to Turkish. We introduce the Turkish PET dataset, the first available of its kind in the field. By creating a list of euphemisms in Turkish, collecting example contexts, and annotating them, we provide both euphemistic and non-euphemistic examples of PETs in Turkish. We describe the dataset and methodologies, and also experiment with transformer-based models on Turkish euphemism detection by using our dataset for binary classification. We compare performances across models using F1, accuracy, and precision as evaluation metrics.
[ "Biyik, Hasan", "Lee, Patrick", "Feldman, Anna" ]
Turkish Delights: a Dataset on Turkish Euphemisms
sigturk-1.7
Poster
2407.13040
[ "https://github.com/hasancanbiyik/turkish_pets" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sigturk-1.7/
[]
[]
[]
0
https://aclanthology.org/2024.sigturk-1.8.bib
@inproceedings{maxutov-etal-2024-llms, title = "Do {LLM}s Speak {K}azakh? A Pilot Evaluation of Seven Models", author = "Maxutov, Akylbek and Myrzakhmet, Ayan and Braslavski, Pavel", editor = {Ataman, Duygu and Derin, Mehmet Oguz and Ivanova, Sardana and K{\"o}ksal, Abdullatif and S{\"a}lev{\"a}, Jonne and Zeyrek, Deniz}, booktitle = "Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)", month = aug, year = "2024", address = "Bangkok, Thailand and Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigturk-1.8", pages = "81--91", abstract = "We conducted a systematic evaluation of seven large language models (LLMs) on tasks in Kazakh, a Turkic language spoken by approximately 13 million native speakers in Kazakhstan and abroad. We used six datasets corresponding to different tasks {--} question answering, causal reasoning, middle school math problems, machine translation, and spelling correction. Three of the datasets were prepared for this study. As expected, the quality of the LLMs on the Kazakh tasks is lower than on the parallel English tasks. GPT-4 shows the best results, followed by Gemini and . In general, LLMs perform better on classification tasks and struggle with generative tasks. Our results provide valuable insights into the applicability of currently available LLMs for Kazakh. We made the data collected for this study publicly available: https://github.com/akylbekmaxutov/LLM-eval-using-Kazakh.", }
We conducted a systematic evaluation of seven large language models (LLMs) on tasks in Kazakh, a Turkic language spoken by approximately 13 million native speakers in Kazakhstan and abroad. We used six datasets corresponding to different tasks {--} question answering, causal reasoning, middle school math problems, machine translation, and spelling correction. Three of the datasets were prepared for this study. As expected, the quality of the LLMs on the Kazakh tasks is lower than on the parallel English tasks. GPT-4 shows the best results, followed by Gemini and . In general, LLMs perform better on classification tasks and struggle with generative tasks. Our results provide valuable insights into the applicability of currently available LLMs for Kazakh. We made the data collected for this study publicly available: https://github.com/akylbekmaxutov/LLM-eval-using-Kazakh.
[ "Maxutov, Akylbek", "Myrzakhmet, Ayan", "Braslavski, Pavel" ]
Do LLMs Speak Kazakh? A Pilot Evaluation of Seven Models
sigturk-1.8
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sigturk-1.8/
[]
[]
[]
0
https://aclanthology.org/2024.sigturk-1.9.bib
@inproceedings{zakirova-etal-2024-intelligent, title = "Intelligent Tutor to Support Teaching and Learning of {T}atar", author = "Zakirova, Alsu and Hou, Jue and Katinskaia, Anisia and Vu, Anh-Duc and Yangarber, Roman", editor = {Ataman, Duygu and Derin, Mehmet Oguz and Ivanova, Sardana and K{\"o}ksal, Abdullatif and S{\"a}lev{\"a}, Jonne and Zeyrek, Deniz}, booktitle = "Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)", month = aug, year = "2024", address = "Bangkok, Thailand and Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.sigturk-1.9", pages = "92--101", abstract = "This paper presents our work on tools to support the Tatar language, using Revita, a web-based Intelligent Tutoring System for language teaching and learning. The system allows the users {---} teachers and learners {---} to upload arbitrary authentic texts, and automatically creates exercises based on these texts that engage the learners in active production of language. It provides graduated feedback when they make mistakes, and performs continuous assessment, based on which the system selects exercises for the learners at the appropriate level. The assessment also helps the students maintain their learning pace, and helps the teachers to monitor their progress. The paper describes the functionality currently implemented for Tatar, which enables learners {---} who possess basic proficiency beyond the beginner level {---} to improve their competency, using texts of their choice as learning content. Support for Tatar is being developed to increase public interest in learning the language of this important regional minority, as well as to provide tools for improving fluency to {``}heritage speakers{''} {---} those who have substantial passive competency, but lack active fluency and need support for regular practice.", }
This paper presents our work on tools to support the Tatar language, using Revita, a web-based Intelligent Tutoring System for language teaching and learning. The system allows the users {---} teachers and learners {---} to upload arbitrary authentic texts, and automatically creates exercises based on these texts that engage the learners in active production of language. It provides graduated feedback when they make mistakes, and performs continuous assessment, based on which the system selects exercises for the learners at the appropriate level. The assessment also helps the students maintain their learning pace, and helps the teachers to monitor their progress. The paper describes the functionality currently implemented for Tatar, which enables learners {---} who possess basic proficiency beyond the beginner level {---} to improve their competency, using texts of their choice as learning content. Support for Tatar is being developed to increase public interest in learning the language of this important regional minority, as well as to provide tools for improving fluency to {``}heritage speakers{''} {---} those who have substantial passive competency, but lack active fluency and need support for regular practice.
[ "Zakirova, Alsu", "Hou, Jue", "Katinskaia, Anisia", "Vu, Anh-Duc", "Yangarber, Roman" ]
Intelligent Tutor to Support Teaching and Learning of Tatar
sigturk-1.9
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.sigturk-1.9/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.1.bib
@inproceedings{ta-etal-2024-thangdlu, title = "{T}hang{DLU} at {\#}{SMM}4{H} 2024: Encoder-decoder models for classifying text data on social disorders in children and adolescents", author = "Ta, Thang and Rahman, Abu and Najjar, Lotfollah and Gelbukh, Alexander", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.1", pages = "1--4", abstract = "This paper describes our participation in Task 3 and Task 5 of the {\#}SMM4H (Social Media Mining for Health) 2024 Workshop, explicitly targeting the classification challenges within tweet data. Task 3 is a multi-class classification task centered on tweets discussing the impact of outdoor environments on symptoms of social anxiety. Task 5 involves a binary classification task focusing on tweets reporting medical disorders in children. We applied transfer learning from pre-trained encoder-decoder models such as BART-base and T5-small to identify the labels of a set of given tweets. We also presented some data augmentation methods to see their impact on the model performance. Finally, the systems obtained the best F1 score of 0.627 in Task 3 and the best F1 score of 0.841 in Task 5.", }
This paper describes our participation in Task 3 and Task 5 of the {\#}SMM4H (Social Media Mining for Health) 2024 Workshop, explicitly targeting the classification challenges within tweet data. Task 3 is a multi-class classification task centered on tweets discussing the impact of outdoor environments on symptoms of social anxiety. Task 5 involves a binary classification task focusing on tweets reporting medical disorders in children. We applied transfer learning from pre-trained encoder-decoder models such as BART-base and T5-small to identify the labels of a set of given tweets. We also presented some data augmentation methods to see their impact on the model performance. Finally, the systems obtained the best F1 score of 0.627 in Task 3 and the best F1 score of 0.841 in Task 5.
[ "Ta, Thang", "Rahman, Abu", "Najjar, Lotfollah", "Gelbukh, Alex", "er" ]
ThangDLU at #SMM4H 2024: Encoder-decoder models for classifying text data on social disorders in children and adolescents
smm4h-1.1
Poster
2404.19714
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.1/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.2.bib
@inproceedings{fan-etal-2024-ctyun-ai, title = "{CTYUN}-{AI}@{SMM}4{H}-2024: Knowledge Extension Makes Expert Models", author = "Fan, Yuming and Yang, Dongming and Cao, Lina", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.2", pages = "5--9", abstract = "This paper explores the potential of social media as a rich source of data for understanding public health trends and behaviors, particularly focusing on emotional well-being and the impact of environmental factors. We employed large language models (LLMs) and developed a suite of knowledge extension techniques to analyze social media content related to mental health issues, specifically examining 1) effects of outdoor spaces on social anxiety symptoms in Reddit, 2) tweets reporting children{'}s medical disorders, and 3) self-reported ages in posts of Twitter and Reddit. Our knowledge extension approach encompasses both supervised data (i.e., sample augmentation and cross-task fine-tuning) and unsupervised data (i.e., knowledge distillation and cross-task pre-training), tackling the inherent challenges of sample imbalance and informality of social media language. The effectiveness of our approach is demonstrated by the superior performance across multiple tasks (i.e., Task 3, 5 and 6) at the SMM4H-2024. Notably, we achieved the best performance in all three tasks, underscoring the utility of our models in real-world applications.", }
This paper explores the potential of social media as a rich source of data for understanding public health trends and behaviors, particularly focusing on emotional well-being and the impact of environmental factors. We employed large language models (LLMs) and developed a suite of knowledge extension techniques to analyze social media content related to mental health issues, specifically examining 1) effects of outdoor spaces on social anxiety symptoms in Reddit, 2) tweets reporting children{'}s medical disorders, and 3) self-reported ages in posts of Twitter and Reddit. Our knowledge extension approach encompasses both supervised data (i.e., sample augmentation and cross-task fine-tuning) and unsupervised data (i.e., knowledge distillation and cross-task pre-training), tackling the inherent challenges of sample imbalance and informality of social media language. The effectiveness of our approach is demonstrated by the superior performance across multiple tasks (i.e., Task 3, 5 and 6) at the SMM4H-2024. Notably, we achieved the best performance in all three tasks, underscoring the utility of our models in real-world applications.
[ "Fan, Yuming", "Yang, Dongming", "Cao, Lina" ]
CTYUN-AI@SMM4H-2024: Knowledge Extension Makes Expert Models
smm4h-1.2
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.2/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.3.bib
@inproceedings{wasi-rahman-2024-dilab, title = "{DILAB} at {\#}{SMM}4{H} 2024: {R}o{BERT}a Ensemble for Identifying Children{'}s Medical Disorders in {E}nglish Tweets", author = "Wasi, Azmine Toushik and Rahman, Sheikh", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.3", pages = "10--12", abstract = "This paper details our system developed for the 9th Social Media Mining for Health Research and Applications Workshop (SMM4H 2024), addressing Task 5 focused on binary classification of English tweets reporting children{'}s medical disorders. Our objective was to enhance the detection of tweets related to children{'}s medical issues. To do this, we use various pre-trained language models, like RoBERTa and BERT. We fine-tuned these models on the task-specific dataset, adjusting model layers and hyperparameters in an attempt to optimize performance. As we observe unstable fluctuations in performance metrics during training, we implement an ensemble approach that combines predictions from different learning epochs. Our model achieves promising results, with the best-performing configuration achieving F1 score of 93.8{\%} on the validation set and 89.8{\%} on the test set.", }
This paper details our system developed for the 9th Social Media Mining for Health Research and Applications Workshop (SMM4H 2024), addressing Task 5 focused on binary classification of English tweets reporting children{'}s medical disorders. Our objective was to enhance the detection of tweets related to children{'}s medical issues. To do this, we use various pre-trained language models, like RoBERTa and BERT. We fine-tuned these models on the task-specific dataset, adjusting model layers and hyperparameters in an attempt to optimize performance. As we observe unstable fluctuations in performance metrics during training, we implement an ensemble approach that combines predictions from different learning epochs. Our model achieves promising results, with the best-performing configuration achieving F1 score of 93.8{\%} on the validation set and 89.8{\%} on the test set.
[ "Wasi, Azmine Toushik", "Rahman, Sheikh" ]
DILAB at #SMM4H 2024: RoBERTa Ensemble for Identifying Children's Medical Disorders in English Tweets
smm4h-1.3
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.3/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.4.bib
@inproceedings{rahman-wasi-2024-dilab, title = "{DILAB} at {\#}{SMM}4{H} 2024: Analyzing Social Anxiety Effects through Context-Aware Transfer Learning on {R}eddit Data", author = "Rahman, Sheikh and Wasi, Azmine Toushik", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.4", pages = "13--16", abstract = "This paper illustrates the system we design for Task 3 of the 9th Social Media Mining for Health (SMM4H 2024) shared tasks. The task presents posts made on the Reddit social media platform, specifically the *r/SocialAnxiety* subreddit, along with one or more outdoor activities as pre-determined keywords for each post. The task then requires each post to be categorized as either one of *positive*, *negative*, *no effect*, or *not outdoor activity* based on what effect the keyword(s) have on social anxiety. Our approach focuses on fine-tuning pre-trained language models to classify the posts. Additionally, we use fuzzy string matching to select only the text around the given keywords so that the model only has to focus on the contextual sentiment associated with the keywords. Using this system, our peak score is 0.65 macro-F1 on the validation set and 0.654 on test set.", }
This paper illustrates the system we design for Task 3 of the 9th Social Media Mining for Health (SMM4H 2024) shared tasks. The task presents posts made on the Reddit social media platform, specifically the *r/SocialAnxiety* subreddit, along with one or more outdoor activities as pre-determined keywords for each post. The task then requires each post to be categorized as either one of *positive*, *negative*, *no effect*, or *not outdoor activity* based on what effect the keyword(s) have on social anxiety. Our approach focuses on fine-tuning pre-trained language models to classify the posts. Additionally, we use fuzzy string matching to select only the text around the given keywords so that the model only has to focus on the contextual sentiment associated with the keywords. Using this system, our peak score is 0.65 macro-F1 on the validation set and 0.654 on the test set.
[ "Rahman, Sheikh", "Wasi, Azmine Toushik" ]
DILAB at #SMM4H 2024: Analyzing Social Anxiety Effects through Context-Aware Transfer Learning on Reddit Data
smm4h-1.4
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.4/
[]
[]
[]
0
https://aclanthology.org/2024.smm4h-1.5.bib
@inproceedings{tortoreto-mousavi-2024-dolomites, title = "Dolomites@{\#}{SMM}4{H} 2024: Helping {LLM}s {``}Know The Drill{''} in Low-Resource Settings - A Study on Social Media Posts", author = "Tortoreto, Giuliano and Mousavi, Seyed Mahed", editor = "Xu, Dongfang and Gonzalez-Hernandez, Graciela", booktitle = "Proceedings of The 9th Social Media Mining for Health Research and Applications (SMM4H 2024) Workshop and Shared Tasks", month = aug, year = "2024", address = "Bangkok, Thailand", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.smm4h-1.5", pages = "17--22", abstract = "The amount of data to fine-tune LLMs plays a crucial role in the performance of these models in downstream tasks. Consequently, it is not straightforward to deploy these models in low-resource settings. In this work, we investigate two new multi-task learning data augmentation approaches for fine-tuning LLMs when little data is available: {``}In-domain Augmentation{''} of the training data and extracting {``}Drills{''} as smaller tasks from the target dataset. We evaluate the proposed approaches in three natural language processing settings in the context of SMM4H 2024 competition tasks: multi-class classification, entity recognition, and information extraction. The results show that both techniques improve the performance of the models in all three settings, suggesting a positive impact from the knowledge learned in multi-task training to perform the target task.", }
The amount of data to fine-tune LLMs plays a crucial role in the performance of these models in downstream tasks. Consequently, it is not straightforward to deploy these models in low-resource settings. In this work, we investigate two new multi-task learning data augmentation approaches for fine-tuning LLMs when little data is available: {``}In-domain Augmentation{''} of the training data and extracting {``}Drills{''} as smaller tasks from the target dataset. We evaluate the proposed approaches in three natural language processing settings in the context of SMM4H 2024 competition tasks: multi-class classification, entity recognition, and information extraction. The results show that both techniques improve the performance of the models in all three settings, suggesting a positive impact from the knowledge learned in multi-task training to perform the target task.
[ "Tortoreto, Giuliano", "Mousavi, Seyed Mahed" ]
Dolomites@#SMM4H 2024: Helping LLMs “Know The Drill” in Low-Resource Settings - A Study on Social Media Posts
smm4h-1.5
Poster
[ "" ]
-1
-1
-1
-1
https://aclanthology.org/2024.smm4h-1.5/
[]
[]
[]
0