---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- en
tags:
- Social Media
- News Media
- Sentiment
- Stance
- Emotion
pretty_name: >-
  LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media
  Content -- English
size_categories:
- 10K
---

## LlamaLens

This repository includes the scripts needed to run our full pipeline: data preprocessing and sampling, instruction-dataset creation, model fine-tuning, inference, and evaluation.

### Features

- Multilingual support (Arabic, English, Hindi)
- 19 NLP tasks with 52 datasets
- Optimized for news and social media content analysis

## 📂 Dataset Overview

### English Datasets

| **Task**                  | **Dataset**                                  | **# Labels** | **# Train** | **# Test** | **# Dev** |
|---------------------------|----------------------------------------------|--------------|-------------|------------|-----------|
| Checkworthiness           | CT24_T1                                      | 2            | 22,403      | 1,031      | 318       |
| Claim                     | claim-detection                              | 2            | 23,224      | 7,267      | 5,815     |
| Cyberbullying             | Cyberbullying                                | 6            | 32,551      | 9,473      | 4,751     |
| Emotion                   | emotion                                      | 6            | 280,551     | 82,454     | 41,429    |
| Factuality                | News_dataset                                 | 2            | 28,147      | 8,616      | 4,376     |
| Factuality                | Politifact                                   | 6            | 14,799      | 4,230      | 2,116     |
| News Genre Categorization | CNN_News_Articles_2011-2022                  | 6            | 32,193      | 5,682      | 9,663     |
| News Genre Categorization | News_Category_Dataset                        | 42           | 145,748     | 41,740     | 20,899    |
| News Genre Categorization | SemEval23T3-subtask1                         | 3            | 302         | 83         | 130       |
| Summarization             | xlsum                                        | --           | 306,493     | 11,535     | 11,535    |
| Offensive Language        | Offensive_Hateful_Dataset_New                | 2            | 42,000      | 5,252      | 5,254     |
| Offensive Language        | offensive_language_dataset                   | 2            | 29,216      | 3,653      | 3,653     |
| Offensive/Hate-Speech     | hate-offensive-speech                        | 3            | 48,944      | 2,799      | 2,802     |
| Propaganda                | QProp                                        | 2            | 35,986      | 10,159     | 5,125     |
| Sarcasm                   | News-Headlines-Dataset-For-Sarcasm-Detection | 2            | 19,965      | 5,719      | 2,858     |
| Sentiment                 | NewsMTSC-dataset                             | 3            | 7,739       | 747        | 320       |
| Subjectivity              | clef2024-checkthat-lab                       | 2            | 825         | 484        | 219       |

## Results

Below, we present the performance of **LlamaLens** on **English** compared to the existing SOTA (where available) and the Llama-Instruct baseline. The "Δ" (Delta) column is calculated as **(LlamaLens – SOTA)**.
| **Task**           | **Dataset**           | **Metric** | **SOTA** | **Llama-Instruct** | **LlamaLens** | **Δ (LlamaLens – SOTA)** |
|--------------------|-----------------------|-----------:|---------:|-------------------:|--------------:|-------------------------:|
| News Summarization | xlsum                 | R-2        | 0.152    | 0.074              | 0.141         | -0.011                   |
| News Genre         | CNN_News_Articles     | Acc        | 0.940    | 0.644              | 0.915         | -0.025                   |
| News Genre         | News_Category         | Ma-F1      | 0.769    | 0.970              | 0.505         | -0.264                   |
| News Genre         | SemEval23T3-ST1       | Mi-F1      | 0.815    | 0.687              | 0.241         | -0.574                   |
| Subjectivity       | CT24_T2               | Ma-F1      | 0.744    | 0.535              | 0.508         | -0.236                   |
| Emotion            | emotion               | Ma-F1      | 0.790    | 0.353              | 0.878         | 0.088                    |
| Sarcasm            | News-Headlines        | Acc        | 0.897    | 0.668              | 0.956         | 0.059                    |
| Sentiment          | NewsMTSC              | Ma-F1      | 0.817    | 0.628              | 0.627         | -0.190                   |
| Checkworthiness    | CT24_T1               | F1_Pos     | 0.753    | 0.404              | 0.877         | 0.124                    |
| Claim              | claim-detection       | Mi-F1      | –        | 0.545              | 0.915         | –                        |
| Factuality         | News_dataset          | Acc        | 0.920    | 0.654              | 0.946         | 0.026                    |
| Factuality         | Politifact            | W-F1       | 0.490    | 0.121              | 0.290         | -0.200                   |
| Propaganda         | QProp                 | Ma-F1      | 0.667    | 0.759              | 0.851         | 0.184                    |
| Cyberbullying      | Cyberbullying         | Acc        | 0.907    | 0.175              | 0.847         | -0.060                   |
| Offensive          | Offensive_Hateful     | Mi-F1      | –        | 0.692              | 0.805         | –                        |
| Offensive          | offensive_language    | Mi-F1      | 0.994    | 0.646              | 0.884         | -0.110                   |
| Offensive & Hate   | hate-offensive-speech | Acc        | 0.945    | 0.602              | 0.924         | -0.021                   |

## File Format

Each JSONL file in the dataset follows a structured format with the following fields:

- `id`: Unique identifier for each data entry.
- `original_id`: Identifier from the original dataset, if available.
- `input`: The original text that needs to be analyzed.
- `output`: The label assigned to the text after analysis.
- `dataset`: Name of the dataset the entry belongs to.
- `task`: The specific task type.
- `lang`: The language of the input text.
- `instructions`: A brief set of instructions describing how the text should be labeled.
- `text`: A formatted structure including the instructions and response for the task in a conversational format between the system, user, and assistant, showing the decision process.

**Example entry in JSONL file:**

```json
{
  "id": "3fe3eb6a-843e-4a03-b38c-8333c052f4c4",
  "original_id": "nan",
  "input": "You know, I saw a movie - \"Crocodile Dundee.\"",
  "output": "not_checkworthy",
  "dataset": "CT24_checkworthy",
  "task": "Checkworthiness",
  "lang": "en",
  "instructions": "Analyze the given text and label it as 'checkworthy' if it includes a factual statement that is significant or relevant to verify, or 'not_checkworthy' if it's not worth checking. Return only the label without any explanation, justification or additional text.",
  "text": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>You are a social media expert providing accurate analysis and insights.<|eot_id|><|start_header_id|>user<|end_header_id|>Analyze the given text and label it as 'checkworthy' if it includes a factual statement that is significant or relevant to verify, or 'not_checkworthy' if it's not worth checking.\nReturn only the label without any explanation, justification or additional text.\ninput: You know, I saw a movie - \"Crocodile Dundee.\"\nlabel: <|eot_id|><|start_header_id|>assistant<|end_header_id|>not_checkworthy<|eot_id|><|end_of_text|>"
}
```

## Model

[**LlamaLens on Hugging Face**](https://huggingface.co/QCRI/LlamaLens)

## Replication Scripts

[**LlamaLens GitHub Repository**](https://github.com/firojalam/LlamaLens)

## 📢 Citation

If you use this dataset, please cite our [paper](https://arxiv.org/pdf/2410.15308):

```
@article{kmainasi2024llamalensspecializedmultilingualllm,
  title={LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media Content},
  author={Mohamed Bayan Kmainasi and Ali Ezzat Shahroor and Maram Hasanain and Sahinur Rahman Laskar and Naeemul Hassan and Firoj Alam},
  year={2024},
  journal={arXiv preprint arXiv:2410.15308},
  url={https://arxiv.org/abs/2410.15308},
  eprint={2410.15308},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
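## Loading the Data

The fields documented above can be read with any JSONL reader. Below is a minimal sketch in Python; since the actual data file names are not assumed here, an in-memory sample (abbreviated from the example entry above, with the long `instructions` and `text` fields omitted) stands in for a real file, so the snippet is self-contained.

```python
import io
import json

# Stand-in for a real LlamaLens data file: one JSON object per line,
# using the fields documented in the "File Format" section above.
sample_file = io.StringIO(
    '{"id": "3fe3eb6a-843e-4a03-b38c-8333c052f4c4", "original_id": "nan", '
    '"input": "You know, I saw a movie - \\"Crocodile Dundee.\\"", '
    '"output": "not_checkworthy", "dataset": "CT24_checkworthy", '
    '"task": "Checkworthiness", "lang": "en"}\n'
)

# Parse each non-empty line into a dict.
entries = [json.loads(line) for line in sample_file if line.strip()]

for entry in entries:
    # Pair each input text with its gold label, e.g. for classification.
    print(f'[{entry["task"]}/{entry["lang"]}] {entry["input"]} -> {entry["output"]}')
```

To read a real split, replace the `io.StringIO` sample with `open(path, encoding="utf-8")` for the file you downloaded.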