---
license: apache-2.0
language:
- en
base_model:
- nasa-impact/nasa-smd-ibm-v0.1
library_name: transformers
---

# Science Keyword Classification Model

We have fine-tuned the [INDUS model](https://huggingface.co/nasa-impact/nasa-smd-ibm-v0.1) to classify scientific keywords from NASA's Common Metadata Repository (CMR). The project aims to improve the accessibility and organization of Earth observation metadata by predicting associated keywords in an extreme multi-label classification setting.

## Model Overview

- **Base Model:** INDUS, fine-tuned for multi-label classification.
- **Loss Function:** Focal loss instead of traditional cross-entropy, to address label imbalance by focusing training on difficult-to-classify examples.
- **Dataset:** NASA's CMR metadata, filtered to remove duplicates and irrelevant labels, resulting in 42,474 records and 3,240 labels. You can find the [dataset here](https://huggingface.co/datasets/nasa-impact/science-keyword-classification-dataset).

## Key Features

- **Extreme Multi-Label Classification:** Handles classification with a vast number of potential labels (keywords) of imbalanced frequency.
- **Stratified Splitting:** The dataset is split by `provider-id` to maintain balanced representation across the train, validation, and test sets.
- **Improved Performance:** Focal loss with different focusing parameters (γ) was evaluated, showing significant improvements in weighted precision, recall, F1 score, and Jaccard similarity over cross-entropy loss and previous models.

## Label Mapping During Inference

After obtaining predictions from the model, map the predicted label indices to their label names using the `model.config.id2label` dictionary:

```python
# Example usage
predicted_indices = [0, 2, 5]  # indices of the top-3 predicted labels
predicted_labels = [model.config.id2label[idx] for idx in predicted_indices]
print(predicted_labels)
```

## Experiments

1. **Baseline (alpha-1.0.1):** Cross-entropy loss.
2. **Experiment 2 (alpha-1.1.1):** Focal loss with γ = 4.
3. **Experiment 3 (alpha-1.1.2):** Focal loss with γ = 2.
4. **Final (alpha-1.2.1):** Focal loss (γ = 2) with stratified splitting.

## Results

The model with focal loss and stratified sampling (alpha-1.2.1) outperformed all other configurations and previous models in precision, recall, F1 score, and Jaccard similarity. The weighted metrics at various thresholds can be found below.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63f0e7de9cf89c9ed1bf92a2/CvhGPUzA2vJua3uu9H3tA.png)

Please find the accompanying [technical writeup here](https://drive.google.com/file/d/1g4l5tLjeNUu3z8fcVMKuIXs7hLifVfSu/view?usp=sharing).

## References

- RoBERTa: [arXiv](https://arxiv.org/abs/1907.11692)
- Focal Loss: [arXiv](https://arxiv.org/abs/1708.02002)
- [NASA CMR](https://cmr.earthdata.nasa.gov/search)
- [Previous Model API](https://gcmd.nasa-impact.net/docs/)
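
## Appendix: Focal Loss Sketch

To make the role of the focusing parameter γ concrete, here is a minimal NumPy sketch of binary focal loss for a multi-label setting, following Lin et al. (2017). This is an illustration only, not the exact training implementation; the function name and the omission of the α class-weighting term are simplifications made here.

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, eps=1e-9):
    """Illustrative binary focal loss for multi-label classification.

    probs:   predicted per-label probabilities, shape (n_labels,)
    targets: binary ground-truth vector, shape (n_labels,)
    gamma:   focusing parameter; gamma=0 recovers binary cross-entropy.
    """
    probs = np.clip(probs, eps, 1.0 - eps)
    # p_t is the probability the model assigns to the true class of each label
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    # The (1 - p_t)^gamma factor down-weights easy, well-classified labels,
    # so training focuses on hard or rare keywords
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))
```

With confident, mostly correct predictions, raising γ from 0 (plain cross-entropy) to 2 or 4 sharply shrinks the contribution of easy labels, which is why focal loss helps under the heavy label imbalance described above.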