---
license: mit
---

# RAGulator-deberta-v3-large

This is the out-of-context detection model from our work: [**RAGulator: Lightweight Out-of-Context Detectors for Grounded Text Generation**](https://arxiv.org/abs/2411.03920)

This repository contains the model files for the deberta-v3-large variant of RAGulator. Code can be found [here](https://github.com/ipoeyke/RAGulator).

## Key Points

* RAGulator predicts whether a sentence is out-of-context (OOC) with respect to retrieved text documents in a RAG setting.
* We preprocess a combination of summarisation and semantic textual similarity (STS) datasets to construct training data with minimal resources.
* We demonstrate two types of trained models: tree-based meta-models trained on features engineered from the preprocessed text, and BERT-based classifiers fine-tuned directly on the original text.
* We find that fine-tuned DeBERTa is not only the best-performing model under this pipeline, but is also fast and requires no additional text preprocessing or feature engineering.

## Model Details

### Dataset

Training data for RAGulator is adapted from a combination of summarisation and STS datasets to simulate RAG:

* [BBC](https://www.kaggle.com/datasets/pariza/bbc-news-summary)
* [CNN DailyMail ver. 3.0.0](https://huggingface.co/datasets/abisee/cnn_dailymail)
* [PubMed](https://huggingface.co/datasets/ccdv/pubmed-summarization)
* [MRPC from the GLUE dataset](https://huggingface.co/datasets/nyu-mll/glue/)
* [SNLI ver. 1.0](https://huggingface.co/datasets/stanfordnlp/snli)

The datasets were transformed before concatenation into the final dataset. Each row of the final dataset consists of \[`sentence`, `context`, `OOC label`\].

* For summarisation datasets, transformation was done by randomly pairing summary abstracts with unrelated articles to create OOC pairs, then sentencizing the abstracts to create one example for each abstract sentence.
* For STS datasets, transformation was done by inserting random sentences from the datasets into one of the sentences in each pair to simulate a long "context". The original labels were mapped to our OOC definition: if the original pair was labelled as dissimilar, we consider the pair OOC.

To enable training of BERT-based classifiers, each training example was split into sub-sequences of at most 512 tokens. The OOC label for each sub-sequence was derived through a generative labelling process with Llama-3.1-70b-Instruct.

### Model Training

RAGulator is fine-tuned from `microsoft/deberta-v3-large` ([He et al., 2023](https://arxiv.org/pdf/2111.09543.pdf)).
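The sketch below shows one way to run the checkpoint for inference with the 🤗 Transformers library. It is a minimal example rather than the authors' reference code: the repository id is a placeholder, the (sentence, context) input is assumed to be encoded as a standard tokenizer text pair truncated to 512 tokens, and label index 1 is assumed to correspond to OOC. See the linked code repository for the exact input format and label mapping.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repository id; replace with the actual checkpoint path.
model_id = "ipoeyke/RAGulator-deberta-v3-large"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

sentence = "The company reported a 20% rise in quarterly profit."
context = "Retrieved document text goes here ..."

# Encode the (sentence, context) pair; truncate to the 512-token limit used during training.
inputs = tokenizer(sentence, context, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Assumes a binary classification head where index 1 means out-of-context.
ooc_prob = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"P(out-of-context) = {ooc_prob:.3f}")
```

For contexts longer than 512 tokens, the training setup described above suggests splitting the context into sub-sequences, scoring each pair separately, and aggregating the per-sub-sequence predictions.

### Model Performance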