---
license: mit
---


# RAGulator-deberta-v3-large

This is the out-of-context detection model from our work:

[**RAGulator: Lightweight Out-of-Context Detectors for Grounded Text Generation**](https://arxiv.org/abs/2411.03920)

This repository contains model files for the deberta-v3-large variant of RAGulator. Code can be found [here](https://github.com/ipoeyke/RAGulator).

## Key Points
* RAGulator predicts whether a sentence is out-of-context (OOC) from retrieved text documents in a RAG setting.
* We preprocess a combination of summarisation and semantic textual similarity (STS) datasets to construct training data using minimal resources.
* We demonstrate two types of trained models: tree-based meta-models trained on features engineered from preprocessed text, and BERT-based classifiers fine-tuned directly on the original text.
* We find that fine-tuned DeBERTa is not only the best-performing model under this pipeline, but also fast, requiring no additional text preprocessing or feature engineering.

## Model Details

### Dataset
Training data for RAGulator is adapted from a combination of summarisation and STS datasets to simulate RAG:
* [BBC](https://www.kaggle.com/datasets/pariza/bbc-news-summary)
* [CNN DailyMail ver. 3.0.0](https://huggingface.co/datasets/abisee/cnn_dailymail)
* [PubMed](https://huggingface.co/datasets/ccdv/pubmed-summarization)
* [MRPC from the GLUE dataset](https://huggingface.co/datasets/nyu-mll/glue/)
* [SNLI ver. 1.0](https://huggingface.co/datasets/stanfordnlp/snli)

The datasets were transformed before concatenation into the final dataset. Each row of the final dataset consists of \[`sentence`, `context`, `OOC label`\].
* For summarisation datasets, transformation was done by randomly pairing summary abstracts with unrelated articles to create OOC pairs, then sentencizing the abstracts to create one example for each abstract sentence.
* For STS datasets, transformation was done by inserting random sentences from the datasets around one of the sentences in the pair to simulate a long "context". The original labels were mapped to our OOC definition: if the original pair was indicated as dissimilar, we consider the pair OOC. Both transformations are sketched below.
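
The exact preprocessing code lives in the [repository](https://github.com/ipoeyke/RAGulator); the snippet below is only a rough sketch of the two transformations, assuming simple record layouts (`article`/`summary` for summarisation data and `sentence1`/`sentence2`/`label` for STS data) that are illustrative, not the actual pipeline.

```python
import random

rng = random.Random(0)

def make_summarisation_examples(records):
    """Sketch: pair each summary sentence with its own article (in-context)
    or a randomly chosen unrelated article (OOC)."""
    examples = []
    for i, rec in enumerate(records):
        for sent in rec["summary"].split(". "):  # naive sentence split for illustration
            ooc = rng.random() < 0.5
            if ooc:
                j = rng.randrange(len(records) - 1)
                if j >= i:
                    j += 1  # guarantee an unrelated article
                context = records[j]["article"]
            else:
                context = rec["article"]
            examples.append({"sentence": sent, "context": context, "ooc_label": int(ooc)})
    return examples

def make_sts_examples(pairs, n_distractors=5):
    """Sketch: bury one sentence of each STS pair among random sentences to
    simulate a long retrieved context; dissimilar pairs become OOC."""
    pool = [p["sentence2"] for p in pairs]
    examples = []
    for p in pairs:
        distractors = rng.sample(pool, k=min(n_distractors, len(pool)))
        cut = rng.randrange(len(distractors) + 1)
        context = " ".join(distractors[:cut] + [p["sentence2"]] + distractors[cut:])
        examples.append({
            "sentence": p["sentence1"],
            "context": context,
            "ooc_label": int(p["label"] == "dissimilar"),  # label mapping is an assumption
        })
    return examples
```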

To enable training of BERT-based classifiers, each training example was split into sub-sequences of at most 512 tokens. The OOC label for each sub-sequence was derived through a generative labelling process with Llama-3.1-70b-Instruct.
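
The windowing below is a simplified sketch of that splitting step; the actual stride and the Llama-based labelling of each window follow the paper, not this snippet. It only illustrates how the 512-token budget is shared between the sentence and a context window.

```python
from transformers import DebertaV2Tokenizer

tokenizer = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v3-large")

def split_into_subsequences(sentence: str, context: str, max_length: int = 512):
    """Yield context windows so that each (sentence, window) pair fits in max_length tokens."""
    sent_ids = tokenizer(sentence, add_special_tokens=False)["input_ids"]
    ctx_ids = tokenizer(context, add_special_tokens=False)["input_ids"]
    budget = max(max_length - len(sent_ids) - 3, 1)  # reserve room for [CLS] and two [SEP]
    for start in range(0, len(ctx_ids), budget):
        yield tokenizer.decode(ctx_ids[start:start + budget])
```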

### Model Training
RAGulator is fine-tuned from `microsoft/deberta-v3-large` ([He et al., 2023](https://arxiv.org/pdf/2111.09543.pdf)).
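
The exact training configuration is described in the paper and repository; the snippet below is only an illustrative fine-tuning sketch, with placeholder hyperparameters and a two-row toy dataset standing in for the simulated RAG data.

```python
from datasets import Dataset
from transformers import (DebertaV2ForSequenceClassification, DebertaV2Tokenizer,
                          Trainer, TrainingArguments)

tokenizer = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v3-large")
model = DebertaV2ForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-large", num_labels=2
)

# toy stand-in for the simulated RAG dataset described above (1 = OOC)
raw = Dataset.from_dict({
    "sentence": ["A claim supported by its context.", "An unrelated claim."],
    "context": ["A document backing the claim.", "A document about something else."],
    "label": [0, 1],
})

def tokenize(batch):
    return tokenizer(batch["sentence"], batch["context"], padding="max_length",
                     truncation="longest_first", max_length=512)

train_ds = raw.map(tokenize, batched=True)

# hyperparameters below are placeholder assumptions, not the paper's values
args = TrainingArguments(
    output_dir="./ragulator-deberta-v3-large",
    per_device_train_batch_size=8,
    learning_rate=1e-5,
    num_train_epochs=1,
)

Trainer(model=model, args=args, train_dataset=train_ds).train()
```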

### Model Performance
<p align="center">
    <img src="./model-performance.png" width="700">
</p>


We compare our models against an LLM-as-a-judge baseline (Llama-3.1-70b-Instruct). We evaluate on both a held-out split of our simulated RAG dataset and an out-of-distribution collection of private enterprise data consisting of RAG responses from a real use case.

The deberta-v3-large variant is our best-performing model, showing a 19% increase in AUROC and a 17% increase in F1 score over the LLM baseline despite being significantly smaller than Llama-3.1.
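
For reference, both metrics can be computed with scikit-learn from OOC scores produced as in the usage example below; this is a generic illustration with dummy values, not the paper's evaluation code.

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

labels = np.array([0, 1, 1, 0])              # dummy gold OOC labels
ooc_scores = np.array([0.1, 0.8, 0.6, 0.3])  # dummy model scores for the OOC class

auroc = roc_auc_score(labels, ooc_scores)               # threshold-free ranking quality
f1 = f1_score(labels, (ooc_scores >= 0.5).astype(int))  # 0.5 threshold is illustrative
print(f"AUROC={auroc:.3f}  F1={f1:.3f}")
```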

## Basic Usage
```python
import torch
from transformers import DebertaV2Tokenizer, DebertaV2ForSequenceClassification

model_path = "./ragulator-deberta-v3-large"  # assuming the model folder is located here
tokenizer = DebertaV2Tokenizer.from_pretrained(model_path)
model = DebertaV2ForSequenceClassification.from_pretrained(
    model_path,
    num_labels=2
)
model.eval()

# input: sentence-context pairs to score
sentences = ["This is the first sentence", "This is the second sentence"]
contexts = ["This is the first context", "This is the second context"]
inputs = tokenizer(
    sentences,
    contexts,
    add_special_tokens=True,
    return_token_type_ids=True,
    return_attention_mask=True,
    padding='max_length',
    max_length=512,
    truncation='longest_first',
    return_tensors='pt'
)

# forward pass
with torch.no_grad():
    outputs = model(**inputs)

# OOC score: softmax probability of the out-of-context class
fn = torch.nn.Softmax(dim=-1)
ooc_scores = fn(outputs.logits).cpu().numpy()[:, 1]
```
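
Each entry of `ooc_scores` is the predicted probability that the sentence is out-of-context with respect to its paired context. To obtain binary OOC labels, threshold the scores; 0.5 below is an illustrative choice, not a calibrated value from the paper.

```python
ooc_labels = (ooc_scores >= 0.5).astype(int)  # 1 = out-of-context; threshold is an assumption
```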

## Usage - batch and long-context inference
We provide a simple wrapper that demonstrates batch inference and handles long-context examples. First, install the package:
```bash
pip install "ragulator @ git+https://github.com/ipoeyke/RAGulator.git@main"
```
```python
from ragulator import RAGulator

model = RAGulator(
    model_name='deberta-v3-large',  # only value supported for now
    batch_size=32,
    device='cpu'
)

# input
sentences = ["This is the first sentence", "This is the second sentence"]
contexts = ["This is the first context", "This is the second context"]

# batch inference
model.infer_batch(
    sentences,
    contexts,
    return_probas=True  # True for OOC probabilities, False for binary labels
)
```

## Citation
```
@misc{poey2024ragulatorlightweightoutofcontextdetectors,
      title={RAGulator: Lightweight Out-of-Context Detectors for Grounded Text Generation},
      author={Ian Poey and Jiajun Liu and Qishuai Zhong and Adrien Chenailler},
      year={2024},
      eprint={2411.03920},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.03920},
}
```