---
license: mit
language:
- en
base_model:
- microsoft/deberta-v3-large
pipeline_tag: text-classification
---

# FactCG for Large Language Model Ungrounded Hallucination Detection

This is the fact-checking model from our work:

[**FactCG: Enhancing Fact Checkers with Graph-Based Multi-Hop Data**](https://arxiv.org/pdf/2501.17144) (NAACL 2025, [GitHub Repo](https://github.com/derenlei/FactCG))

You can load our model with the following example code:
```python
from transformers import (
    AutoConfig,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

model_name = "yaxili96/FactCG-DeBERTa-v3-Large"

# Two labels: the model is a binary sequence classifier.
config = AutoConfig.from_pretrained(
    model_name,
    num_labels=2,
    finetuning_task="text-classification",
    cache_dir="./cache",
)
config.problem_type = "single_label_classification"

tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    use_fast=True,
    cache_dir="./cache",
)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    config=config,
    cache_dir="./cache",
)
```
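After loading, the model scores a (document, claim) pair and the higher-probability class gives the verdict. The snippet below sketches only the post-processing on example logits so it runs standalone; the input pairing order and the label meaning (index 1 = claim grounded) are assumptions here — check the GitHub repo for the exact convention.

```python
import math

# In practice the logits come from the loaded model, e.g.:
#   inputs = tokenizer(document, claim, truncation=True, return_tensors="pt")
#   logits = model(**inputs).logits[0].tolist()
# Example values stand in for real model output below.
logits = [-1.2, 2.3]

# Softmax over the two classes.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Assumed mapping (verify in the repo): index 1 = claim grounded.
pred = probs.index(max(probs))
print(pred, f"{probs[pred]:.3f}")  # → 1 0.971
```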

If you find the repository or FactCG helpful, please cite the following paper:
```bibtex
@inproceedings{lei2025factcg,
  title={FactCG: Enhancing Fact Checkers with Graph-Based Multi-Hop Data},
  author={Lei, Deren and Li, Yaxi and Li, Siyao and Hu, Mengya and Xu, Rui and Archer, Ken and Wang, Mingyu and Ching, Emily and Deng, Alex},
  booktitle={NAACL},
  year={2025}
}
```