---
base_model: Models/llama3-8b-instruct
library_name: peft
language:
- en
---

# 🤖 PIP-KAG: Mitigating Knowledge Conflicts in Knowledge-Augmented Generation via Parametric Pruning

This is the official model for **[PIP-KAG: Mitigating Knowledge Conflicts in Knowledge-Augmented Generation via Parametric Pruning](https://arxiv.org/pdf/2502.15543)**.

PIP-KAG addresses **knowledge conflicts** in **knowledge-augmented generation** tasks through a **parametric pruning** strategy, improving the **contextual faithfulness** of language models during knowledge-intensive generation.

## 📚 Paper

For a detailed explanation of the methodology and experiments, please refer to our paper:
[**PIP-KAG: Mitigating Knowledge Conflicts in Knowledge-Augmented Generation via Parametric Pruning**](https://arxiv.org/abs/2502.15543)

## 📊 Reproduce the Results

To reproduce the experiments and benchmarks from the paper, follow the instructions in the official GitHub repository:
[👉 GitHub: OpenBMB/PIP-KAG](https://github.com/OpenBMB/PIP-KAG)

## 📁 Model Details

- Model Name: PIP-KAG-7B
- Architecture: LLaMA3-8B-Instruct with parametric pruning
- Training Data: [CoConflictQA](https://huggingface.co/datasets/chengpingan/PIP-KAG) dataset
- Tasks: Knowledge-Augmented Generation, Contextual Faithfulness Evaluation

## 🔖 Citation

If you use PIP-KAG in your work, please consider citing our paper:

```
@misc{huang2025pipkagmitigatingknowledgeconflicts,
      title={PIP-KAG: Mitigating Knowledge Conflicts in Knowledge-Augmented Generation via Parametric Pruning},
      author={Pengcheng Huang and Zhenghao Liu and Yukun Yan and Xiaoyuan Yi and Hao Chen and Zhiyuan Liu and Maosong Sun and Tong Xiao and Ge Yu and Chenyan Xiong},
      year={2025},
      eprint={2502.15543},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.15543},
}
```
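
## 🚀 Usage

Below is a minimal loading and inference sketch (not taken from the paper or repository). The Hub id `chengpingan/PIP-KAG-7B` is a placeholder; substitute the actual id of the released checkpoint. If the checkpoint is distributed as a PEFT adapter rather than a merged model (the card lists `peft` as the library), load the base LLaMA3-8B-Instruct model first and attach the adapter with `PeftModel.from_pretrained` instead.

```python
# Minimal sketch: load the model and answer a question from a provided context.
# The repository id below is a placeholder, not a confirmed release name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chengpingan/PIP-KAG-7B"  # placeholder Hub id, adjust as needed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Knowledge-augmented generation: prepend the retrieved passage to the question
# so the model is expected to answer from the given context.
context = "PIP-KAG prunes parameters that encode internal knowledge to reduce knowledge conflicts."
question = "What does PIP-KAG prune to mitigate knowledge conflicts?"
messages = [
    {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"}
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```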