|
--- |
|
language: en |
|
tags: |
|
- summarization |
|
- medical |
|
library_name: transformers |
|
pipeline_tag: summarization |
|
--- |
|
|
|
# Automatic Personalized Impression Generation for PET Reports Using Large Language Models
|
|
|
**Authored by**: Xin Tie, Muheon Shin, Ali Pirasteh, Nevein Ibrahim, Zachary Huemann, Sharon M. Castellino, Kara Kelly, John Garrett, Junjie Hu, Steve Y. Cho, Tyler J. Bradshaw |
|
|
|
[Read the full paper](https://arxiv.org/abs/2309.10066) |
|
<!-- Link to our Arxiv paper --> |
|
|
|
## Model Description
|
|
|
This repository hosts the domain-adapted BARTScore model for evaluating the quality of automatically generated PET impressions.
|
|
|
To explore our domain-adapted, text-generation-based evaluation metrics, see:
|
- [BARTScore+PET](https://huggingface.co/xtie/BARTScore-PET) |
|
- [PEGASUSScore+PET](https://huggingface.co/xtie/PEGASUSScore-PET) |
|
- [T5Score+PET](https://huggingface.co/xtie/T5Score-PET)
|
|
|
|
|
|
|
## Usage
|
|
|
Clone the companion GitHub repository into a local folder:
|
```bash |
|
git clone https://github.com/xtie97/PET-Report-Summarization.git |
|
``` |
|
|
|
Go to the folder containing the code for computing BARTScore and create a `checkpoints/bart-large` directory:
|
```bash |
|
cd ./PET-Report-Summarization/evaluation_metrics/metrics/BARTScore |
|
mkdir -p checkpoints/bart-large
|
``` |
|
|
|
Download the model weights and place them in the folder `checkpoints/bart-large`; a sketch of one way to do this is shown below.
|
```bash
|
python compute_metrics_text_generation.py |
|
``` |
|
|
|
## Additional Resources
|
- **Codebase for evaluation metrics:** [GitHub](https://github.com/xtie97/PET-Report-Summarization/tree/main/evaluation_metrics) |
|
--- |
|
|