Spaces:
Running
Running
Rename project to "Smooth Paraphraser"
Browse files- Updated all references of the app name from "Paraphrasing App" to "Smooth Paraphraser".
- Adjusted UI titles, headers, and documentation accordingly.
- Ensures consistent branding across the application.
- README.md +11 -36
- app.py +11 -20
- requirements.txt +1 -3
README.md
CHANGED
@@ -1,39 +1,14 @@
|
|
1 |
-
|
2 |
-
title: Paraphrasing App
|
3 |
-
emoji: π
|
4 |
-
colorFrom: indigo
|
5 |
-
colorTo: blue
|
6 |
-
sdk: gradio
|
7 |
-
sdk_version: "4.29.0"
|
8 |
-
app_file: app.py
|
9 |
-
pinned: false
|
10 |
-
---
|
11 |
|
12 |
-
|
13 |
|
14 |
-
|
15 |
-
|
|
|
|
|
16 |
|
17 |
-
##
|
18 |
-
|
19 |
-
|
20 |
-
|
21 |
-
|
22 |
-
## π οΈ Requirements
|
23 |
-
All dependencies are listed in `requirements.txt`:
|
24 |
-
- `transformers`
|
25 |
-
- `torch`
|
26 |
-
- `sentencepiece`
|
27 |
-
- `tiktoken`
|
28 |
-
- `gradio`
|
29 |
-
|
30 |
-
## π‘ Example
|
31 |
-
Input:
|
32 |
-
> "The quick brown fox jumps over the lazy dog."
|
33 |
-
|
34 |
-
Output:
|
35 |
-
- "A fast brown fox leaps over a lazy dog."
|
36 |
-
- "The lazy dog was jumped over by a quick brown fox."
|
37 |
-
|
38 |
-
---
|
39 |
-
Built with β€οΈ using Hugging Face Spaces
|
|
|
1 |
+
# Smooth Paraphraser
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
2 |
|
3 |
+
Smooth Paraphraser is a simple Hugging Face Space app built with **Gradio** and a pretrained **T5 model** for paraphrasing text.
|
4 |
|
5 |
+
## Features
|
6 |
+
- Enter any text and get a smooth paraphrased version.
|
7 |
+
- Powered by Hugging Face `transformers` library.
|
8 |
+
- Runs with Gradio for a clean UI.
|
9 |
|
10 |
+
## How to Run Locally
|
11 |
+
```bash
|
12 |
+
pip install -r requirements.txt
|
13 |
+
python app.py
|
14 |
+
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
app.py
CHANGED
@@ -1,29 +1,20 @@
|
|
1 |
import gradio as gr
|
2 |
-
from transformers import
|
3 |
|
4 |
-
# Load
|
5 |
-
|
6 |
-
tokenizer = AutoTokenizer.from_pretrained(model_name)
|
7 |
-
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
|
8 |
|
9 |
-
def paraphrase(text
|
10 |
-
|
11 |
-
|
12 |
-
outputs = model.generate(
|
13 |
-
inputs,
|
14 |
-
max_length=512,
|
15 |
-
num_beams=num_beams,
|
16 |
-
num_return_sequences=num_return_sequences,
|
17 |
-
temperature=1.5
|
18 |
-
)
|
19 |
-
return [tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True) for output in outputs]
|
20 |
|
|
|
21 |
demo = gr.Interface(
|
22 |
fn=paraphrase,
|
23 |
-
inputs=
|
24 |
-
outputs=
|
25 |
-
title="
|
26 |
-
description="
|
27 |
)
|
28 |
|
29 |
if __name__ == "__main__":
|
|
|
1 |
import gradio as gr
|
2 |
+
from transformers import pipeline
|
3 |
|
4 |
+
# Load paraphrasing pipeline
|
5 |
+
paraphraser = pipeline("text2text-generation", model="Vamsi/T5_Paraphrase_Paws")
|
|
|
|
|
6 |
|
7 |
+
def paraphrase(text):
|
8 |
+
result = paraphraser(text, max_length=100, num_return_sequences=1)
|
9 |
+
return result[0]['generated_text']
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
10 |
|
11 |
+
# Gradio interface
|
12 |
demo = gr.Interface(
|
13 |
fn=paraphrase,
|
14 |
+
inputs=gr.Textbox(lines=4, placeholder="Enter text to paraphrase..."),
|
15 |
+
outputs="text",
|
16 |
+
title="Smooth Paraphraser",
|
17 |
+
description="A simple app to smoothly paraphrase text using a pretrained T5 transformer model."
|
18 |
)
|
19 |
|
20 |
if __name__ == "__main__":
|
requirements.txt
CHANGED
@@ -1,5 +1,3 @@
|
|
1 |
-
transformers
|
2 |
torch
|
3 |
-
sentencepiece
|
4 |
-
tiktoken
|
5 |
gradio==4.29.0
|
|
|
1 |
+
transformers==4.42.4
|
2 |
torch
|
|
|
|
|
3 |
gradio==4.29.0
|