karbolak committed
Commit 66f8fa5 · verified · 1 Parent(s): 51457d8

Update README.md

Files changed (1): README.md (+6 -42)
README.md CHANGED
@@ -4,7 +4,7 @@ tags:
 - lora
 - poetry
 - art
-license: mit
+license: llama3.2
 language:
 - en
 base_model:
@@ -62,48 +62,14 @@ For broader applicability, further fine-tuning with culturally diverse poetic data
 
 ## How to Get Started with the Model
 
-Use the following code snippet to begin generating poetry with the model:
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-device = "cuda:0"
-model_id = "your-model-id"  # replace with this model's Hub id
-tokenizer = AutoTokenizer.from_pretrained(model_id)
-model = AutoModelForCausalLM.from_pretrained(model_id).to(device)
-
-system_message = """
-You are an expert in poetry fusion, specializing in blending the distinct styles of two poets. Focus on emotional depth, unique metaphor usage, symbolic imagery, and rhythmic patterns. Your task is to merge not only technical elements like word choice and structure but also the deeper conceptual and emotional richness that define each poet's work.
-"""
-
-Poet_1, Poet_2 = "William Shakespeare", "Edgar Allan Poe"
-user_message = f"""
-Generate a new poem that fuses the styles of {Poet_1} and {Poet_2}, merging their use of metaphor, rhythm, and tone. Keep the poem to about 150 words.
-"""
-
-prompt = f"{system_message}\nUser: {user_message}\nAssistant:"
-
-# Custom decoding with temperature, top_p, and top_k for more creative output
-def generate_with_constraints(prompt, temperature=0.7, top_p=0.9, top_k=50):
-    inputs = tokenizer(prompt, return_tensors="pt").to(device)
-    outputs = model.generate(
-        **inputs,
-        max_new_tokens=256,
-        do_sample=True,          # sampling must be enabled for the settings below to apply
-        temperature=temperature, # higher values give more varied, creative output
-        top_p=top_p,             # nucleus sampling
-        top_k=top_k,             # restrict choices to the k most likely tokens
-        no_repeat_ngram_size=3,  # discourage repeated phrases
-    )
-    return tokenizer.decode(outputs[0], skip_special_tokens=True)
-
-# Generate the poem with creativity-enhancing parameters
-poetry_output = generate_with_constraints(prompt, temperature=0.8, top_p=0.85, top_k=40)
-print(poetry_output)
-```
+You may find all necessary information about the model deployment in the Jupyter Notebook "Poetry_Fusion_using_Llama_3.2.ipynb".
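With the commit, the notebook becomes the reference for deployment. As a quick orientation, a minimal sketch of loading a LoRA fine-tune of Llama 3.2 for inference with `peft` might look like the following (both repository ids are hypothetical placeholders, not confirmed by this card):

```python
# Minimal sketch, assuming the adapter is published as a peft LoRA checkpoint.
# Both ids below are hypothetical placeholders, not taken from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-1B"         # assumed base model id
adapter_id = "karbolak/poetry-fusion-lora"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights

prompt = "User: Write a short poem fusing Shakespeare and Poe.\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For serving, `model.merge_and_unload()` folds the adapter into the base weights so the result behaves as a plain `transformers` model.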
 
 ## Training Details
 
+You may find all necessary information about the model training in the Jupyter Notebook "Poetry_Fusion_using_Llama_3.2.ipynb".
+
 ### Training Data
-The model was fine-tuned on a subset of a larger poetry dataset, which includes 524 English poems, with 227 poems specifically from our selected poets. This curated set allowed the model to focus on the unique attributes of each poet’s work while providing additional data for improved generalization.
+The model was fine-tuned on a subset of a larger poetry dataset, "LLM_Dataset.csv", which can be found in the repository files. It includes 524 English poems, with 227 poems specifically from our selected poets. This curated set allowed the model to focus on the unique attributes of each poet’s work while providing additional data for improved generalization.
 
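As a rough illustration of how such a subset might be selected from the CSV (the column name "author" is an assumption, not taken from the repository):

```python
# Illustrative sketch only; "LLM_Dataset.csv" is named in the card, but the
# column name below is assumed, not confirmed by the repository.
import pandas as pd

df = pd.read_csv("LLM_Dataset.csv")                  # 524 English poems in total
selected = ["William Shakespeare", "Edgar Allan Poe"]
focus = df[df["author"].isin(selected)]              # 227 poems by the selected poets
print(f"{len(df)} poems total, {len(focus)} from the selected poets")
```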
 ### Training Procedure
 To achieve high-quality output, the model was fine-tuned using the following parameters:
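The parameter list itself falls outside this diff excerpt. Purely as an illustration of what a LoRA fine-tuning setup of this kind involves, a sketch with `peft` (every value below is a hypothetical placeholder, not the card's actual configuration):

```python
# Hypothetical LoRA configuration; all values are placeholders and do not
# reflect the actual training parameters listed in the README.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")  # assumed base
config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor applied to the update
    lora_dropout=0.05,                    # dropout on the adapter inputs
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()        # only the adapter weights are trainable
```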
@@ -143,6 +109,4 @@ If you reference this work, please use the following citation:
 For questions or further information, please reach out to the authors:
 - **Natalie Mladenova** - [email protected]
 - **Karolina Kozikowska** - [email protected]
-- **Kajetan Karbowski** - [email protected]
-```
----
+- **Kajetan Karbowski** - [email protected]
 