---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- TinyLlama
- QLoRA
- Politics
- News
- sft
language:
- en
pipeline_tag: text-generation
---

# TinyNewsLlama-1.1B

TinyNewsLlama-1.1B is a QLoRA SFT fine-tune of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on a sample of a concentrated version of the [bigNews](https://paperswithcode.com/dataset/bignews) dataset. The model was fine-tuned for ~12 hours on a single A100 40GB, on ~125M tokens.
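
The exact training script and hyperparameters have not been published. Purely as an illustration, a comparable QLoRA SFT run with `transformers`, `peft`, and `trl` could be set up roughly like the sketch below; the data file name and every hyperparameter are assumptions, not the configuration actually used.

```python
# Illustrative QLoRA SFT sketch -- NOT the actual TinyNewsLlama-1.1B training code.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTTrainer

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

# 4-bit NF4 quantization of the frozen base model: the "Q" in QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Small trainable low-rank adapters on top of the quantized weights
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

# Hypothetical file holding the concentrated bigNews sample
dataset = load_dataset("json", data_files="concentrated_bignews_sample.jsonl", split="train")

trainer = SFTTrainer(               # trl<0.9-style keyword arguments
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",      # assumes articles are stored under a "text" key
    max_seq_length=2048,
)
trainer.train()
```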

The goal of this project is to study whether the domain-specific (in this case political) knowledge of small (<3B) LLMs can be improved by concentrating the training dataset's TF-IDF with respect to the underlying topics found in the original dataset.
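
The concentration procedure itself will be documented together with the dataset release. As an illustration only (the helper below and its parameters are hypothetical, not the method actually used), "concentrating" a corpus by TF-IDF could mean keeping the articles whose TF-IDF mass over a set of topic terms is highest:

```python
# Hypothetical sketch of TF-IDF concentration -- NOT the published procedure.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def concentrate(articles, topic_terms, keep_fraction=0.25):
    """Keep the top `keep_fraction` of articles by TF-IDF mass on `topic_terms`."""
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(articles)               # (n_articles, n_vocab)
    vocab = vectorizer.vocabulary_
    cols = [vocab[t] for t in topic_terms if t in vocab]     # columns of the topic terms
    scores = np.asarray(tfidf[:, cols].sum(axis=1)).ravel()  # per-article topic mass
    keep = np.argsort(scores)[::-1][: max(1, int(len(articles) * keep_fraction))]
    return [articles[i] for i in sorted(keep)]

articles = [
    "Parliament votes on the Brexit withdrawal agreement.",
    "A local bakery wins a national award for its sourdough.",
    "The election campaign enters its final week in Washington.",
    "A new smartphone ships with an improved camera.",
]
political = concentrate(articles, ["brexit", "parliament", "election"], keep_fraction=0.5)
```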

The training data contains political news articles from **The New York Times**, **USA Today**, and **The Washington Times**. The concentrated bigNews dataset, together with more information about the sample used here, will be added soon.

## 💻 Usage

```python
# Install dependencies (notebook environments)
!pip install -qU transformers accelerate

import torch
import transformers
from transformers import AutoTokenizer

model = "h4rz3rk4s3/TinyNewsLlama-1.1B"
messages = [
    {
        "role": "system",
        "content": "You are an experienced journalist.",
    },
    {"role": "user", "content": "Write a short article on Brexit and its impact on the European Union."},
]

# Build the prompt with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in half precision and generate
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```