Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


astrollama - GGUF
- Model creator: https://huggingface.co/universeTBD/
- Original model: https://huggingface.co/universeTBD/astrollama/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [astrollama.Q2_K.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q2_K.gguf) | Q2_K | 2.36GB |
| [astrollama.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [astrollama.IQ3_S.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [astrollama.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [astrollama.IQ3_M.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [astrollama.Q3_K.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q3_K.gguf) | Q3_K | 3.07GB |
| [astrollama.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [astrollama.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [astrollama.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [astrollama.Q4_0.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q4_0.gguf) | Q4_0 | 3.56GB |
| [astrollama.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [astrollama.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [astrollama.Q4_K.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q4_K.gguf) | Q4_K | 3.8GB |
| [astrollama.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [astrollama.Q4_1.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q4_1.gguf) | Q4_1 | 3.95GB |
| [astrollama.Q5_0.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q5_0.gguf) | Q5_0 | 4.33GB |
| [astrollama.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [astrollama.Q5_K.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q5_K.gguf) | Q5_K | 4.45GB |
| [astrollama.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [astrollama.Q5_1.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q5_1.gguf) | Q5_1 | 4.72GB |
| [astrollama.Q6_K.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q6_K.gguf) | Q6_K | 5.15GB |
| [astrollama.Q8_0.gguf](https://huggingface.co/RichardErkhov/universeTBD_-_astrollama-gguf/blob/main/astrollama.Q8_0.gguf) | Q8_0 | 6.67GB |

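When choosing among the quants in the table, it can help to convert a file size into approximate bits per weight. The sketch below is illustrative only: it assumes the roughly 6.74B parameter count of the LLaMA-2 7B base model and treats the listed sizes as decimal gigabytes, neither of which is stated in this card.

```python
def bits_per_weight(file_size_gb: float, n_params: float = 6.74e9) -> float:
    """Approximate bits per weight for a quantized model file.

    file_size_gb: file size in decimal gigabytes (1 GB = 1e9 bytes).
    n_params: total parameter count (assumed ~6.74e9 for LLaMA-2 7B).
    """
    return file_size_gb * 1e9 * 8 / n_params

# Q4_K_M (3.8GB) lands near the nominal 4-5 bits of a 4-bit K-quant:
print(round(bits_per_weight(3.8), 2))  # → 4.51
```

Under these assumptions, Q2_K works out to roughly 2.8 bits per weight and Q8_0 to roughly 7.9, which matches the usual quality-versus-size tradeoff across the table.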


Original model description:
---
license: mit
datasets:
- universeTBD/arxiv-astro-abstracts-all
language:
- en
metrics:
- perplexity
pipeline_tag: text-generation
tags:
- llama-2
- astronomy
- astrophysics
- arxiv
inference: false
---

<p><h1>AstroLLaMA</h1></p>

**Play with the model in our Hugging Face space!** https://huggingface.co/spaces/universeTBD/astrollama

<p align="center">
  <img src="https://huggingface.co/universeTBD/astrollama/resolve/main/images/astrollama-logo.png" alt="AstroLLaMA" width="500px"/>
</p>

## Loading the model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    pretrained_model_name_or_path="universeTBD/astrollama"
)
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path="universeTBD/astrollama",
    device_map="auto",
)
```

## Generating text from a prompt

```python
import torch
from transformers import pipeline

generator = pipeline(
    task="text-generation",
    model=model,
    tokenizer=tokenizer,
    device_map="auto"
)

# Taken from https://arxiv.org/abs/2308.12823
prompt = (
    "In this letter, we report the discovery of the highest redshift, "
    "heavily obscured, radio-loud QSO candidate selected using JWST NIRCam/MIRI, "
    "mid-IR, sub-mm, and radio imaging in the COSMOS-Web field. "
)

# For reproducibility
torch.manual_seed(42)

generated_text = generator(
    prompt,
    do_sample=True,
    max_length=512
)
```

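The pipeline call above returns a list with one dict per generated sequence, with the text under the `generated_text` key. A minimal, standalone illustration of that return shape (the string here is a stand-in, not real model output):

```python
# Shape of a transformers text-generation pipeline result:
# a list of dicts, one per returned sequence.
sample_output = [{"generated_text": "In this letter, we report the discovery of ..."}]

# Extract the generated string from the first (and here only) sequence.
text = sample_output[0]["generated_text"]
print(text.startswith("In this letter"))  # → True
```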
## Embedding text with AstroLLaMA

```python
texts = [
    "Abstract 1",
    "Abstract 2"
]

# LLaMA tokenizers ship without a padding token, so padding=True would
# otherwise raise an error; reuse the EOS token as the pad token.
tokenizer.pad_token = tokenizer.eos_token

inputs = tokenizer(
    texts,
    return_tensors="pt",
    return_token_type_ids=False,
    padding=True,
    truncation=True,
    max_length=4096
)
inputs.to(model.device)
outputs = model(**inputs, output_hidden_states=True)

# Last layer of the hidden states. Get average embedding of all tokens
# (skipping the BOS token at position 0).
embeddings = outputs["hidden_states"][-1][:, 1:, ...].mean(1).detach().cpu().numpy()
```

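Mean-pooled embeddings like the ones above are typically compared with cosine similarity. A self-contained sketch using NumPy on stand-in vectors (real usage would pass rows of the `embeddings` array from the previous block):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in vectors in place of real AstroLLaMA embeddings:
emb1 = np.array([1.0, 0.0, 1.0])
emb2 = np.array([1.0, 0.0, 0.0])
print(round(cosine_similarity(emb1, emb2), 3))  # → 0.707
```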