bartowski committed · Commit cea0c71 · verified · 1 Parent(s): 8fae93a

Update README.md

Files changed (1): README.md (+7 -6)
README.md CHANGED
@@ -1,8 +1,9 @@
 ---
 quantized_by: bartowski
-pipeline_tag: text-generation
-extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
-  agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
+pipeline_tag: image-text-to-text
+extra_gated_prompt: >-
+  To access Gemma on Hugging Face, you’re required to review and agree to
+  Google’s usage license. To do this, please ensure you’re logged in to Hugging
   Face and click below. Requests are processed immediately.
 extra_gated_button_content: Acknowledge license
 license: gemma
@@ -48,8 +49,8 @@ After building with Gemma 3 clip support, run the following command:
 
 | Filename | Quant type | File Size | Split | Description |
 | -------- | ---------- | --------- | ----- | ----------- |
-| [mmproj-gemma-3-12b-it-f32.gguf](https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF/blob/main/mmproj-google_gemma-3-12b-it-f32.gguf) | f32 | 1.68GB | false | F32 format MMPROJ file, required for vision. |
-| [mmproj-gemma-3-12b-it-f16.gguf](https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF/blob/main/mmproj-google_gemma-3-12b-it-f16.gguf) | f16 | 851MB | false | F16 format MMPROJ file, required for vision. |
+| [mmproj-gemma-3-12b-it-f32.gguf](https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF/blob/main/mmproj-google_gemma-3-12b-it-f32.gguf) | f32 | 1.69GB | false | F32 format MMPROJ file, required for vision. |
+| [mmproj-gemma-3-12b-it-f16.gguf](https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF/blob/main/mmproj-google_gemma-3-12b-it-f16.gguf) | f16 | 854MB | false | F16 format MMPROJ file, required for vision. |
 | [gemma-3-12b-it-bf16.gguf](https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF/blob/main/google_gemma-3-12b-it-bf16.gguf) | bf16 | 23.54GB | false | Full BF16 weights. |
 | [gemma-3-12b-it-Q8_0.gguf](https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF/blob/main/google_gemma-3-12b-it-Q8_0.gguf) | Q8_0 | 12.51GB | false | Extremely high quality, generally unneeded but max available quant. |
 | [gemma-3-12b-it-Q6_K_L.gguf](https://huggingface.co/bartowski/google_gemma-3-12b-it-GGUF/blob/main/google_gemma-3-12b-it-Q6_K_L.gguf) | Q6_K_L | 9.90GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
@@ -189,4 +190,4 @@ Thank you ZeroWw for the inspiration to experiment with embed/output.
 
 Thank you to LM Studio for sponsoring my work.
 
-Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
+Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
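The mmproj entries added in this diff pair a quantized model file with a vision projector. As a hedged sketch only: the binary name, local file paths, and image path below are illustrative (the README's own "After building with Gemma 3 clip support" command is authoritative), but the general shape — passing the model with `-m` and the projector with `--mmproj` — follows llama.cpp's multimodal CLI convention:

```shell
# Illustrative sketch: assumes you have built llama.cpp with Gemma 3
# clip/vision support and downloaded both files from the repo above.
# Binary name and paths are placeholders; check your build output.
./llama-gemma3-cli \
  -m google_gemma-3-12b-it-Q6_K_L.gguf \
  --mmproj mmproj-google_gemma-3-12b-it-f16.gguf \
  --image ./example.jpg \
  -p "Describe this image."
```

Either mmproj variant from the table works here; f16 halves the projector's size versus f32 with little practical quality difference.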