Dataset Preview
The full dataset viewer is not available. Only a preview of the rows is shown.
The dataset generation failed
Error code: DatasetGenerationError
Exception: CastError
Message:
```
Couldn't cast
model_id: string
card: string
fullJson: string
metadata: string
-- schema metadata --
pandas: '{"index_columns": [], "column_indexes": [], "columns": [{"name":' + 531
to
{'model_id': Value('string'), 'metadata': Value('string'), 'card': Value('string')}
because column names don't match
```
Traceback:
```
Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1405, in compute_config_parquet_and_info_response
    fill_builder_info(builder, hf_endpoint=hf_endpoint, hf_token=hf_token, validate=validate)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 578, in fill_builder_info
    ) = retry_validate_get_features_num_examples_size_and_compression_ratio(
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 497, in retry_validate_get_features_num_examples_size_and_compression_ratio
    validate(pf)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 535, in validate
    raise TooBigRowGroupsError(
worker.job_runners.config.parquet_and_info.TooBigRowGroupsError: Parquet file has too big row groups. First row group has 7112093879 which exceeds the limit of 300000000

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1815, in _prepare_split_single
    for _, table in generator:
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 692, in wrapped
    for item in generator(*args, **kwargs):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 106, in _generate_tables
    yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 73, in _cast_table
    pa_table = table_cast(pa_table, self.info.features.arrow_schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2272, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
model_id: string
card: string
fullJson: string
metadata: string
-- schema metadata --
pandas: '{"index_columns": [], "column_indexes": [], "columns": [{"name":' + 531
to
{'model_id': Value('string'), 'metadata': Value('string'), 'card': Value('string')}
because column names don't match

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1428, in compute_config_parquet_and_info_response
    parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 994, in stream_convert_to_parquet
    builder._prepare_split(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1702, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
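Both failures point at the uploaded Parquet file itself: it carries an extra `fullJson` column that is not among the declared features (the `CastError`), and its first row group is far larger than the viewer's per-row-group limit (the `TooBigRowGroupsError`). A minimal repair sketch with pyarrow, assuming the file is available locally as `train.parquet` (the filename and row-group size are illustrative, not taken from this dataset):
```python
import pyarrow.parquet as pq

table = pq.read_table("train.parquet")

# Keep only the columns the dataset's features declare; the extra
# "fullJson" column is what triggers the CastError above.
table = table.select(["model_id", "metadata", "card"])

# row_group_size counts rows, not bytes; with large string cells like
# these model cards, fewer rows per group keeps each group's byte size
# under the viewer's limit.
pq.write_table(table, "train_fixed.parquet", row_group_size=100)
```
Re-uploading the rewritten file (or regenerating it upstream without `fullJson` and with smaller row groups) should let the viewer convert the dataset.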
| model_id (string) | metadata (string) | card (string) |
|---|---|---|
m3rg-iitd/matscibert
|
N/A
|
N/A
|
0-hero/FLUX.1-Fill-dev
|
{"id": "0-hero/FLUX.1-Fill-dev", "author": "0-hero", "sha": "1f8cf464fd37bc77a9a129f781e7928fc0e2a080", "last_modified": "2025-02-13 17:08:28+00:00", "created_at": "2025-02-13 16:11:26+00:00", "private": false, "gated": false, "disabled": false, "downloads": 0, "downloads_all_time": null, "likes": 1, "library_name": null, "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["region:us"], "pipeline_tag": null, "mask_token": null, "trending_score": null, "card_data": null, "widget_data": null, "model_index": null, "config": null, "transformers_info": null, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='ae.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='clip_l.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='comfyui_portrait_lora64.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='flux1-fill-dev.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='fluxFillFP8_v10.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='t5xxl_fp16.safetensors', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": null, "security_repo_status": null, "xet_enabled": null, "lastModified": "2025-02-13 17:08:28+00:00", "cardData": null, "transformersInfo": null, "_id": "67ae19ae7af05a21a7fadad4", "modelId": "0-hero/FLUX.1-Fill-dev", "usedStorage": 46689487020}
|
N/A
|
0-hero/Matter-0.1-7B
|
{"id": "0-hero/Matter-0.1-7B", "author": "0-hero", "sha": "035c8193ce71be90be7d90098669afb9164ec6cb", "last_modified": "2024-03-21 06:03:16+00:00", "created_at": "2024-03-20 05:57:38+00:00", "private": false, "gated": false, "disabled": false, "downloads": 15, "downloads_all_time": null, "likes": 1, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:0-hero/Matter-0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": "datasets:\n- 0-hero/Matter-0.1\nlanguage:\n- en\nlicense: apache-2.0", "widget_data": [{"text": "Hi, what can you help me with?"}, {"text": "What is 84 * 3 / 2?"}, {"text": "Tell me an interesting fact about the universe!"}, {"text": "Explain quantum computing in simple terms."}], "model_index": null, "config": {"architectures": ["MistralForCausalLM"], "model_type": "mistral", "tokenizer_config": {"bos_token": "<s>", "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = 'You are a helpful assistant.' %}{% endif %}{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in loop_messages %}{% if loop.index0 == 0 %}{{'<|im_start|>system\n' + system_message + '<|im_end|>\n'}}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", "eos_token": "<|im_end|>", "pad_token": "</s>", "unk_token": "<unk>", "use_default_system_prompt": false}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='added_tokens.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00001-of-00003.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00002-of-00003.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00003-of-00003.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors.index.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.model', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"BF16": 7241781248}, "total": 7241781248}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-03-21 06:03:16+00:00", "cardData": "datasets:\n- 0-hero/Matter-0.1\nlanguage:\n- en\nlicense: apache-2.0", "transformersInfo": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, 
"_id": "65fa7ad23cf67ad371ecfbbd", "modelId": "0-hero/Matter-0.1-7B", "usedStorage": 14484089763}
|
---
license: apache-2.0
datasets:
- 0-hero/Matter-0.1
language:
- en
---
## Matter 7B - 0.1 (Mistral 7B Finetune)
Matter 7B is a full fine-tune of Mistral 7B on the [Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1), which is curated from over 35 datasets totaling >6B tokens.
### Training
Prompt format: This model uses the ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
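The repo's tokenizer ships this template as a Jinja `chat_template` (visible in the metadata column above), so the prompt can be rendered with transformers rather than assembled by hand; a minimal sketch:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("0-hero/Matter-0.1-7B")
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Explain quantum computing in simple terms."},
]
# add_generation_prompt=True appends the trailing "<|im_start|>assistant\n"
# so the model continues as the assistant.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```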
### Function Calling
The model also supports function calling, with additional special tokens.
Model function call tokens:
- `<|begin_func|>` - Function call start token
- `<|end_func|>` - Function call end token
Function call response tokens:
- `<|begin_func_response|>` - Function response start token
- `<|end_func_response|>` - Function response end token
Example:
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
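Acting on a call like the one above means extracting the payload from between the special tokens. A minimal sketch, assuming the raw generation is held in `output`; note the card's example wraps `arguments` in a single-quoted string inside otherwise JSON-like text, so the payload is parsed as a Python literal first:
```python
import ast
import json
import re

output = (
    '<|begin_func|>{"name": "get_news_headlines", '
    '"arguments": \'{"country": "United States"}\'}<|end_func|>'
)

match = re.search(r"<\|begin_func\|>(.*?)<\|end_func\|>", output, re.DOTALL)
if match:
    call = ast.literal_eval(match.group(1))  # dict with single-quoted value
    args = json.loads(call["arguments"])     # inner arguments are JSON
    print(call["name"], args)  # get_news_headlines {'country': 'United States'}
```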
|
0-hero/Matter-0.1-7B-DPO-preview
|
{"id": "0-hero/Matter-0.1-7B-DPO-preview", "author": "0-hero", "sha": "78040e4754051df49dd907cf1fd46a6b8a6cc30f", "last_modified": "2024-03-23 04:13:08+00:00", "created_at": "2024-03-19 11:27:26+00:00", "private": false, "gated": false, "disabled": false, "downloads": 26, "downloads_all_time": null, "likes": 1, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:0-hero/Matter-0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": "datasets:\n- 0-hero/Matter-0.1\nlanguage:\n- en\nlicense: apache-2.0", "widget_data": [{"text": "Hi, what can you help me with?"}, {"text": "What is 84 * 3 / 2?"}, {"text": "Tell me an interesting fact about the universe!"}, {"text": "Explain quantum computing in simple terms."}], "model_index": null, "config": {"architectures": ["MistralForCausalLM"], "model_type": "mistral", "tokenizer_config": {"bos_token": "<s>", "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = 'You are a helpful assistant.' %}{% endif %}{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in loop_messages %}{% if loop.index0 == 0 %}{{'<|im_start|>system\n' + system_message + '<|im_end|>\n'}}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", "eos_token": "<|im_end|>", "pad_token": "</s>", "unk_token": "<unk>", "use_default_system_prompt": false}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='added_tokens.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00001-of-00003.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00002-of-00003.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00003-of-00003.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors.index.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.model', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"BF16": 7241781248}, "total": 7241781248}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-03-23 04:13:08+00:00", "cardData": "datasets:\n- 0-hero/Matter-0.1\nlanguage:\n- en\nlicense: apache-2.0", "transformersInfo": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": 
"AutoTokenizer"}, "_id": "65f9769e8213fe9ea8116673", "modelId": "0-hero/Matter-0.1-7B-DPO-preview", "usedStorage": 43451282403}
|
---
license: apache-2.0
datasets:
- 0-hero/Matter-0.1
language:
- en
---
## Matter 7B - 0.1 - DPO (Mistral 7B Finetune)
DPO version of [Matter 7B](https://huggingface.co/0-hero/Matter-0.1-7B), fine-tuned on the [Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1), which is curated from over 35 datasets totaling >6B tokens.
### Training
Prompt format: This model uses the ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
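For an end-to-end check, the template above can be fed straight into generation; a minimal sketch with transformers (the model id comes from the metadata above, everything else is illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "0-hero/Matter-0.1-7B-DPO-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Tell me an interesting fact about the universe!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
# <|im_end|> is this repo's eos token, so generation stops at end-of-turn.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```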
### Function Calling
The model also supports function calling, with additional special tokens.
Model function call tokens:
- `<|begin_func|>` - Function call start token
- `<|end_func|>` - Function call end token
Function call response tokens:
- `<|begin_func_response|>` - Function response start token
- `<|end_func_response|>` - Function response end token
Example:
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
|
0-hero/Matter-0.1-7B-boost
|
{"id": "0-hero/Matter-0.1-7B-boost", "author": "0-hero", "sha": "07ba98e43a2f9e587050e9d6f84af5e0040c3e0e", "last_modified": "2024-03-23 18:07:10+00:00", "created_at": "2024-03-19 11:26:56+00:00", "private": false, "gated": false, "disabled": false, "downloads": 23, "downloads_all_time": null, "likes": 1, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:0-hero/Matter-0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": "datasets:\n- 0-hero/Matter-0.1\nlanguage:\n- en\nlicense: apache-2.0", "widget_data": [{"text": "Hi, what can you help me with?"}, {"text": "What is 84 * 3 / 2?"}, {"text": "Tell me an interesting fact about the universe!"}, {"text": "Explain quantum computing in simple terms."}], "model_index": null, "config": {"architectures": ["MistralForCausalLM"], "model_type": "mistral", "tokenizer_config": {"bos_token": "<s>", "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = 'You are a helpful assistant. Please give a long and detailed answer.' %}{% endif %}{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in loop_messages %}{% if loop.index0 == 0 %}{{'<|im_start|>system\n' + system_message + '<|im_end|>\n'}}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", "eos_token": "<|im_end|>", "pad_token": "</s>", "unk_token": "<unk>", "use_default_system_prompt": false}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='added_tokens.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.model', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"BF16": 7241781248}, "total": 7241781248}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-03-23 18:07:10+00:00", "cardData": "datasets:\n- 0-hero/Matter-0.1\nlanguage:\n- en\nlicense: apache-2.0", "transformersInfo": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "_id": "65f97680c88ab7d1fb3c81c9", "modelId": "0-hero/Matter-0.1-7B-boost", "usedStorage": 57934891741}
|
---
license: apache-2.0
datasets:
- 0-hero/Matter-0.1
language:
- en
---
## Matter 7B - 0.1 Boost (Mistral 7B Finetune)
Matter 7B is a full fine-tune of Mistral 7B on the [Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1), which is curated from over 35 datasets totaling >6B tokens.
The Boost version is fine-tuned on additional data.
### Training
Prompt format: This model uses the ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
### Function Calling
The model also supports function calling, with additional special tokens.
Model function call tokens:
- `<|begin_func|>` - Function call start token
- `<|end_func|>` - Function call end token
Function call response tokens:
- `<|begin_func_response|>` - Function response start token
- `<|end_func_response|>` - Function response end token
Example:
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
|
0-hero/Matter-0.1-7B-boost-DPO-preview
|
{"id": "0-hero/Matter-0.1-7B-boost-DPO-preview", "author": "0-hero", "sha": "619faec83e91b303d54a691c375dcedf50fdfc44", "last_modified": "2024-03-23 18:07:36+00:00", "created_at": "2024-03-21 13:04:58+00:00", "private": false, "gated": false, "disabled": false, "downloads": 17, "downloads_all_time": null, "likes": 3, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:0-hero/Matter-0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": "datasets:\n- 0-hero/Matter-0.1\nlanguage:\n- en\nlicense: apache-2.0", "widget_data": [{"text": "Hi, what can you help me with?"}, {"text": "What is 84 * 3 / 2?"}, {"text": "Tell me an interesting fact about the universe!"}, {"text": "Explain quantum computing in simple terms."}], "model_index": null, "config": {"architectures": ["MistralForCausalLM"], "model_type": "mistral", "tokenizer_config": {"bos_token": "<s>", "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", "eos_token": "<|im_end|>", "pad_token": "</s>", "unk_token": "<unk>", "use_default_system_prompt": false}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='added_tokens.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.model', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"BF16": 7241781248}, "total": 7241781248}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-03-23 18:07:36+00:00", "cardData": "datasets:\n- 0-hero/Matter-0.1\nlanguage:\n- en\nlicense: apache-2.0", "transformersInfo": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "_id": "65fc307a2c813664be6289a1", "modelId": "0-hero/Matter-0.1-7B-boost-DPO-preview", "usedStorage": 28967692696}
|
---
license: apache-2.0
datasets:
- 0-hero/Matter-0.1
language:
- en
---
## Matter 7B - 0.1 Boost - DPO (Mistral 7B Finetune)
DPO version of [Matter 7B Boost](https://huggingface.co/0-hero/Matter-0.1-7B-boost), fine-tuned on the [Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1), which is curated from over 35 datasets totaling >6B tokens.
### Training
Prompt format: This model uses the ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
### Function Calling
The model also supports function calling, with additional special tokens.
Model function call tokens:
- `<|begin_func|>` - Function call start token
- `<|end_func|>` - Function call end token
Function call response tokens:
- `<|begin_func_response|>` - Function response start token
- `<|end_func_response|>` - Function response end token
Example:
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
|
0-hero/Matter-0.1-Slim-7B-A
|
{"id": "0-hero/Matter-0.1-Slim-7B-A", "author": "0-hero", "sha": "af6d32df0ed40f2b07cea3bacf21fddb1e632332", "last_modified": "2024-04-03 06:02:46+00:00", "created_at": "2024-03-13 15:56:35+00:00", "private": false, "gated": false, "disabled": false, "downloads": 10, "downloads_all_time": null, "likes": 2, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:0-hero/Matter-0.1-Slim-A", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": "datasets:\n- 0-hero/Matter-0.1-Slim-A\nlanguage:\n- en\nlicense: apache-2.0", "widget_data": [{"text": "Hi, what can you help me with?"}, {"text": "What is 84 * 3 / 2?"}, {"text": "Tell me an interesting fact about the universe!"}, {"text": "Explain quantum computing in simple terms."}], "model_index": null, "config": {"architectures": ["MistralForCausalLM"], "model_type": "mistral", "tokenizer_config": {"bos_token": "<s>", "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = 'You are a helpful assistant.' %}{% endif %}{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in loop_messages %}{% if loop.index0 == 0 %}{{'<|im_start|>system\n' + system_message + '<|im_end|>\n'}}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", "eos_token": "<|im_end|>", "pad_token": "</s>", "unk_token": "<unk>", "use_default_system_prompt": false}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='added_tokens.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.model', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"BF16": 7241781248}, "total": 7241781248}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-04-03 06:02:46+00:00", "cardData": "datasets:\n- 0-hero/Matter-0.1-Slim-A\nlanguage:\n- en\nlicense: apache-2.0", "transformersInfo": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "_id": "65f1ccb389aedc3dbe241402", "modelId": "0-hero/Matter-0.1-Slim-7B-A", "usedStorage": 28967692696}
|
---
license: apache-2.0
datasets:
- 0-hero/Matter-0.1-Slim-A
language:
- en
---
## Matter 7B (Mistral 7B Finetune)
Matter 7B is fine-tuned on the [slim-A version of the Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1-Slim-A) (~285k rows), which is curated from over 35 datasets totaling >6B tokens.
### Training
The full fine-tune took ~15 hours for 3 epochs on 4x A100s (80GB) using Axolotl.
Prompt format: This model uses the ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
### Function Calling
The model also supports function calling, with additional special tokens.
Model function call tokens:
- `<|begin_func|>` - Function call start token
- `<|end_func|>` - Function call end token
Function call response tokens:
- `<|begin_func_response|>` - Function response start token
- `<|end_func_response|>` - Function response end token
Example:
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
|
0-hero/Matter-0.1-Slim-7B-B
|
{"id": "0-hero/Matter-0.1-Slim-7B-B", "author": "0-hero", "sha": "a46569b4db2d150517491072d447e182d90c9b42", "last_modified": "2024-04-07 07:27:36+00:00", "created_at": "2024-03-14 13:35:58+00:00", "private": false, "gated": false, "disabled": false, "downloads": 17, "downloads_all_time": null, "likes": 0, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:0-hero/Matter-0.1-Slim-B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": "datasets:\n- 0-hero/Matter-0.1-Slim-B\nlanguage:\n- en\nlicense: apache-2.0", "widget_data": [{"text": "Hi, what can you help me with?"}, {"text": "What is 84 * 3 / 2?"}, {"text": "Tell me an interesting fact about the universe!"}, {"text": "Explain quantum computing in simple terms."}], "model_index": null, "config": {"architectures": ["MistralForCausalLM"], "model_type": "mistral", "tokenizer_config": {"bos_token": "<s>", "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = 'You are a helpful assistant.' %}{% endif %}{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in loop_messages %}{% if loop.index0 == 0 %}{{'<|im_start|>system\n' + system_message + '<|im_end|>\n'}}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", "eos_token": "<|im_end|>", "pad_token": "</s>", "unk_token": "<unk>", "use_default_system_prompt": false}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='added_tokens.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.model', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"BF16": 7241781248}, "total": 7241781248}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-04-07 07:27:36+00:00", "cardData": "datasets:\n- 0-hero/Matter-0.1-Slim-B\nlanguage:\n- en\nlicense: apache-2.0", "transformersInfo": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "_id": "65f2fd3e0433be9a4a520a31", "modelId": "0-hero/Matter-0.1-Slim-7B-B", "usedStorage": 28967692696}
|
---
license: apache-2.0
datasets:
- 0-hero/Matter-0.1-Slim-B
language:
- en
---
## Matter 7B (Mistral 7B Finetune)
Matter 7B is an extended full fine-tune on the [slim-B version of the Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1-Slim-B), which is curated from over 35 datasets totaling >6B tokens.
### Training
The full fine-tune took ~15 hours for 3 epochs on 4x A100s (80GB) using Axolotl.
Prompt format: This model uses the ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
### Function Calling
The model also supports function calling, with additional special tokens.
Model function call tokens:
- `<|begin_func|>` - Function call start token
- `<|end_func|>` - Function call end token
Function call response tokens:
- `<|begin_func_response|>` - Function response start token
- `<|end_func_response|>` - Function response end token
Example:
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
|
0-hero/Matter-0.1-Slim-7B-C
|
{"id": "0-hero/Matter-0.1-Slim-7B-C", "author": "0-hero", "sha": "f66a5281d14ed87d920ac22a5530d1438ca2f456", "last_modified": "2024-04-03 08:37:19+00:00", "created_at": "2024-03-15 05:39:43+00:00", "private": false, "gated": false, "disabled": false, "downloads": 20, "downloads_all_time": null, "likes": 0, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": "license: apache-2.0", "widget_data": [{"text": "Hi, what can you help me with?"}, {"text": "What is 84 * 3 / 2?"}, {"text": "Tell me an interesting fact about the universe!"}, {"text": "Explain quantum computing in simple terms."}], "model_index": null, "config": {"architectures": ["MistralForCausalLM"], "model_type": "mistral", "tokenizer_config": {"bos_token": "<s>", "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = 'You are a helpful assistant.' %}{% endif %}{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in loop_messages %}{% if loop.index0 == 0 %}{{'<|im_start|>system\n' + system_message + '<|im_end|>\n'}}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", "eos_token": "<|im_end|>", "pad_token": "</s>", "unk_token": "<unk>", "use_default_system_prompt": false}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='added_tokens.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.model', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"BF16": 7241781248}, "total": 7241781248}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-04-03 08:37:19+00:00", "cardData": "license: apache-2.0", "transformersInfo": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "_id": "65f3df1f5f86a2581bd6299b", "modelId": "0-hero/Matter-0.1-Slim-7B-C", "usedStorage": 28967692696}
|
---
license: apache-2.0
---
## Matter 7B (Mistral 7B Finetune)
Matter 7B is a continued full fine-tune on the [slim-C version of the Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1-Slim-C), which is curated from over 35 datasets totaling >6B tokens.
### Training
The full fine-tune took ~17 hours for 3 epochs on 4x A100s (80GB) using Axolotl.
Prompt format: This model uses the ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
### Function Calling
The model also supports function calling, with additional special tokens.
Model function call tokens:
- `<|begin_func|>` - Function call start token
- `<|end_func|>` - Function call end token
Function call response tokens:
- `<|begin_func_response|>` - Function response start token
- `<|end_func_response|>` - Function response end token
Example:
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
|
0-hero/Matter-0.1-Slim-7B-C-DPO
|
{"id": "0-hero/Matter-0.1-Slim-7B-C-DPO", "author": "0-hero", "sha": "559c7e12127c8978a6a809c56a3da18c9a7b84e3", "last_modified": "2024-04-07 07:27:27+00:00", "created_at": "2024-03-17 14:20:21+00:00", "private": false, "gated": false, "disabled": false, "downloads": 10, "downloads_all_time": null, "likes": 0, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": "license: apache-2.0", "widget_data": [{"text": "Hi, what can you help me with?"}, {"text": "What is 84 * 3 / 2?"}, {"text": "Tell me an interesting fact about the universe!"}, {"text": "Explain quantum computing in simple terms."}], "model_index": null, "config": {"architectures": ["MistralForCausalLM"], "model_type": "mistral", "tokenizer_config": {"bos_token": "<s>", "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = 'You are a helpful assistant.' %}{% endif %}{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in loop_messages %}{% if loop.index0 == 0 %}{{'<|im_start|>system\n' + system_message + '<|im_end|>\n'}}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", "eos_token": "<|im_end|>", "pad_token": "</s>", "unk_token": "<unk>", "use_default_system_prompt": false}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='added_tokens.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.model', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"BF16": 7241781248}, "total": 7241781248}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-04-07 07:27:27+00:00", "cardData": "license: apache-2.0", "transformersInfo": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "_id": "65f6fc25ae5c84dcaf37db9d", "modelId": "0-hero/Matter-0.1-Slim-7B-C-DPO", "usedStorage": 28967692696}
|
---
license: apache-2.0
---
## Matter 7B (Mistral 7B Finetune) - DPO
Matter 7B is a continued full fine-tune on the [slim-C version of the Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1-Slim-C), which is curated from over 35 datasets totaling >6B tokens.
### Training
The full fine-tune took ~17 hours for 3 epochs on 4x A100s (80GB) using Axolotl.
Prompt format: This model uses the ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
### Function Calling
The model also supports function calling, with additional special tokens.
Model function call tokens:
- `<|begin_func|>` - Function call start token
- `<|end_func|>` - Function call end token
Function call response tokens:
- `<|begin_func_response|>` - Function response start token
- `<|end_func_response|>` - Function response end token
Example:
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
|
0-hero/Matter-0.1-Slim-7B-preview
|
{"id": "0-hero/Matter-0.1-Slim-7B-preview", "author": "0-hero", "sha": "22cbc27ebcd20570adf75287115c79b2140c5b12", "last_modified": "2024-03-17 07:02:21+00:00", "created_at": "2024-03-15 20:11:50+00:00", "private": false, "gated": false, "disabled": false, "downloads": 69, "downloads_all_time": null, "likes": 0, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": "license: apache-2.0", "widget_data": [{"text": "Hi, what can you help me with?"}, {"text": "What is 84 * 3 / 2?"}, {"text": "Tell me an interesting fact about the universe!"}, {"text": "Explain quantum computing in simple terms."}], "model_index": null, "config": {"architectures": ["MistralForCausalLM"], "model_type": "mistral", "tokenizer_config": {"bos_token": "<s>", "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = 'You are a helpful assistant.' %}{% endif %}{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in loop_messages %}{% if loop.index0 == 0 %}{{'<|im_start|>system\n' + system_message + '<|im_end|>\n'}}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", "eos_token": "<|im_end|>", "pad_token": "</s>", "unk_token": "<unk>", "use_default_system_prompt": false}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='added_tokens.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00001-of-00003.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00002-of-00003.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00003-of-00003.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors.index.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.model', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"BF16": 7241781248}, "total": 7241781248}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-03-17 07:02:21+00:00", "cardData": "license: apache-2.0", "transformersInfo": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "_id": "65f4ab866bd4bac5b6a4d7b1", "modelId": "0-hero/Matter-0.1-Slim-7B-preview", "usedStorage": 57934878723}
|
---
license: apache-2.0
---
## Matter 7B (Mistral 7B Finetune)
Matter 7B is a continued full fine-tune on the [slim-D version of the Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1-Slim-D), which is curated from over 35 datasets totaling >6B tokens.
### Training
The full fine-tune took ~17 hours for 3 epochs on 4x A100s (80GB) using Axolotl.
Prompt format: This model uses the ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
### Function Calling
The model also supports function calling, with additional special tokens.
Model function call tokens:
- `<|begin_func|>` - Function call start token
- `<|end_func|>` - Function call end token
Function call response tokens:
- `<|begin_func_response|>` - Function response start token
- `<|end_func_response|>` - Function response end token
Example:
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
|
0-hero/Matter-0.2-32B
|
{"id": "0-hero/Matter-0.2-32B", "author": "0-hero", "sha": "2ac6c856b96c25a4c81917288f795e8c7de35a32", "last_modified": "2024-04-15 08:52:21+00:00", "created_at": "2024-04-15 04:35:10+00:00", "private": false, "gated": false, "disabled": false, "downloads": 1601, "downloads_all_time": null, "likes": 3, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "safetensors", "qwen2", "text-generation", "conversational", "en", "dataset:0-hero/Matter-0.2-alpha-Slim-A", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": "datasets:\n- 0-hero/Matter-0.2-alpha-Slim-A\nlanguage:\n- en\nlicense: apache-2.0", "widget_data": [{"text": "Hi, what can you help me with?"}, {"text": "What is 84 * 3 / 2?"}, {"text": "Tell me an interesting fact about the universe!"}, {"text": "Explain quantum computing in simple terms."}], "model_index": null, "config": {"architectures": ["Qwen2ForCausalLM"], "model_type": "qwen2", "tokenizer_config": {"bos_token": "<s>", "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", "eos_token": "<|im_end|>", "pad_token": "<|endoftext|>", "unk_token": "<unk>"}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='added_tokens.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='merges.txt', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00001-of-00014.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00002-of-00014.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00003-of-00014.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00004-of-00014.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00005-of-00014.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00006-of-00014.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00007-of-00014.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00008-of-00014.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00009-of-00014.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00010-of-00014.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00011-of-00014.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00012-of-00014.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00013-of-00014.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00014-of-00014.safetensors', 
size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors.index.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00001-of-00014.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00002-of-00014.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00003-of-00014.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00004-of-00014.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00005-of-00014.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00006-of-00014.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00007-of-00014.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00008-of-00014.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00009-of-00014.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00010-of-00014.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00011-of-00014.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00012-of-00014.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00013-of-00014.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00014-of-00014.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin.index.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='vocab.json', size=None, blob_id=None, lfs=None)"], "spaces": ["Darok/Featherless-Feud"], "safetensors": {"parameters": {"BF16": 32512218112}, "total": 32512218112}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-04-15 08:52:21+00:00", "cardData": "datasets:\n- 0-hero/Matter-0.2-alpha-Slim-A\nlanguage:\n- en\nlicense: apache-2.0", "transformersInfo": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "_id": "661cae7e330621410495e164", "modelId": "0-hero/Matter-0.2-32B", "usedStorage": 130049228358}
|
---
license: apache-2.0
datasets:
- 0-hero/Matter-0.2-alpha-Slim-A
language:
- en
---
## Matter 32B - 0.2 (Qwen 32B 0.2 Finetune)
Matter 32B 0.2 is finetuned on the **Matter 0.2 dataset**, which is curated from over 35 datasets analyzing >6B tokens.
### Recommended Usage
For best results, use the system prompt `You are a helpful assistant`.
### Training
Prompt format: This model uses the ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
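Since this chat template ships in the repo's `tokenizer_config.json`, the prompt above can be built programmatically rather than by hand. A minimal sketch, assuming the `transformers` library (untested against this checkpoint):
```
# Minimal sketch: render the ChatML prompt with the bundled chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("0-hero/Matter-0.2-32B")

messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "What is 84 * 3 / 2?"},
]

# add_generation_prompt=True appends the trailing '<|im_start|>assistant\n'
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```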
### Function Calling
The model also supports function calling via additional special tokens.
Function call tokens:
- `<tool_call>` - Function call start token
- `</tool_call>` - Function call end token
Function response tokens:
- `<tool_response>` - Function response start token
- `</tool_response>` - Function response end token
Example:
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<tool_call>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}</tool_call><|im_end|>
<|im_start|>user
<tool_response>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}</tool_response>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
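Before the named function can be executed, the `<tool_call>` span has to be extracted from the generated text. A minimal parsing sketch; the regex and helper are assumptions, not part of the model release:
```
# Minimal sketch: pull the function call out of a generation that uses
# the <tool_call> ... </tool_call> tokens. The helper is an assumption.
import json
import re

TOOL_CALL_RE = re.compile(r"<tool_call>(.*?)</tool_call>", re.DOTALL)

def parse_tool_call(generated_text: str):
    """Return the call as a dict, or None if the model answered directly."""
    match = TOOL_CALL_RE.search(generated_text)
    if match is None:
        return None
    # Note: in the transcript above the "arguments" value is wrapped in
    # single quotes; real outputs may need that quirk normalized before
    # json.loads succeeds.
    return json.loads(match.group(1))

call = parse_tool_call(
    '<tool_call>{"name": "get_news_headlines", "arguments": {"country": "US"}}</tool_call>'
)
if call is not None:
    print(call["name"], call["arguments"])
```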
|
0-hero/Matter-0.2-7B
|
{"id": "0-hero/Matter-0.2-7B", "author": "0-hero", "sha": "99a106b63a6c6e51ae062cc7b3b19936b1dae80b", "last_modified": "2024-04-03 18:34:12+00:00", "created_at": "2024-04-02 11:04:47+00:00", "private": false, "gated": false, "disabled": false, "downloads": 20, "downloads_all_time": null, "likes": 0, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:0-hero/Matter-0.2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": "datasets:\n- 0-hero/Matter-0.2\nlanguage:\n- en\nlicense: apache-2.0", "widget_data": [{"text": "Hi, what can you help me with?"}, {"text": "What is 84 * 3 / 2?"}, {"text": "Tell me an interesting fact about the universe!"}, {"text": "Explain quantum computing in simple terms."}], "model_index": null, "config": {"architectures": ["MistralForCausalLM"], "model_type": "mistral", "tokenizer_config": {"bos_token": "<s>", "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", "eos_token": "<|im_end|>", "pad_token": "</s>", "unk_token": "<unk>", "use_default_system_prompt": false}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='added_tokens.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.model', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"BF16": 7241781248}, "total": 7241781248}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-04-03 18:34:12+00:00", "cardData": "datasets:\n- 0-hero/Matter-0.2\nlanguage:\n- en\nlicense: apache-2.0", "transformersInfo": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "_id": "660be64ffb218d395af889d5", "modelId": "0-hero/Matter-0.2-7B", "usedStorage": 28967692696}
|
---
license: apache-2.0
datasets:
- 0-hero/Matter-0.2
language:
- en
---
## Matter 7B - 0.2 (Mistral 7B 0.2 Finetune)
Matter 7B 0.2 is a full finetune on the **Matter 0.2 dataset** (not yet released), which is curated from over 35 datasets analyzing >6B tokens.
### Recommended Usage
For best results, use the system prompt `You are a helpful assistant`.
### Training
Prompt format: This model uses the ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
### Function Calling
The model also supports function calling via additional special tokens.
Function call tokens:
- `<tool_call>` - Function call start token
- `</tool_call>` - Function call end token
Function response tokens:
- `<tool_response>` - Function response start token
- `</tool_response>` - Function response end token
Example:
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<tool_call>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}</tool_call><|im_end|>
<|im_start|>user
<tool_response>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}</tool_response>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
|
0-hero/Matter-0.2-7B-DPO
|
{"id": "0-hero/Matter-0.2-7B-DPO", "author": "0-hero", "sha": "9b6ada7e55857cb743911190e178801c00faa171", "last_modified": "2024-08-09 16:55:00+00:00", "created_at": "2024-04-13 12:24:15+00:00", "private": false, "gated": false, "disabled": false, "downloads": 30, "downloads_all_time": null, "likes": 3, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:0-hero/Matter-0.2-alpha", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": "datasets:\n- 0-hero/Matter-0.2-alpha\nlanguage:\n- en\nlicense: apache-2.0\nmodel-index:\n- name: Matter-0.2-7B-DPO\n results:\n - task:\n type: text-generation\n name: Text Generation\n dataset:\n name: IFEval (0-Shot)\n type: HuggingFaceH4/ifeval\n args:\n num_few_shot: 0\n metrics:\n - type: inst_level_strict_acc and prompt_level_strict_acc\n value: 33.03\n name: strict accuracy\n verified: false\n source:\n url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO\n name: Open LLM Leaderboard\n - task:\n type: text-generation\n name: Text Generation\n dataset:\n name: BBH (3-Shot)\n type: BBH\n args:\n num_few_shot: 3\n metrics:\n - type: acc_norm\n value: 10.06\n name: normalized accuracy\n verified: false\n source:\n url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO\n name: Open LLM Leaderboard\n - task:\n type: text-generation\n name: Text Generation\n dataset:\n name: MATH Lvl 5 (4-Shot)\n type: hendrycks/competition_math\n args:\n num_few_shot: 4\n metrics:\n - type: exact_match\n value: 0.83\n name: exact match\n verified: false\n source:\n url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO\n name: Open LLM Leaderboard\n - task:\n type: text-generation\n name: Text Generation\n dataset:\n name: GPQA (0-shot)\n type: Idavidrein/gpqa\n args:\n num_few_shot: 0\n metrics:\n - type: acc_norm\n value: 1.23\n name: acc_norm\n verified: false\n source:\n url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO\n name: Open LLM Leaderboard\n - task:\n type: text-generation\n name: Text Generation\n dataset:\n name: MuSR (0-shot)\n type: TAUR-Lab/MuSR\n args:\n num_few_shot: 0\n metrics:\n - type: acc_norm\n value: 5.87\n name: acc_norm\n verified: false\n source:\n url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO\n name: Open LLM Leaderboard\n - task:\n type: text-generation\n name: Text Generation\n dataset:\n name: MMLU-PRO (5-shot)\n type: TIGER-Lab/MMLU-Pro\n config: main\n split: test\n args:\n num_few_shot: 5\n metrics:\n - type: acc\n value: 1.82\n name: accuracy\n verified: false\n source:\n url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO\n name: Open LLM Leaderboard", "widget_data": [{"text": "Hi, what can you help me with?"}, {"text": "What is 84 * 3 / 2?"}, {"text": "Tell me an interesting fact about the universe!"}, {"text": "Explain quantum computing in simple terms."}], "model_index": [{"name": "Matter-0.2-7B-DPO", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, 
"dataset": {"name": "IFEval (0-Shot)", "type": "HuggingFaceH4/ifeval", "args": {"num_few_shot": 0}}, "metrics": [{"type": "inst_level_strict_acc and prompt_level_strict_acc", "value": 33.03, "name": "strict accuracy", "verified": false}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "BBH (3-Shot)", "type": "BBH", "args": {"num_few_shot": 3}}, "metrics": [{"type": "acc_norm", "value": 10.06, "name": "normalized accuracy", "verified": false}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MATH Lvl 5 (4-Shot)", "type": "hendrycks/competition_math", "args": {"num_few_shot": 4}}, "metrics": [{"type": "exact_match", "value": 0.83, "name": "exact match", "verified": false}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GPQA (0-shot)", "type": "Idavidrein/gpqa", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc_norm", "value": 1.23, "name": "acc_norm", "verified": false}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MuSR (0-shot)", "type": "TAUR-Lab/MuSR", "args": {"num_few_shot": 0}}, "metrics": [{"type": "acc_norm", "value": 5.87, "name": "acc_norm", "verified": false}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU-PRO (5-shot)", "type": "TIGER-Lab/MMLU-Pro", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 1.82, "name": "accuracy", "verified": false}], "source": {"url": "https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO", "name": "Open LLM Leaderboard"}}]}], "config": {"architectures": ["MistralForCausalLM"], "model_type": "mistral", "tokenizer_config": {"bos_token": "<s>", "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = 'You are a helpful assistant.' 
%}{% endif %}{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in loop_messages %}{% if loop.index0 == 0 %}{{'<|im_start|>system\n' + system_message + '<|im_end|>\n'}}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", "eos_token": "<|im_end|>", "pad_token": "</s>", "unk_token": "<unk>", "use_default_system_prompt": false}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='added_tokens.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.model', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": ["logikon/open_cot_leaderboard", "cot-leaderboard/open-cot-dashboard"], "safetensors": {"parameters": {"BF16": 7241781248}, "total": 7241781248}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-08-09 16:55:00+00:00", "cardData": "datasets:\n- 0-hero/Matter-0.2-alpha\nlanguage:\n- en\nlicense: apache-2.0\nmodel-index:\n- name: Matter-0.2-7B-DPO\n results:\n - task:\n type: text-generation\n name: Text Generation\n dataset:\n name: IFEval (0-Shot)\n type: HuggingFaceH4/ifeval\n args:\n num_few_shot: 0\n metrics:\n - type: inst_level_strict_acc and prompt_level_strict_acc\n value: 33.03\n name: strict accuracy\n verified: false\n source:\n url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO\n name: Open LLM Leaderboard\n - task:\n type: text-generation\n name: Text Generation\n dataset:\n name: BBH (3-Shot)\n type: BBH\n args:\n num_few_shot: 3\n metrics:\n - type: acc_norm\n value: 10.06\n name: normalized accuracy\n verified: false\n source:\n url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO\n name: Open LLM Leaderboard\n - task:\n type: text-generation\n name: Text Generation\n dataset:\n name: MATH Lvl 5 (4-Shot)\n type: hendrycks/competition_math\n args:\n num_few_shot: 4\n metrics:\n - type: exact_match\n value: 0.83\n name: exact match\n verified: false\n source:\n url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO\n name: Open LLM Leaderboard\n - task:\n type: text-generation\n name: Text Generation\n dataset:\n name: GPQA (0-shot)\n type: Idavidrein/gpqa\n args:\n num_few_shot: 0\n metrics:\n - type: acc_norm\n value: 1.23\n name: acc_norm\n verified: false\n source:\n url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO\n name: Open LLM Leaderboard\n - task:\n type: text-generation\n name: Text Generation\n dataset:\n name: MuSR 
(0-shot)\n type: TAUR-Lab/MuSR\n args:\n num_few_shot: 0\n metrics:\n - type: acc_norm\n value: 5.87\n name: acc_norm\n verified: false\n source:\n url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO\n name: Open LLM Leaderboard\n - task:\n type: text-generation\n name: Text Generation\n dataset:\n name: MMLU-PRO (5-shot)\n type: TIGER-Lab/MMLU-Pro\n config: main\n split: test\n args:\n num_few_shot: 5\n metrics:\n - type: acc\n value: 1.82\n name: accuracy\n verified: false\n source:\n url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO\n name: Open LLM Leaderboard", "transformersInfo": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "_id": "661a796f8328f43c6a9192a9", "modelId": "0-hero/Matter-0.2-7B-DPO", "usedStorage": 28967692696}
|
---
language:
- en
license: apache-2.0
datasets:
- 0-hero/Matter-0.2-alpha
model-index:
- name: Matter-0.2-7B-DPO
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 33.03
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 10.06
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 0.83
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.23
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.87
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.82
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=0-hero/Matter-0.2-7B-DPO
name: Open LLM Leaderboard
---
## Matter 7B - 0.2 - DPO (Mistral 7B Finetune)
A DPO version of [Matter 7B](https://huggingface.co/0-hero/Matter-0.2-7B), fine-tuned on the [Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.2-alpha), which is curated from over 35 datasets analyzing >6B tokens.
### Training
Prompt format: This model uses the ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
### Function Calling
The model also supports function calling via additional special tokens.
Function call tokens:
- `<|begin_func|>` - Function call start token
- `<|end_func|>` - Function call end token
Function response tokens:
- `<|begin_func_response|>` - Function response start token
- `<|end_func_response|>` - Function response end token
Example:
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
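After running the requested function locally, its result goes back to the model as a user turn wrapped in the response tokens, mirroring the transcript above. A minimal sketch; the helper name and message layout are assumptions:
```
# Minimal sketch: wrap a local function result in the
# <|begin_func_response|> ... <|end_func_response|> tokens as a user turn.
# The helper name is an assumption, not part of the model release.
import json

def make_func_response_turn(result: dict) -> dict:
    body = json.dumps(result, indent=2)
    return {
        "role": "user",
        "content": f"<|begin_func_response|>{body}<|end_func_response|>",
    }

turn = make_func_response_turn(
    {"headlines": ["Biden announces new vaccine mandates"]}
)
print(turn["content"])
```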
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_0-hero__Matter-0.2-7B-DPO).
| Metric |Value|
|-------------------|----:|
|Avg. | 8.81|
|IFEval (0-Shot) |33.03|
|BBH (3-Shot) |10.06|
|MATH Lvl 5 (4-Shot)| 0.83|
|GPQA (0-shot) | 1.23|
|MuSR (0-shot) | 5.87|
|MMLU-PRO (5-shot) | 1.82|
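The average row is simply the arithmetic mean of the six benchmark scores, which can be confirmed in one line:
```
# Mean of the six leaderboard scores listed above.
scores = [33.03, 10.06, 0.83, 1.23, 5.87, 1.82]
print(round(sum(scores) / len(scores), 2))  # 8.81
```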
|
0-hero/Matter-0.2-8x22B
|
{"id": "0-hero/Matter-0.2-8x22B", "author": "0-hero", "sha": "284f99f77283ab193c9d3996973c7fdb94424235", "last_modified": "2024-04-14 17:50:48+00:00", "created_at": "2024-04-14 13:20:56+00:00", "private": false, "gated": false, "disabled": false, "downloads": 28, "downloads_all_time": null, "likes": 1, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "safetensors", "mixtral", "text-generation", "conversational", "en", "dataset:0-hero/Matter-0.2-alpha-Slim-A", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": "datasets:\n- 0-hero/Matter-0.2-alpha-Slim-A\nlanguage:\n- en\nlicense: apache-2.0", "widget_data": [{"text": "Hi, what can you help me with?"}, {"text": "What is 84 * 3 / 2?"}, {"text": "Tell me an interesting fact about the universe!"}, {"text": "Explain quantum computing in simple terms."}], "model_index": null, "config": {"architectures": ["MixtralForCausalLM"], "model_type": "mixtral", "tokenizer_config": {"bos_token": "<s>", "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", "eos_token": "<|im_end|>", "pad_token": "</s>", "unk_token": "<unk>", "use_default_system_prompt": false}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='added_tokens.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00001-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00002-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00003-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00004-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00005-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00006-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00007-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00008-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00009-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00010-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00011-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00012-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00013-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00014-of-00059.safetensors', size=None, blob_id=None, lfs=None)", 
"RepoSibling(rfilename='model-00015-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00016-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00017-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00018-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00019-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00020-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00021-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00022-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00023-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00024-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00025-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00026-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00027-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00028-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00029-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00030-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00031-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00032-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00033-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00034-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00035-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00036-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00037-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00038-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00039-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00040-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00041-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00042-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00043-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00044-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00045-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00046-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00047-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00048-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00049-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00050-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00051-of-00059.safetensors', size=None, blob_id=None, lfs=None)", 
"RepoSibling(rfilename='model-00052-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00053-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00054-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00055-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00056-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00057-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00058-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00059-of-00059.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors.index.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00001-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00002-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00003-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00004-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00005-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00006-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00007-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00008-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00009-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00010-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00011-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00012-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00013-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00014-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00015-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00016-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00017-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00018-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00019-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00020-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00021-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00022-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00023-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00024-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00025-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00026-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00027-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00028-of-00059.bin', size=None, blob_id=None, lfs=None)", 
"RepoSibling(rfilename='pytorch_model-00029-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00030-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00031-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00032-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00033-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00034-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00035-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00036-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00037-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00038-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00039-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00040-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00041-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00042-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00043-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00044-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00045-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00046-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00047-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00048-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00049-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00050-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00051-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00052-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00053-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00054-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00055-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00056-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00057-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00058-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00059-of-00059.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin.index.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.model', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"BF16": 140620707840}, "total": 140620707840}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-04-14 17:50:48+00:00", "cardData": "datasets:\n- 
0-hero/Matter-0.2-alpha-Slim-A\nlanguage:\n- en\nlicense: apache-2.0", "transformersInfo": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "_id": "661bd8387692f1c3cdde2417", "modelId": "0-hero/Matter-0.2-8x22B", "usedStorage": 562484156809}
|
---
license: apache-2.0
datasets:
- 0-hero/Matter-0.2-alpha-Slim-A
language:
- en
---
## Matter 8x22B - 0.2 (Mixtral 8x22B 0.2 Finetune)
Matter 8x22B 0.2 is finetuned on the **Matter 0.2 dataset**, which is curated from over 35 datasets analyzing >6B tokens.
### Recommended Usage
For best results, use the system prompt `You are a helpful assistant`.
### Training
Prompt format: This model uses the ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
### Function Calling
The model also supports function calling via additional special tokens.
Function call tokens:
- `<tool_call>` - Function call start token
- `</tool_call>` - Function call end token
Function response tokens:
- `<tool_response>` - Function response start token
- `</tool_response>` - Function response end token
Example:
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<tool_call>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}</tool_call><|im_end|>
<|im_start|>user
<tool_response>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}</tool_response>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
|
0-hero/R1-7B-MATH-GRPO-FULL-AWQ
|
{"id": "0-hero/R1-7B-MATH-GRPO-FULL-AWQ", "author": "0-hero", "sha": "4a7a0643fd22133288c300c6c3a8d27b0c06db53", "last_modified": "2025-03-09 11:18:31+00:00", "created_at": "2025-03-09 11:11:32+00:00", "private": false, "gated": false, "disabled": false, "downloads": 10, "downloads_all_time": null, "likes": 0, "library_name": null, "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["safetensors", "qwen2", "4-bit", "awq", "region:us"], "pipeline_tag": null, "mask_token": null, "trending_score": null, "card_data": null, "widget_data": null, "model_index": null, "config": {"architectures": ["Qwen2ForCausalLM"], "model_type": "qwen2", "quantization_config": {"bits": 4, "quant_method": "awq"}, "tokenizer_config": {"bos_token": "<\uff5cbegin\u2581of\u2581sentence\uff5c>", "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='') %}{%- for message in messages %}{%- if message['role'] == 'system' %}{% set ns.system_prompt = message['content'] %}{%- endif %}{%- endfor %}{{bos_token}}{{ns.system_prompt}}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<\uff5cUser\uff5c>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is none %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls']%}{%- if not ns.is_first %}{{'<\uff5cAssistant\uff5c><\uff5ctool\u2581calls\u2581begin\uff5c><\uff5ctool\u2581call\u2581begin\uff5c>' + tool['type'] + '<\uff5ctool\u2581sep\uff5c>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<\uff5ctool\u2581call\u2581end\uff5c>'}}{%- set ns.is_first = true -%}{%- else %}{{'\\n' + '<\uff5ctool\u2581call\u2581begin\uff5c>' + tool['type'] + '<\uff5ctool\u2581sep\uff5c>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<\uff5ctool\u2581call\u2581end\uff5c>'}}{{'<\uff5ctool\u2581calls\u2581end\uff5c><\uff5cend\u2581of\u2581sentence\uff5c>'}}{%- endif %}{%- endfor %}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is not none %}{%- if ns.is_tool %}{{'<\uff5ctool\u2581outputs\u2581end\uff5c>' + message['content'] + '<\uff5cend\u2581of\u2581sentence\uff5c>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<\uff5cAssistant\uff5c>' + content + '<\uff5cend\u2581of\u2581sentence\uff5c>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<\uff5ctool\u2581outputs\u2581begin\uff5c><\uff5ctool\u2581output\u2581begin\uff5c>' + message['content'] + '<\uff5ctool\u2581output\u2581end\uff5c>'}}{%- set ns.is_output_first = false %}{%- else %}{{'\\n<\uff5ctool\u2581output\u2581begin\uff5c>' + message['content'] + '<\uff5ctool\u2581output\u2581end\uff5c>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<\uff5ctool\u2581outputs\u2581end\uff5c>'}}{% endif %}{% if add_generation_prompt and not ns.is_tool %}{{'<\uff5cAssistant\uff5c><think>\\n'}}{% endif %}", "eos_token": "<\uff5cend\u2581of\u2581sentence\uff5c>", "pad_token": "<\uff5cend\u2581of\u2581sentence\uff5c>", "unk_token": null, "use_default_system_prompt": false}, "additional_chat_templates": {}}, "transformers_info": null, "siblings": 
["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00001-of-00002.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00002-of-00002.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors.index.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"I32": 822033408, "BF16": 1090199040, "F16": 51107840}, "total": 1963340288}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2025-03-09 11:18:31+00:00", "cardData": null, "transformersInfo": null, "_id": "67cd776463e85ff7d39264d8", "modelId": "0-hero/R1-7B-MATH-GRPO-FULL-AWQ", "usedStorage": 5582252714}
|
N/A
|
0-hero/fineweb-edu-100BT-GPT2-tokenized
|
{"id": "0-hero/fineweb-edu-100BT-GPT2-tokenized", "author": "0-hero", "sha": "f34a70e86fa2312b2d9cd6671cce3f4290739860", "last_modified": "2024-09-24 21:34:00+00:00", "created_at": "2024-09-24 06:11:31+00:00", "private": false, "gated": false, "disabled": false, "downloads": 0, "downloads_all_time": null, "likes": 0, "library_name": null, "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["region:us"], "pipeline_tag": null, "mask_token": null, "trending_score": null, "card_data": null, "widget_data": null, "model_index": null, "config": null, "transformers_info": null, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='meta.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='train_001.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='train_002.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='train_003.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='train_004.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='train_005.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='train_006.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='train_007.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='val.bin', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": null, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-09-24 21:34:00+00:00", "cardData": null, "transformersInfo": null, "_id": "66f258137967278698c64ca7", "modelId": "0-hero/fineweb-edu-100BT-GPT2-tokenized", "usedStorage": 200332930142}
|
N/A
|
0-hero/fineweb-edu-10BT-GPT2-tokenized
|
{"id": "0-hero/fineweb-edu-10BT-GPT2-tokenized", "author": "0-hero", "sha": "71124a55b342b3e3e6c73f5e918b57d143b4cf62", "last_modified": "2024-09-21 19:16:44+00:00", "created_at": "2024-09-21 19:10:13+00:00", "private": false, "gated": false, "disabled": false, "downloads": 0, "downloads_all_time": null, "likes": 0, "library_name": null, "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["region:us"], "pipeline_tag": null, "mask_token": null, "trending_score": null, "card_data": null, "widget_data": null, "model_index": null, "config": null, "transformers_info": null, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='meta.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='train.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='val.bin', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": null, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-09-21 19:16:44+00:00", "cardData": null, "transformersInfo": null, "_id": "66ef1a15ce6e5db9b3272945", "modelId": "0-hero/fineweb-edu-10BT-GPT2-tokenized", "usedStorage": 19907978688}
|
N/A
|
0-hero/flan-OIG-base
|
{"id": "0-hero/flan-OIG-base", "author": "0-hero", "sha": "b5848ebf64ac335711009361a1c726ce8eda3ab8", "last_modified": "2023-04-13 04:44:03+00:00", "created_at": "2023-03-30 18:20:35+00:00", "private": false, "gated": false, "disabled": false, "downloads": 69, "downloads_all_time": null, "likes": 1, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": null, "widget_data": [{"text": "My name is Julien and I like to"}, {"text": "I like traveling by train because"}, {"text": "Paris is an amazing place to visit,"}, {"text": "Once upon a time,"}], "model_index": null, "config": {"architectures": ["T5ForConditionalGeneration"], "model_type": "t5", "tokenizer_config": {"eos_token": "</s>", "pad_token": "<pad>", "unk_token": "<unk>"}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForSeq2SeqLM", "custom_class": null, "pipeline_tag": "text2text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='spiece.model', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"F32": 247577856}, "total": 247577856}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2023-04-13 04:44:03+00:00", "cardData": null, "transformersInfo": {"auto_model": "AutoModelForSeq2SeqLM", "custom_class": null, "pipeline_tag": "text2text-generation", "processor": "AutoTokenizer"}, "_id": "6425d2f3ce640161f292ab18", "modelId": "0-hero/flan-OIG-base", "usedStorage": 1981541639}
|
N/A
|
0-hero/flan-OIG-small
|
{"id": "0-hero/flan-OIG-small", "author": "0-hero", "sha": "d54fa8986b081bcf022d6355249c84fbf5e214df", "last_modified": "2023-04-13 04:43:18+00:00", "created_at": "2023-03-30 17:05:56+00:00", "private": false, "gated": false, "disabled": false, "downloads": 22, "downloads_all_time": null, "likes": 0, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": null, "widget_data": [{"text": "My name is Julien and I like to"}, {"text": "I like traveling by train because"}, {"text": "Paris is an amazing place to visit,"}, {"text": "Once upon a time,"}], "model_index": null, "config": {"architectures": ["T5ForConditionalGeneration"], "model_type": "t5", "tokenizer_config": {"eos_token": "</s>", "pad_token": "<pad>", "unk_token": "<unk>"}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForSeq2SeqLM", "custom_class": null, "pipeline_tag": "text2text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='spiece.model', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"F32": 76961152}, "total": 76961152}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2023-04-13 04:43:18+00:00", "cardData": null, "transformersInfo": {"auto_model": "AutoModelForSeq2SeqLM", "custom_class": null, "pipeline_tag": "text2text-generation", "processor": "AutoTokenizer"}, "_id": "6425c1741454fbdb60183f91", "modelId": "0-hero/flan-OIG-small", "usedStorage": 616566163}
|
N/A
|
0-hero/flan-OIG-ul2
|
{"id": "0-hero/flan-OIG-ul2", "author": "0-hero", "sha": "44c581289e9237c11e2805f943884df716ec9f0e", "last_modified": "2023-03-30 17:16:41+00:00", "created_at": "2023-03-30 16:46:43+00:00", "private": false, "gated": false, "disabled": false, "downloads": 25, "downloads_all_time": null, "likes": 1, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": null, "widget_data": [{"text": "My name is Julien and I like to"}, {"text": "I like traveling by train because"}, {"text": "Paris is an amazing place to visit,"}, {"text": "Once upon a time,"}], "model_index": null, "config": {"architectures": ["T5ForConditionalGeneration"], "model_type": "t5", "tokenizer_config": {"eos_token": "</s>", "pad_token": "<pad>", "unk_token": "<unk>"}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForSeq2SeqLM", "custom_class": null, "pipeline_tag": "text2text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00001-of-00008.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00002-of-00008.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00003-of-00008.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00004-of-00008.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00005-of-00008.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00006-of-00008.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00007-of-00008.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00008-of-00008.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin.index.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": null, "security_repo_status": null, "xet_enabled": null, "lastModified": "2023-03-30 17:16:41+00:00", "cardData": null, "transformersInfo": {"auto_model": "AutoModelForSeq2SeqLM", "custom_class": null, "pipeline_tag": "text2text-generation", "processor": "AutoTokenizer"}, "_id": "6425bcf3ce640161f291ff9a", "modelId": "0-hero/flan-OIG-ul2", "usedStorage": 78891486253}
|
N/A
|
0-hero/flan-OIG-xl
|
{"id": "0-hero/flan-OIG-xl", "author": "0-hero", "sha": "9976eeefd83edf1546994e5941a3848a3394ca97", "last_modified": "2023-03-30 21:47:45+00:00", "created_at": "2023-03-30 21:25:34+00:00", "private": false, "gated": false, "disabled": false, "downloads": 22, "downloads_all_time": null, "likes": 1, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": null, "widget_data": [{"text": "My name is Julien and I like to"}, {"text": "I like traveling by train because"}, {"text": "Paris is an amazing place to visit,"}, {"text": "Once upon a time,"}], "model_index": null, "config": {"architectures": ["T5ForConditionalGeneration"], "model_type": "t5", "tokenizer_config": {"eos_token": "</s>", "pad_token": "<pad>", "unk_token": "<unk>"}}, "transformers_info": {"auto_model": "AutoModelForSeq2SeqLM", "custom_class": null, "pipeline_tag": "text2text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00001-of-00002.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00002-of-00002.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin.index.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": null, "security_repo_status": null, "xet_enabled": null, "lastModified": "2023-03-30 21:47:45+00:00", "cardData": null, "transformersInfo": {"auto_model": "AutoModelForSeq2SeqLM", "custom_class": null, "pipeline_tag": "text2text-generation", "processor": "AutoTokenizer"}, "_id": "6425fe4e03e27cca35e4a53c", "modelId": "0-hero/flan-OIG-xl", "usedStorage": 22798323242}
|
N/A
|
0-hero/flan-alpaca-ul2
|
{"id": "0-hero/flan-alpaca-ul2", "author": "0-hero", "sha": "91a03bdacaf6adb86f55bfdd5644eba95673ab29", "last_modified": "2023-03-30 11:59:23+00:00", "created_at": "2023-03-30 11:11:03+00:00", "private": false, "gated": false, "disabled": false, "downloads": 22, "downloads_all_time": null, "likes": 4, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "t5", "text2text-generation", "dataset:tatsu-lab/alpaca", "arxiv:2210.11416", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": "datasets:\n- tatsu-lab/alpaca\nlicense: apache-2.0", "widget_data": [{"text": "My name is Julien and I like to"}, {"text": "I like traveling by train because"}, {"text": "Paris is an amazing place to visit,"}, {"text": "Once upon a time,"}], "model_index": null, "config": {"architectures": ["T5ForConditionalGeneration"], "model_type": "t5", "tokenizer_config": {"eos_token": "</s>", "pad_token": "<pad>", "unk_token": "<unk>"}}, "transformers_info": {"auto_model": "AutoModelForSeq2SeqLM", "custom_class": null, "pipeline_tag": "text2text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00001-of-00008.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00002-of-00008.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00003-of-00008.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00004-of-00008.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00005-of-00008.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00006-of-00008.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00007-of-00008.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model-00008-of-00008.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin.index.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": null, "security_repo_status": null, "xet_enabled": null, "lastModified": "2023-03-30 11:59:23+00:00", "cardData": "datasets:\n- tatsu-lab/alpaca\nlicense: apache-2.0", "transformersInfo": {"auto_model": "AutoModelForSeq2SeqLM", "custom_class": null, "pipeline_tag": "text2text-generation", "processor": "AutoTokenizer"}, "_id": "64256e477bccde5588b7a00e", "modelId": "0-hero/flan-alpaca-ul2", "usedStorage": 157782803557}
|
---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
---
## 🍮 🦙 Flan-Alpaca: Instruction Tuning from Humans and Machines
Thanks to [declare-lab](https://huggingface.co/declare-lab) for the training [repository](https://github.com/declare-lab/flan-alpaca), which contains code for extending the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
synthetic instruction tuning to existing instruction-tuned models such as [Flan-T5](https://arxiv.org/abs/2210.11416).
The pretrained models and demos are available on Hugging Face 🤗:
| Model | Parameters | Training GPUs |
|---------------------------------------------------------------------------|------------|-----------------|
| [Flan-Alpaca-Base](https://huggingface.co/declare-lab/flan-alpaca-base) | 220M | 1x A6000 |
| [Flan-Alpaca-Large](https://huggingface.co/declare-lab/flan-alpaca-large) | 770M | 1x A6000 |
| [Flan-Alpaca-XL](https://huggingface.co/declare-lab/flan-alpaca-xl) | 3B | 1x A6000 |
| [Flan-Alpaca-XXL](https://huggingface.co/declare-lab/flan-alpaca-xxl) | 11B | 4x A6000 (FSDP) |
| [Flan-Alpaca-UL2](https://huggingface.co/0-hero/flan-alpaca-ul2) | 20B | 4x A100 (80G) (FSDP) |
### Why?
[Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html) represents an exciting new direction
to approximate the performance of large language models (LLMs) like ChatGPT cheaply and easily.
Concretely, they leverage an LLM such as GPT-3 to generate instructions as synthetic training data.
The synthetic data, which covers more than 50k tasks, can then be used to finetune a smaller model.
However, the original implementation is less accessible due to licensing constraints of the
underlying [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) model.
Furthermore, users have noted [potential noise](https://github.com/tloen/alpaca-lora/issues/65) in the synthetic
dataset. Hence, it may be better to explore a fully accessible model that is already trained on high-quality (but
less diverse) instructions such as [Flan-T5](https://arxiv.org/abs/2210.11416).
### Usage
```
from transformers import pipeline

# The task (text2text-generation) is inferred from the model config.
prompt = "Write an email about an alpaca that likes flan"
model = pipeline(model="0-hero/flan-alpaca-ul2")
model(prompt, max_length=128, do_sample=True)
```
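Since this is a T5-family seq2seq model, it can also be loaded explicitly (a minimal sketch, not from the original card; the generation settings are illustrative):
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("0-hero/flan-alpaca-ul2")
model = AutoModelForSeq2SeqLM.from_pretrained("0-hero/flan-alpaca-ul2")

# Encode the prompt, sample a completion, and decode it back to text.
inputs = tokenizer("Write an email about an alpaca that likes flan", return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```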
README forked from declare-lab/flan-alpaca-xxl.
|
0-hero/led-large-legal-summary
|
{"id": "0-hero/led-large-legal-summary", "author": "0-hero", "sha": "6a2a61bd597bb447fa6d915ec0d5dfb315e67c46", "last_modified": "2023-06-26 10:41:44+00:00", "created_at": "2022-11-28 20:34:35+00:00", "private": false, "gated": false, "disabled": false, "downloads": 27, "downloads_all_time": null, "likes": 2, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "safetensors", "autotrain", "summarization", "unk", "dataset:0-hero/autotrain-data-legal-summarisation", "co2_eq_emissions", "endpoints_compatible", "region:us"], "pipeline_tag": "summarization", "mask_token": null, "trending_score": null, "card_data": "datasets:\n- 0-hero/autotrain-data-legal-summarisation\nlanguage:\n- unk\ntags:\n- autotrain\n- summarization\nwidget:\n- text: I love AutoTrain \ud83e\udd17\nco2_eq_emissions:\n emissions: 0.14139281336849252", "widget_data": [{"text": "I love AutoTrain \ud83e\udd17"}], "model_index": null, "config": {"additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModel", "custom_class": null, "pipeline_tag": null, "processor": null}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='merges.txt', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='vocab.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"F32": 459851865}, "total": 459851865}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2023-06-26 10:41:44+00:00", "cardData": "datasets:\n- 0-hero/autotrain-data-legal-summarisation\nlanguage:\n- unk\ntags:\n- autotrain\n- summarization\nwidget:\n- text: I love AutoTrain \ud83e\udd17\nco2_eq_emissions:\n emissions: 0.14139281336849252", "transformersInfo": {"auto_model": "AutoModel", "custom_class": null, "pipeline_tag": null, "processor": null}, "_id": "63851b5b16ccd1034bb3e4b0", "modelId": "0-hero/led-large-legal-summary", "usedStorage": 3681993967}
|
---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- 0-hero/autotrain-data-legal-summarisation
co2_eq_emissions:
emissions: 0.14139281336849252
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 2269972136
- CO2 Emissions (in grams): 0.1414
## Validation Metrics
- Loss: 2.098
- Rouge1: 36.855
- Rouge2: 22.050
- RougeL: 33.547
- RougeLsum: 34.607
- Gen Len: 27.633
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/0-hero/autotrain-legal-summarisation-2269972136
```
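The same request can be made from Python with the `requests` library (a minimal sketch; substitute your own API token):
```
import requests

API_URL = "https://api-inference.huggingface.co/models/0-hero/autotrain-legal-summarisation-2269972136"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

# Send the text to summarize and print the JSON response.
response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```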
|
0-hero/mistral-speech-to-text
|
{"id": "0-hero/mistral-speech-to-text", "author": "0-hero", "sha": "8e80a0c73d95563be9b7127b3e6c709190732191", "last_modified": "2024-04-05 08:37:28+00:00", "created_at": "2024-04-04 21:49:55+00:00", "private": false, "gated": false, "disabled": false, "downloads": 25, "downloads_all_time": null, "likes": 1, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "pytorch", "safetensors", "mistral", "text-generation", "conversational", "dataset:0-hero/lj_speech_with_spectogram_conversations", "dataset:0-hero/lj_speech_with_spectogram", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": "datasets:\n- 0-hero/lj_speech_with_spectogram_conversations\n- 0-hero/lj_speech_with_spectogram\nlicense: apache-2.0", "widget_data": [{"text": "Hi, what can you help me with?"}, {"text": "What is 84 * 3 / 2?"}, {"text": "Tell me an interesting fact about the universe!"}, {"text": "Explain quantum computing in simple terms."}], "model_index": null, "config": {"architectures": ["MistralForCausalLM"], "model_type": "mistral", "tokenizer_config": {"bos_token": "<s>", "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", "eos_token": "<|im_end|>", "pad_token": "</s>", "unk_token": "<unk>", "use_default_system_prompt": false}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='added_tokens.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='generation_config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='pytorch_model.bin', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.model', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"BF16": 7241748480}, "total": 7241748480}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-04-05 08:37:28+00:00", "cardData": "datasets:\n- 0-hero/lj_speech_with_spectogram_conversations\n- 0-hero/lj_speech_with_spectogram\nlicense: apache-2.0", "transformersInfo": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "_id": "660f2083d5671fdffc72ecce", "modelId": "0-hero/mistral-speech-to-text", "usedStorage": 28967561624}
|
---
license: apache-2.0
datasets:
- 0-hero/lj_speech_with_spectogram_conversations
- 0-hero/lj_speech_with_spectogram
---
## Explanation
A small experiment inspired by the [Mistral playing DOOM experiment](https://github.com/umuthopeyildirim/DOOM-Mistral/tree/main) from the Mistral Hackathon.
**How it works**
```
Audio -> Waveform Visualization -> Waveform ASCII Art -> Finetune Mistral on ASCII Art to predict text from ASCII Art
```
**Quick video explanation**
<video width="820" controls src="https://cdn-uploads.huggingface.co/production/uploads/6382255fcae34727b9cc149e/nCB8Qu8QwDbJAKcq9IzPE.mp4"></video>
#### Example Waveform

#### Example ASCII Art
```
y
x
x
x
ux
tx
dx
9 ex v
4 dr v
4 do s
1Y 6o e o5
x 1Y y 6o ze 7 iYY
e YYz r Ys Ym xd Y gYY
6 YYY r Ys Yl xd sY Y eYY Y
z 6 YYY y s k z Ys Yl xd sY Y m vYYY s 5Y z
Yh 6Y xfYYY y qq i 2m Ydu Y6 Y vd sYY Yz cq vYYY eY 5Y0 b z q
YYz 6Y xywYYYY ZzqYt f YY Ydu Y6 Y vd sYY Yl 6d iYYY YYz 5YY Y z 9 g il
YYx YYo gfwYYYY ZxkY6 wf Z9YY Ydu xz6 Y6 Yi8 v4 sYY z YY 6d 4YYY uYYu o 5YY f Y zkyY 0 1Y z
YYw YY7 hfwYYYY 7 ZYkYY 1 z 7Y yYYYY YdY 8zY Y6 Y78 v4 sYYyt Y YY 6d 4YYYz3YYY o 4YYx e Yzx0xY Z 1Y 8
YYw YYY 5dwYYYYeYmZYmYY Y Y YY sYYYYY YdYl x 50Yo 9 Y6 YYY4 v5 vYY3tpY YY 6d 4YYYyYYYY m 3YYq Y Yzg0iY Y 4ZY 98 p
YYsjYYYYz 5Y7YYYYYYdYYYYY Y Y YY YYYYYY YcYY 5lY8YYa1 Y6 YYY2 v7YYYYYYpYz YYY 6d ZYYYtYYYY Yg 3YYo Y Yz8YZY xw Y YYY Y5 p Y
YY6YYYYYa YY9YYYYYYYYYYYYrYyYz YYe YYYYYY YYYYv YsYYYYYYY uY5 YYYY0 vYYYYYYYoYz YYY 6Y YYYYtYYYY Y7 3YYi rY wYz7YYYvkt YYYYY Y4yo4 1Yay
3YYdYYYYY8 YYYYYYYYYYYYYYYYYwYu YY3 YYYYYY YYYYn YeYYYYYYYZ uY5 YYYY0 uYYYYYYY6Yz YYY 6Y9 YYYYsYYYY Y58 3YYi6pY ZYz5YYY2e5 YYYYY Y2qeYz1YY9
YYY5YYYYY5 YYYYYYYYYYYYYYYYYuYu YY1 YYYYYY YYYY3 YYYYYYYYYY uYYYYYYY0 hYYYYYYY6YyrYYY 6YY YYYYYYYYY YYY 3YYfYsY YYz1YYY1Y2Y YYYYY YYq0YYYYYY
YYYYYYYYYY YYYYYYYYYYYYYYYYYsYuyYYYoYYYYYY1 YYYY2YYYYYYYYYYY vYYYYYYYY z4YYYYYYYYYYYYYYZg1YYYYYYYYYYYYzYYYz ZYYYYdYlYYYYYYYZYYY YYYYYzYYYYYYYYYY
YYYYYYYYYYYYY1YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYZY112YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
YYY1YYYYYY YYYYYYYYYYYYYYYYYsYu YYYoYYYYYY4 YYYY2YYYYYYYYYYY vYYYYYYYY 7YYYYYYYZYYYYYYZv7YYYYYYYYYYYY YYYz 1YYYYdYmYYYYYYYZYYY YYYYYzYYYYYYYYYY
YYY6YYYYY6 YYYYYYYYYYYYYYYYYvYu YY1 YYYYYY YYYY3 YYYYYYYYYY uYYYYYYY0 hYYYYYYY6YyxYYY 6YY YYYYYYYYY YYY 3YYfYtY YYz3YYY1Y2Y YYYYY YYq0YYYYYY
cYYcYYYYY8 YYYYYYYYYYYYYYYYYxYu YY3 YYYYYY YYYYn YfYYYYYYYZ uY5 YYYY0 vYYYYYYY6Yz YYY 6Y9 YYYYsYYYY Y5n 3YYi8pY 1Yz6YYY3g7 YYYYY Y2qkYz1YYa
YY9YYYYYf YY9YYYYYYYYYYYYrYyYz YYi YYYYYY Y1YYv YsYZYYYYY uY5 YYYYb vYYYYYYYoYz YYY 6Y YYYYtYYYY Y7 3YYi rY wYz7YYYvku YYYYY Y3zo6 1Yn
YYskYYYYz 5Y8YYYYYYdYY2YY Y Y YY YYYYYY YcYY 5lY8YYd3 Y6 YYY4 v9YYYYYYpYz YYY 6d 1YYYtYYYY Yg 3YYp Y Yz8Y1Y yw Y YYY Y5 q Y
YYw YYY 5dwYYYYhYpZYmYY Y Y YY wYYYYY YdYr z 50Yo 0 Y6 YYY5 v5 vYY4tpY YY 6d 4YYYzYYYY m 3YYq Y Yzh0iY Y 4ZY 98 o
YYw YY7 hfwYYYY b ZYkYY 4 cY yYYYY YdY dzY Y6 Y78 v4 sYYzt Y YY 6d 4YYY 7YYY o 5YYx f Yzy0yY 3 1Y 8
YYx ZYo hfwYYYY ZykY7 wf Z9YY Ydu xz7 Y6 Ykb v3 sYY YY 6d 9YYY uYYy o 5YY f Y zpyY b 1Y z
YY 6Y xywYYYY ZzqYy f YY Ydu Y6 Y vd sYY Yo 6d jYYY 1Yz 5YY Y z 9 g lm
Yi 6Y ygYYY y qq i 9m Ydu Y6 Y vd sYY Y cs vYYY rY 5Yp e z t
6 YYY y s o Ys Yl xd sY Y r vYYY y 5Y z
6 YYY r Ys Yl xd sY Y eYY Y
e YYz r Ys Ym xd Y gYY
x 1Y z 6o ze 8 mYY
14 6o e o5
4 do t
4 dv v
h ex v
dx
ux
ux
x
x
x
y
```
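For intuition, here is a rough sketch of how a waveform could be rendered to ASCII like the example above (illustrative only, assuming numpy; the rendering actually used to build the dataset may differ):
```
import numpy as np

def waveform_to_ascii(samples, width=80, height=16):
    # Render a mono waveform as a crude ASCII bar chart.
    columns = np.array_split(np.abs(np.asarray(samples, dtype=float)), width)
    peaks = np.array([col.max() if col.size else 0.0 for col in columns])
    peaks /= peaks.max() or 1.0  # normalize; guard against silence
    # Build rows top-down; a column gets a mark if its peak reaches that row.
    lines = []
    for row in range(height, 0, -1):
        lines.append("".join("#" if p >= row / height else " " for p in peaks))
    return "\n".join(lines)

# Example: a decaying sine wave standing in for real audio.
t = np.linspace(0.0, 1.0, 8000)
print(waveform_to_ascii(np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)))
```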
## Models & Results
Mistral 7B v0.2 finetuned on ASCII art. **As the results show, the experiment didn't amount to much.** A minimal inference sketch follows the results below.
- 2 epochs - [0-hero/mistral-speech-to-text-preview](https://huggingface.co/0-hero/mistral-speech-to-text-preview/) - Loss pretty much flattened after this epoch
- ```
Metrics:
rouge-1: {'r': 0.12919024091165357, 'p': 0.1163312036605547, 'f': 0.11315199212991178}
rouge-2: {'r': 0.013705453572242508, 'p': 0.0137500428446463, 'f': 0.012676757505648992}
rouge-l: {'r': 0.11261286554140228, 'p': 0.09921920076529338, 'f': 0.09705621471622536}
length_correlation: 0.014470676120233311
avg_actual_length: 16.59
avg_pred_length: 21.46
exact_match_accuracy: 0.0
```
- 4 epochs - [0-hero/mistral-speech-to-text](https://huggingface.co/0-hero/mistral-speech-to-text/) - (this)
- ```
Metrics:
rouge-1: {'r': 0.11869828051815862, 'p': 0.11697319273190071, 'f': 0.11154343875398197}
rouge-2: {'r': 0.008572925612399297, 'p': 0.009040061245943597, 'f': 0.008369604666309954}
rouge-l: {'r': 0.10780857719316121, 'p': 0.10373665666448233, 'f': 0.09985384905943501}
length_correlation: -0.1500200314034927
avg_actual_length: 16.59
avg_pred_length: 18.32
exact_match_accuracy: 0.0
```
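For completeness, a minimal inference sketch (not from the original card; the ChatML-style chat template comes from the repo's tokenizer config, and the ASCII-art prompt is a placeholder):
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "0-hero/mistral-speech-to-text"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

ascii_art = "..."  # placeholder: waveform ASCII art like the example above
messages = [{"role": "user", "content": ascii_art}]
# The tokenizer ships a ChatML-style template (<|im_start|> ... <|im_end|>).
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```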
## Datasets
The [lj_speech](https://huggingface.co/datasets/lj_speech) dataset's audio waveforms were converted into ASCII art:
- [0-hero/lj_speech_with_spectogram_conversations](https://huggingface.co/datasets/0-hero/lj_speech_with_spectogram_conversations) - ShareGPT-style finetuning dataset with train/test splits
- [0-hero/lj_speech_with_spectogram](https://huggingface.co/datasets/0-hero/lj_speech_with_spectogram) - Raw dataset with ASCII art
|
0-hero/mistral-speech-to-text-preview
|
{"id": "0-hero/mistral-speech-to-text-preview", "author": "0-hero", "sha": "f82e552e0f639fdbad6dea8d59a1094b88e68161", "last_modified": "2024-04-05 08:36:36+00:00", "created_at": "2024-04-04 21:15:45+00:00", "private": false, "gated": false, "disabled": false, "downloads": 11, "downloads_all_time": null, "likes": 0, "library_name": "transformers", "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["transformers", "safetensors", "mistral", "text-generation", "conversational", "dataset:0-hero/lj_speech_with_spectogram_conversations", "dataset:0-hero/lj_speech_with_spectogram", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"], "pipeline_tag": "text-generation", "mask_token": null, "trending_score": null, "card_data": "datasets:\n- 0-hero/lj_speech_with_spectogram_conversations\n- 0-hero/lj_speech_with_spectogram\nlicense: apache-2.0", "widget_data": [{"text": "Hi, what can you help me with?"}, {"text": "What is 84 * 3 / 2?"}, {"text": "Tell me an interesting fact about the universe!"}, {"text": "Explain quantum computing in simple terms."}], "model_index": null, "config": {"architectures": ["MistralForCausalLM"], "model_type": "mistral", "tokenizer_config": {"bos_token": "<s>", "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}", "eos_token": "<|im_end|>", "pad_token": "</s>", "unk_token": "<unk>", "use_default_system_prompt": false}, "additional_chat_templates": {}}, "transformers_info": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='Mistral-Speech-To-Text.mp4', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='added_tokens.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00001-of-00003.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00002-of-00003.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model-00003-of-00003.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors.index.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='special_tokens_map.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer.model', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='tokenizer_config.json', size=None, blob_id=None, lfs=None)"], "spaces": [], "safetensors": {"parameters": {"BF16": 7241748480}, "total": 7241748480}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2024-04-05 08:36:36+00:00", "cardData": "datasets:\n- 0-hero/lj_speech_with_spectogram_conversations\n- 0-hero/lj_speech_with_spectogram\nlicense: apache-2.0", "transformersInfo": {"auto_model": "AutoModelForCausalLM", "custom_class": null, "pipeline_tag": "text-generation", "processor": "AutoTokenizer"}, "_id": "660f18812dcd816ad15aaeca", "modelId": "0-hero/mistral-speech-to-text-preview", "usedStorage": 14492363311}
|
---
license: apache-2.0
datasets:
- 0-hero/lj_speech_with_spectogram_conversations
- 0-hero/lj_speech_with_spectogram
---
## Explanation
A small experiment inspired by the [Mistral playing DOOM experiment](https://github.com/umuthopeyildirim/DOOM-Mistral/tree/main) from the Mistral Hackathon.
**How it works**
```
Audio -> Waveform Visualization -> Waveform ASCII Art -> Finetune Mistral on ASCII Art to predict text from ASCII Art
```
**Quick video explanation**
<video width="820" controls src="https://cdn-uploads.huggingface.co/production/uploads/6382255fcae34727b9cc149e/nCB8Qu8QwDbJAKcq9IzPE.mp4"></video>
#### Example Waveform

#### Example ASCII Art
```
y
x
x
x
ux
tx
dx
9 ex v
4 dr v
4 do s
1Y 6o e o5
x 1Y y 6o ze 7 iYY
e YYz r Ys Ym xd Y gYY
6 YYY r Ys Yl xd sY Y eYY Y
z 6 YYY y s k z Ys Yl xd sY Y m vYYY s 5Y z
Yh 6Y xfYYY y qq i 2m Ydu Y6 Y vd sYY Yz cq vYYY eY 5Y0 b z q
YYz 6Y xywYYYY ZzqYt f YY Ydu Y6 Y vd sYY Yl 6d iYYY YYz 5YY Y z 9 g il
YYx YYo gfwYYYY ZxkY6 wf Z9YY Ydu xz6 Y6 Yi8 v4 sYY z YY 6d 4YYY uYYu o 5YY f Y zkyY 0 1Y z
YYw YY7 hfwYYYY 7 ZYkYY 1 z 7Y yYYYY YdY 8zY Y6 Y78 v4 sYYyt Y YY 6d 4YYYz3YYY o 4YYx e Yzx0xY Z 1Y 8
YYw YYY 5dwYYYYeYmZYmYY Y Y YY sYYYYY YdYl x 50Yo 9 Y6 YYY4 v5 vYY3tpY YY 6d 4YYYyYYYY m 3YYq Y Yzg0iY Y 4ZY 98 p
YYsjYYYYz 5Y7YYYYYYdYYYYY Y Y YY YYYYYY YcYY 5lY8YYa1 Y6 YYY2 v7YYYYYYpYz YYY 6d ZYYYtYYYY Yg 3YYo Y Yz8YZY xw Y YYY Y5 p Y
YY6YYYYYa YY9YYYYYYYYYYYYrYyYz YYe YYYYYY YYYYv YsYYYYYYY uY5 YYYY0 vYYYYYYYoYz YYY 6Y YYYYtYYYY Y7 3YYi rY wYz7YYYvkt YYYYY Y4yo4 1Yay
3YYdYYYYY8 YYYYYYYYYYYYYYYYYwYu YY3 YYYYYY YYYYn YeYYYYYYYZ uY5 YYYY0 uYYYYYYY6Yz YYY 6Y9 YYYYsYYYY Y58 3YYi6pY ZYz5YYY2e5 YYYYY Y2qeYz1YY9
YYY5YYYYY5 YYYYYYYYYYYYYYYYYuYu YY1 YYYYYY YYYY3 YYYYYYYYYY uYYYYYYY0 hYYYYYYY6YyrYYY 6YY YYYYYYYYY YYY 3YYfYsY YYz1YYY1Y2Y YYYYY YYq0YYYYYY
YYYYYYYYYY YYYYYYYYYYYYYYYYYsYuyYYYoYYYYYY1 YYYY2YYYYYYYYYYY vYYYYYYYY z4YYYYYYYYYYYYYYZg1YYYYYYYYYYYYzYYYz ZYYYYdYlYYYYYYYZYYY YYYYYzYYYYYYYYYY
YYYYYYYYYYYYY1YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYZY112YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
YYY1YYYYYY YYYYYYYYYYYYYYYYYsYu YYYoYYYYYY4 YYYY2YYYYYYYYYYY vYYYYYYYY 7YYYYYYYZYYYYYYZv7YYYYYYYYYYYY YYYz 1YYYYdYmYYYYYYYZYYY YYYYYzYYYYYYYYYY
YYY6YYYYY6 YYYYYYYYYYYYYYYYYvYu YY1 YYYYYY YYYY3 YYYYYYYYYY uYYYYYYY0 hYYYYYYY6YyxYYY 6YY YYYYYYYYY YYY 3YYfYtY YYz3YYY1Y2Y YYYYY YYq0YYYYYY
cYYcYYYYY8 YYYYYYYYYYYYYYYYYxYu YY3 YYYYYY YYYYn YfYYYYYYYZ uY5 YYYY0 vYYYYYYY6Yz YYY 6Y9 YYYYsYYYY Y5n 3YYi8pY 1Yz6YYY3g7 YYYYY Y2qkYz1YYa
YY9YYYYYf YY9YYYYYYYYYYYYrYyYz YYi YYYYYY Y1YYv YsYZYYYYY uY5 YYYYb vYYYYYYYoYz YYY 6Y YYYYtYYYY Y7 3YYi rY wYz7YYYvku YYYYY Y3zo6 1Yn
YYskYYYYz 5Y8YYYYYYdYY2YY Y Y YY YYYYYY YcYY 5lY8YYd3 Y6 YYY4 v9YYYYYYpYz YYY 6d 1YYYtYYYY Yg 3YYp Y Yz8Y1Y yw Y YYY Y5 q Y
YYw YYY 5dwYYYYhYpZYmYY Y Y YY wYYYYY YdYr z 50Yo 0 Y6 YYY5 v5 vYY4tpY YY 6d 4YYYzYYYY m 3YYq Y Yzh0iY Y 4ZY 98 o
YYw YY7 hfwYYYY b ZYkYY 4 cY yYYYY YdY dzY Y6 Y78 v4 sYYzt Y YY 6d 4YYY 7YYY o 5YYx f Yzy0yY 3 1Y 8
YYx ZYo hfwYYYY ZykY7 wf Z9YY Ydu xz7 Y6 Ykb v3 sYY YY 6d 9YYY uYYy o 5YY f Y zpyY b 1Y z
YY 6Y xywYYYY ZzqYy f YY Ydu Y6 Y vd sYY Yo 6d jYYY 1Yz 5YY Y z 9 g lm
Yi 6Y ygYYY y qq i 9m Ydu Y6 Y vd sYY Y cs vYYY rY 5Yp e z t
6 YYY y s o Ys Yl xd sY Y r vYYY y 5Y z
6 YYY r Ys Yl xd sY Y eYY Y
e YYz r Ys Ym xd Y gYY
x 1Y z 6o ze 8 mYY
14 6o e o5
4 do t
4 dv v
h ex v
dx
ux
ux
x
x
x
y
```
## Models & Results
Mistral 7B v0.2 finetuned on ASCII art. **As the results show, the experiment didn't amount to much.**
- 2 epochs - [0-hero/mistral-speech-to-text-preview](https://huggingface.co/0-hero/mistral-speech-to-text-preview/) (this) - Loss pretty much flattened after this epoch
- ```
Metrics:
rouge-1: {'r': 0.12919024091165357, 'p': 0.1163312036605547, 'f': 0.11315199212991178}
rouge-2: {'r': 0.013705453572242508, 'p': 0.0137500428446463, 'f': 0.012676757505648992}
rouge-l: {'r': 0.11261286554140228, 'p': 0.09921920076529338, 'f': 0.09705621471622536}
length_correlation: 0.014470676120233311
avg_actual_length: 16.59
avg_pred_length: 21.46
exact_match_accuracy: 0.0
```
- 4 epochs - [0-hero/mistral-speech-to-text](https://huggingface.co/0-hero/mistral-speech-to-text/)
- ```
Metrics:
rouge-1: {'r': 0.11869828051815862, 'p': 0.11697319273190071, 'f': 0.11154343875398197}
rouge-2: {'r': 0.008572925612399297, 'p': 0.009040061245943597, 'f': 0.008369604666309954}
rouge-l: {'r': 0.10780857719316121, 'p': 0.10373665666448233, 'f': 0.09985384905943501}
length_correlation: -0.1500200314034927
avg_actual_length: 16.59
avg_pred_length: 18.32
exact_match_accuracy: 0.0
```
## Datasets
The [lj_speech](https://huggingface.co/datasets/lj_speech) dataset's audio waveforms were converted into ASCII art:
- [0-hero/lj_speech_with_spectogram_conversations](https://huggingface.co/datasets/0-hero/lj_speech_with_spectogram_conversations) - ShareGPT-style finetuning dataset with train/test splits
- [0-hero/lj_speech_with_spectogram](https://huggingface.co/datasets/0-hero/lj_speech_with_spectogram) - Raw dataset with ASCII art
|
0-ma/beit-geometric-shapes-base
|
{"id": "0-ma/beit-geometric-shapes-base", "author": "0-ma", "sha": "6b2aabfa59a5cb6fa00ac0fd5ca71aef6ca26ff0", "last_modified": "2025-04-11 09:17:34+00:00", "created_at": "2024-09-12 12:37:17+00:00", "private": false, "gated": false, "disabled": false, "downloads": 30, "downloads_all_time": null, "likes": 1, "library_name": null, "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["safetensors", "beit", "image-classification", "dataset:0-ma/geometric-shapes", "base_model:microsoft/beit-base-patch16-224-pt22k-ft22k", "base_model:finetune:microsoft/beit-base-patch16-224-pt22k-ft22k", "license:apache-2.0", "region:us"], "pipeline_tag": "image-classification", "mask_token": null, "trending_score": null, "card_data": "base_model: microsoft/beit-base-patch16-224-pt22k-ft22k\ndatasets:\n- 0-ma/geometric-shapes\nlicense: apache-2.0\nmetrics:\n- accuracy\npipeline_tag: image-classification", "widget_data": null, "model_index": null, "config": {"architectures": ["BeitForImageClassification"], "model_type": "beit", "additional_chat_templates": {}}, "transformers_info": null, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='preprocessor_config.json', size=None, blob_id=None, lfs=None)"], "spaces": ["0-ma/geometric-shape-detector"], "safetensors": {"parameters": {"F32": 85766598}, "total": 85766598}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2025-04-11 09:17:34+00:00", "cardData": "base_model: microsoft/beit-base-patch16-224-pt22k-ft22k\ndatasets:\n- 0-ma/geometric-shapes\nlicense: apache-2.0\nmetrics:\n- accuracy\npipeline_tag: image-classification", "transformersInfo": null, "_id": "66e2e07da007fc9b30d3f08f", "modelId": "0-ma/beit-geometric-shapes-base", "usedStorage": 343092632}
|
---
base_model: microsoft/beit-base-patch16-224-pt22k-ft22k
datasets:
- 0-ma/geometric-shapes
license: apache-2.0
metrics:
- accuracy
pipeline_tag: image-classification
---
# Model Card for Beit Geometric Shapes Dataset Base
## Training Dataset
- **Repository:** https://huggingface.co/datasets/0-ma/geometric-shapes
## Base Model
- **Repository:** https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k
## Accuracy
- Accuracy on the 0-ma/geometric-shapes test split: 0.9998
## Loading and using the model
```
import numpy as np
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

labels = ["None", "Circle", "Triangle", "Square", "Pentagon", "Hexagon"]

# Two example images from the project repository.
images = [
    Image.open(requests.get("https://raw.githubusercontent.com/0-ma/geometric-shape-detector/main/input/exemple_circle.jpg", stream=True).raw),
    Image.open(requests.get("https://raw.githubusercontent.com/0-ma/geometric-shape-detector/main/input/exemple_pentagone.jpg", stream=True).raw),
]

feature_extractor = AutoImageProcessor.from_pretrained('0-ma/beit-geometric-shapes-base')
model = AutoModelForImageClassification.from_pretrained('0-ma/beit-geometric-shapes-base')

# Preprocess, run the classifier, and map the highest logit to a label name.
inputs = feature_extractor(images=images, return_tensors="pt")
logits = model(**inputs)['logits'].cpu().detach().numpy()
predictions = np.argmax(logits, axis=1)
predicted_labels = [labels[prediction] for prediction in predictions]
print(predicted_labels)
```
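To also get confidence scores, the logits can be turned into probabilities with a softmax (a small extension continuing from the snippet above; not part of the original card):
```
import numpy as np

# Continuing from the snippet above: convert logits to probabilities.
shifted = logits - logits.max(axis=1, keepdims=True)  # for numerical stability
probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
for row, pred in zip(probs, predictions):
    print(labels[pred], f"{row[pred]:.3f}")
```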
## Model generation
The model was created using the 'train_shape_detector.py' script from the project https://github.com/0-ma/geometric-shape-detector. No external code sources were used.
|
0-ma/efficientnet-b2-geometric-shapes
|
{"id": "0-ma/efficientnet-b2-geometric-shapes", "author": "0-ma", "sha": "751c6e645d2be44de8a1c0c4e2b4a38b2fbc9205", "last_modified": "2025-04-11 09:17:49+00:00", "created_at": "2024-09-12 13:25:11+00:00", "private": false, "gated": false, "disabled": false, "downloads": 29, "downloads_all_time": null, "likes": 1, "library_name": null, "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["safetensors", "efficientnet", "image-classification", "dataset:0-ma/geometric-shapes", "base_model:google/efficientnet-b2", "base_model:finetune:google/efficientnet-b2", "license:apache-2.0", "region:us"], "pipeline_tag": "image-classification", "mask_token": null, "trending_score": null, "card_data": "base_model: google/efficientnet-b2\ndatasets:\n- 0-ma/geometric-shapes\nlicense: apache-2.0\nmetrics:\n- accuracy\npipeline_tag: image-classification", "widget_data": null, "model_index": null, "config": {"architectures": ["EfficientNetForImageClassification"], "model_type": "efficientnet", "additional_chat_templates": {}}, "transformers_info": null, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='preprocessor_config.json', size=None, blob_id=None, lfs=None)"], "spaces": ["0-ma/geometric-shape-detector"], "safetensors": {"parameters": {"F32": 7777016}, "total": 7777016}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2025-04-11 09:17:49+00:00", "cardData": "base_model: google/efficientnet-b2\ndatasets:\n- 0-ma/geometric-shapes\nlicense: apache-2.0\nmetrics:\n- accuracy\npipeline_tag: image-classification", "transformersInfo": null, "_id": "66e2ebb7d1e45cc45d718fb9", "modelId": "0-ma/efficientnet-b2-geometric-shapes", "usedStorage": 31174192}
|
---
base_model: google/efficientnet-b2
datasets:
- 0-ma/geometric-shapes
license: apache-2.0
metrics:
- accuracy
pipeline_tag: image-classification
---
# Model Card for EfficientNet-B2 Geometric Shapes Dataset
## Training Dataset
- **Repository:** https://huggingface.co/datasets/0-ma/geometric-shapes
## Base Model
- **Repository:** https://huggingface.co/google/efficientnet-b2
## Accuracy
- Accuracy on the 0-ma/geometric-shapes test split: 0.9876
## Loading and using the model
```
import numpy as np
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

labels = ["None", "Circle", "Triangle", "Square", "Pentagon", "Hexagon"]

# Two example images from the project repository.
images = [
    Image.open(requests.get("https://raw.githubusercontent.com/0-ma/geometric-shape-detector/main/input/exemple_circle.jpg", stream=True).raw),
    Image.open(requests.get("https://raw.githubusercontent.com/0-ma/geometric-shape-detector/main/input/exemple_pentagone.jpg", stream=True).raw),
]

feature_extractor = AutoImageProcessor.from_pretrained('0-ma/efficientnet-b2-geometric-shapes')
model = AutoModelForImageClassification.from_pretrained('0-ma/efficientnet-b2-geometric-shapes')

# Preprocess, run the classifier, and map the highest logit to a label name.
inputs = feature_extractor(images=images, return_tensors="pt")
logits = model(**inputs)['logits'].cpu().detach().numpy()
predictions = np.argmax(logits, axis=1)
predicted_labels = [labels[prediction] for prediction in predictions]
print(predicted_labels)
```
## Model generation
The model was created using the 'train_shape_detector.py' script from the project https://github.com/0-ma/geometric-shape-detector. No external code sources were used.
|
0-ma/focalnet-geometric-shapes-tiny
|
{"id": "0-ma/focalnet-geometric-shapes-tiny", "author": "0-ma", "sha": "c5386a015ac6585c1e6b092437016ee5828f14dc", "last_modified": "2025-04-11 09:18:05+00:00", "created_at": "2024-09-12 13:26:57+00:00", "private": false, "gated": false, "disabled": false, "downloads": 20, "downloads_all_time": null, "likes": 1, "library_name": null, "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["safetensors", "focalnet", "image-classification", "dataset:0-ma/geometric-shapes", "base_model:microsoft/focalnet-tiny", "base_model:finetune:microsoft/focalnet-tiny", "license:apache-2.0", "region:us"], "pipeline_tag": "image-classification", "mask_token": null, "trending_score": null, "card_data": "base_model: microsoft/focalnet-tiny\ndatasets:\n- 0-ma/geometric-shapes\nlicense: apache-2.0\nmetrics:\n- accuracy\npipeline_tag: image-classification", "widget_data": null, "model_index": null, "config": {"architectures": ["FocalNetForImageClassification"], "model_type": "focalnet", "additional_chat_templates": {}}, "transformers_info": null, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='preprocessor_config.json', size=None, blob_id=None, lfs=None)"], "spaces": ["0-ma/geometric-shape-detector"], "safetensors": {"parameters": {"F32": 27662730}, "total": 27662730}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2025-04-11 09:18:05+00:00", "cardData": "base_model: microsoft/focalnet-tiny\ndatasets:\n- 0-ma/geometric-shapes\nlicense: apache-2.0\nmetrics:\n- accuracy\npipeline_tag: image-classification", "transformersInfo": null, "_id": "66e2ec2116ed9820c0bdf2c4", "modelId": "0-ma/focalnet-geometric-shapes-tiny", "usedStorage": 110677448}
|
---
base_model: microsoft/focalnet-tiny
datasets:
- 0-ma/geometric-shapes
license: apache-2.0
metrics:
- accuracy
pipeline_tag: image-classification
---
# Model Card for Focalnet Geometric Shapes Dataset Tiny
## Training Dataset
- **Repository:** https://huggingface.co/datasets/0-ma/geometric-shapes
## Base Model
- **Repository:** https://huggingface.co/microsoft/focalnet-tiny
## Accuracy
- Accuracy on the 0-ma/geometric-shapes test split: 0.9981
## Loading and using the model
```
import numpy as np
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

labels = ["None", "Circle", "Triangle", "Square", "Pentagon", "Hexagon"]

# Two example images from the project repository.
images = [
    Image.open(requests.get("https://raw.githubusercontent.com/0-ma/geometric-shape-detector/main/input/exemple_circle.jpg", stream=True).raw),
    Image.open(requests.get("https://raw.githubusercontent.com/0-ma/geometric-shape-detector/main/input/exemple_pentagone.jpg", stream=True).raw),
]

feature_extractor = AutoImageProcessor.from_pretrained('0-ma/focalnet-geometric-shapes-tiny')
model = AutoModelForImageClassification.from_pretrained('0-ma/focalnet-geometric-shapes-tiny')

# Preprocess, run the classifier, and map the highest logit to a label name.
inputs = feature_extractor(images=images, return_tensors="pt")
logits = model(**inputs)['logits'].cpu().detach().numpy()
predictions = np.argmax(logits, axis=1)
predicted_labels = [labels[prediction] for prediction in predictions]
print(predicted_labels)
```
## Model generation
The model was created using the 'train_shape_detector.py' script from the project https://github.com/0-ma/geometric-shape-detector. No external code sources were used.
|
0-ma/mit-b0-geometric-shapes
|
{"id": "0-ma/mit-b0-geometric-shapes", "author": "0-ma", "sha": "e28930e31ecb85973da5ac59bbc8139caae7be66", "last_modified": "2025-04-11 09:17:10+00:00", "created_at": "2024-09-12 09:38:47+00:00", "private": false, "gated": false, "disabled": false, "downloads": 25, "downloads_all_time": null, "likes": 1, "library_name": null, "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["safetensors", "segformer", "image-classification", "dataset:0-ma/geometric-shapes", "base_model:nvidia/mit-b0", "base_model:finetune:nvidia/mit-b0", "license:other", "region:us"], "pipeline_tag": "image-classification", "mask_token": null, "trending_score": null, "card_data": "base_model: nvidia/mit-b0\ndatasets:\n- 0-ma/geometric-shapes\nlicense: other\nmetrics:\n- accuracy\npipeline_tag: image-classification", "widget_data": null, "model_index": null, "config": {"architectures": ["SegformerForImageClassification"], "model_type": "segformer", "additional_chat_templates": {}}, "transformers_info": null, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='LICENSE', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='preprocessor_config.json', size=None, blob_id=None, lfs=None)"], "spaces": ["0-ma/geometric-shape-detector"], "safetensors": {"parameters": {"F32": 3320934}, "total": 3320934}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2025-04-11 09:17:10+00:00", "cardData": "base_model: nvidia/mit-b0\ndatasets:\n- 0-ma/geometric-shapes\nlicense: other\nmetrics:\n- accuracy\npipeline_tag: image-classification", "transformersInfo": null, "_id": "66e2b6a7a90cb533488ed993", "modelId": "0-ma/mit-b0-geometric-shapes", "usedStorage": 13306288}
|
---
base_model: nvidia/mit-b0
datasets:
- 0-ma/geometric-shapes
license: other
metrics:
- accuracy
pipeline_tag: image-classification
---
# Model Card for Mit-B0 Geometric Shapes Dataset
## Training Dataset
- **Repository:** https://huggingface.co/datasets/0-ma/geometric-shapes
## Base Model
- **Repository:** https://huggingface.co/nvidia/mit-b0
## Accuracy
- Accuracy on the 0-ma/geometric-shapes test split: 0.9907
## Loading and using the model
```
import numpy as np
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

labels = ["None", "Circle", "Triangle", "Square", "Pentagon", "Hexagon"]

# Two example images from the project repository.
images = [
    Image.open(requests.get("https://raw.githubusercontent.com/0-ma/geometric-shape-detector/main/input/exemple_circle.jpg", stream=True).raw),
    Image.open(requests.get("https://raw.githubusercontent.com/0-ma/geometric-shape-detector/main/input/exemple_pentagone.jpg", stream=True).raw),
]

feature_extractor = AutoImageProcessor.from_pretrained('0-ma/mit-b0-geometric-shapes')
model = AutoModelForImageClassification.from_pretrained('0-ma/mit-b0-geometric-shapes')

# Preprocess, run the classifier, and map the highest logit to a label name.
inputs = feature_extractor(images=images, return_tensors="pt")
logits = model(**inputs)['logits'].cpu().detach().numpy()
predictions = np.argmax(logits, axis=1)
predicted_labels = [labels[prediction] for prediction in predictions]
print(predicted_labels)
```
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
## Model generation
The model was created using the 'train_shape_detector.py' script from the project https://github.com/0-ma/geometric-shape-detector. No external code sources were used.
|
0-ma/mobilenet-v2-geometric-shapes
|
{"id": "0-ma/mobilenet-v2-geometric-shapes", "author": "0-ma", "sha": "5c20d0a309849386a4e573bc9c071886190b012d", "last_modified": "2025-04-11 09:14:58+00:00", "created_at": "2024-09-12 13:29:24+00:00", "private": false, "gated": false, "disabled": false, "downloads": 37, "downloads_all_time": null, "likes": 1, "library_name": null, "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["safetensors", "mobilenet_v2", "image-classification", "dataset:0-ma/geometric-shapes", "base_model:google/mobilenet_v2_1.0_224", "base_model:finetune:google/mobilenet_v2_1.0_224", "license:other", "region:us"], "pipeline_tag": "image-classification", "mask_token": null, "trending_score": null, "card_data": "base_model: google/mobilenet_v2_1.0_224\ndatasets:\n- 0-ma/geometric-shapes\nlicense: other\nmetrics:\n- accuracy\npipeline_tag: image-classification", "widget_data": null, "model_index": null, "config": {"architectures": ["MobileNetV2ForImageClassification"], "model_type": "mobilenet_v2", "additional_chat_templates": {}}, "transformers_info": null, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='LICENSE', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='preprocessor_config.json', size=None, blob_id=None, lfs=None)"], "spaces": ["0-ma/geometric-shape-detector"], "safetensors": {"parameters": {"F32": 2265670}, "total": 2265670}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2025-04-11 09:14:58+00:00", "cardData": "base_model: google/mobilenet_v2_1.0_224\ndatasets:\n- 0-ma/geometric-shapes\nlicense: other\nmetrics:\n- accuracy\npipeline_tag: image-classification", "transformersInfo": null, "_id": "66e2ecb4420a3111a8cd58aa", "modelId": "0-ma/mobilenet-v2-geometric-shapes", "usedStorage": 18201424}
|
---
base_model: google/mobilenet_v2_1.0_224
datasets:
- 0-ma/geometric-shapes
license: other
metrics:
- accuracy
pipeline_tag: image-classification
---
# Model Card for Mobilenet Geometric Shapes Dataset
## Training Dataset
- **Repository:** https://huggingface.co/datasets/0-ma/geometric-shapes
## Base Model
- **Repository:** https://huggingface.co/google/mobilenet_v2_1.0_224
## Accuracy
- Accuracy on the 0-ma/geometric-shapes test split: 0.9914
## Loading and using the model
```
import numpy as np
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

labels = ["None", "Circle", "Triangle", "Square", "Pentagon", "Hexagon"]

# Two example images from the project repository.
images = [
    Image.open(requests.get("https://raw.githubusercontent.com/0-ma/geometric-shape-detector/main/input/exemple_circle.jpg", stream=True).raw),
    Image.open(requests.get("https://raw.githubusercontent.com/0-ma/geometric-shape-detector/main/input/exemple_pentagone.jpg", stream=True).raw),
]

feature_extractor = AutoImageProcessor.from_pretrained('0-ma/mobilenet-v2-geometric-shapes')
model = AutoModelForImageClassification.from_pretrained('0-ma/mobilenet-v2-geometric-shapes')

# Preprocess, run the classifier, and map the highest logit to a label name.
inputs = feature_extractor(images=images, return_tensors="pt")
logits = model(**inputs)['logits'].cpu().detach().numpy()
predictions = np.argmax(logits, axis=1)
predicted_labels = [labels[prediction] for prediction in predictions]
print(predicted_labels)
```
## Model generation
The model was created using the 'train_shape_detector.py' script from the project https://github.com/0-ma/geometric-shape-detector. No external code sources were used.
|
0-ma/resnet-geometric-shapes
|
{"id": "0-ma/resnet-geometric-shapes", "author": "0-ma", "sha": "6ab90c8765165804aaa0582a28c159ec6724af3f", "last_modified": "2025-04-11 09:16:00+00:00", "created_at": "2024-09-12 08:40:45+00:00", "private": false, "gated": false, "disabled": false, "downloads": 107, "downloads_all_time": null, "likes": 1, "library_name": null, "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["safetensors", "resnet", "image-classification", "dataset:0-ma/geometric-shapes", "base_model:microsoft/resnet-50", "base_model:finetune:microsoft/resnet-50", "license:apache-2.0", "region:us"], "pipeline_tag": "image-classification", "mask_token": null, "trending_score": null, "card_data": "base_model: microsoft/resnet-50\ndatasets:\n- 0-ma/geometric-shapes\nlicense: apache-2.0\nmetrics:\n- accuracy\npipeline_tag: image-classification", "widget_data": null, "model_index": null, "config": {"architectures": ["ResNetForImageClassification"], "model_type": "resnet", "additional_chat_templates": {}}, "transformers_info": null, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='preprocessor_config.json', size=None, blob_id=None, lfs=None)"], "spaces": ["0-ma/geometric-shape-detector"], "safetensors": {"parameters": {"F32": 23573446}, "total": 23573446}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2025-04-11 09:16:00+00:00", "cardData": "base_model: microsoft/resnet-50\ndatasets:\n- 0-ma/geometric-shapes\nlicense: apache-2.0\nmetrics:\n- accuracy\npipeline_tag: image-classification", "transformersInfo": null, "_id": "66e2a90d4552c7fe0349a046", "modelId": "0-ma/resnet-geometric-shapes", "usedStorage": 94335752}
|
---
base_model: microsoft/resnet-50
datasets:
- 0-ma/geometric-shapes
license: apache-2.0
metrics:
- accuracy
pipeline_tag: image-classification
---
# Model Card for Resnet Geometric Shapes Dataset
## Training Dataset
- **Repository:** https://huggingface.co/datasets/0-ma/geometric-shapes
## Base Model
- **Repository:** https://huggingface.co/microsoft/resnet-50
## Accuracy
- Accuracy on the 0-ma/geometric-shapes test split: 0.9469
## Loading and using the model
```
import numpy as np
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

labels = ["None", "Circle", "Triangle", "Square", "Pentagon", "Hexagon"]

# Two example images from the project repository.
images = [
    Image.open(requests.get("https://raw.githubusercontent.com/0-ma/geometric-shape-detector/main/input/exemple_circle.jpg", stream=True).raw),
    Image.open(requests.get("https://raw.githubusercontent.com/0-ma/geometric-shape-detector/main/input/exemple_pentagone.jpg", stream=True).raw),
]

feature_extractor = AutoImageProcessor.from_pretrained('0-ma/resnet-geometric-shapes')
model = AutoModelForImageClassification.from_pretrained('0-ma/resnet-geometric-shapes')

# Preprocess, run the classifier, and map the highest logit to a label name.
inputs = feature_extractor(images=images, return_tensors="pt")
logits = model(**inputs)['logits'].cpu().detach().numpy()
predictions = np.argmax(logits, axis=1)
predicted_labels = [labels[prediction] for prediction in predictions]
print(predicted_labels)
```
## Model generation
The model was created using the 'train_shape_detector.py' script from the project https://github.com/0-ma/geometric-shape-detector. No external code sources were used.
|
0-ma/swin-geometric-shapes-tiny
|
{"id": "0-ma/swin-geometric-shapes-tiny", "author": "0-ma", "sha": "d9959a6ba32352e47f604919c9580a1f10d25f89", "last_modified": "2025-04-11 09:17:23+00:00", "created_at": "2024-09-12 13:30:45+00:00", "private": false, "gated": false, "disabled": false, "downloads": 20, "downloads_all_time": null, "likes": 1, "library_name": null, "gguf": null, "inference": null, "inference_provider_mapping": null, "tags": ["safetensors", "swin", "image-classification", "dataset:0-ma/geometric-shapes", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "region:us"], "pipeline_tag": "image-classification", "mask_token": null, "trending_score": null, "card_data": "base_model: microsoft/swin-tiny-patch4-window7-224\ndatasets:\n- 0-ma/geometric-shapes\nlicense: apache-2.0\nmetrics:\n- accuracy\npipeline_tag: image-classification", "widget_data": null, "model_index": null, "config": {"architectures": ["SwinForImageClassification"], "model_type": "swin", "additional_chat_templates": {}}, "transformers_info": null, "siblings": ["RepoSibling(rfilename='.gitattributes', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='README.md', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='config.json', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='model.safetensors', size=None, blob_id=None, lfs=None)", "RepoSibling(rfilename='preprocessor_config.json', size=None, blob_id=None, lfs=None)"], "spaces": ["0-ma/geometric-shape-detector"], "safetensors": {"parameters": {"I64": 28812, "F32": 27523968}, "total": 27552780}, "security_repo_status": null, "xet_enabled": null, "lastModified": "2025-04-11 09:17:23+00:00", "cardData": "base_model: microsoft/swin-tiny-patch4-window7-224\ndatasets:\n- 0-ma/geometric-shapes\nlicense: apache-2.0\nmetrics:\n- accuracy\npipeline_tag: image-classification", "transformersInfo": null, "_id": "66e2ed05f29939aa66ac3273", "modelId": "0-ma/swin-geometric-shapes-tiny", "usedStorage": 110355136}
|
---
base_model: microsoft/swin-tiny-patch4-window7-224
datasets:
- 0-ma/geometric-shapes
license: apache-2.0
metrics:
- accuracy
pipeline_tag: image-classification
---
# Model Card for SWIN Geometric Shapes Dataset Tiny
## Training Dataset
- **Repository:** https://huggingface.co/datasets/0-ma/geometric-shapes
## Base Model
- **Repository:** https://huggingface.co/microsoft/swin-tiny-patch4-window7-224
## Accuracy
- Accuracy on the 0-ma/geometric-shapes test split: 0.9967
## Loading and using the model
```
import numpy as np
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

labels = ["None", "Circle", "Triangle", "Square", "Pentagon", "Hexagon"]

# Two example images from the project repository.
images = [
    Image.open(requests.get("https://raw.githubusercontent.com/0-ma/geometric-shape-detector/main/input/exemple_circle.jpg", stream=True).raw),
    Image.open(requests.get("https://raw.githubusercontent.com/0-ma/geometric-shape-detector/main/input/exemple_pentagone.jpg", stream=True).raw),
]

feature_extractor = AutoImageProcessor.from_pretrained('0-ma/swin-geometric-shapes-tiny')
model = AutoModelForImageClassification.from_pretrained('0-ma/swin-geometric-shapes-tiny')

# Preprocess, run the classifier, and map the highest logit to a label name.
inputs = feature_extractor(images=images, return_tensors="pt")
logits = model(**inputs)['logits'].cpu().detach().numpy()
predictions = np.argmax(logits, axis=1)
predicted_labels = [labels[prediction] for prediction in predictions]
print(predicted_labels)
```
## Model generation
The model was created using the 'train_shape_detector.py' script from the project https://github.com/0-ma/geometric-shape-detector. No external code sources were used.
|
End of preview.