modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
obranzell/imdb-sentiment-distilbert | obranzell | 2025-09-22T12:15:17Z | 0 | 0 | transformers | ["transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-09-22T10:53:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
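Pending details from the author, a minimal loading sketch, assuming this repo holds a standard 🤗 text-classification checkpoint as its tags (`distilbert`, `text-classification`) indicate:
```python
from transformers import pipeline

# Hypothetical usage sketch inferred from the repo tags; not provided by the author.
classifier = pipeline("text-classification", model="obranzell/imdb-sentiment-distilbert")
print(classifier("This movie was a delight from start to finish."))
```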
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND2-checkpoint-epoch-20 | MattBou00 | 2025-09-22T12:14:43Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | reinforcement-learning | 2025-09-22T12:13:45Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND2-checkpoint-epoch-20")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND2-checkpoint-epoch-20")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND2-checkpoint-epoch-20")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
maigonis/LLaDA-MoE-7B-A1B-Instruct-Q4_K_M-GGUF | maigonis | 2025-09-22T12:14:26Z | 0 | 0 | transformers | ["transformers", "gguf", "dllm", "diffusion", "llm", "text_generation", "llama-cpp", "gguf-my-repo", "base_model:inclusionAI/LLaDA-MoE-7B-A1B-Instruct", "base_model:quantized:inclusionAI/LLaDA-MoE-7B-A1B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-22T12:14:03Z |
---
license: apache-2.0
tags:
- dllm
- diffusion
- llm
- text_generation
- llama-cpp
- gguf-my-repo
library_name: transformers
base_model: inclusionAI/LLaDA-MoE-7B-A1B-Instruct
---
# maigonis/LLaDA-MoE-7B-A1B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`inclusionAI/LLaDA-MoE-7B-A1B-Instruct`](https://huggingface.co/inclusionAI/LLaDA-MoE-7B-A1B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/inclusionAI/LLaDA-MoE-7B-A1B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo maigonis/LLaDA-MoE-7B-A1B-Instruct-Q4_K_M-GGUF --hf-file llada-moe-7b-a1b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo maigonis/LLaDA-MoE-7B-A1B-Instruct-Q4_K_M-GGUF --hf-file llada-moe-7b-a1b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo maigonis/LLaDA-MoE-7B-A1B-Instruct-Q4_K_M-GGUF --hf-file llada-moe-7b-a1b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo maigonis/LLaDA-MoE-7B-A1B-Instruct-Q4_K_M-GGUF --hf-file llada-moe-7b-a1b-instruct-q4_k_m.gguf -c 2048
```
|
veeravel/paraphraser | veeravel | 2025-09-22T12:11:15Z | 0 | 0 | null | ["safetensors", "t5", "license:apache-2.0", "region:us"] | null | 2025-09-22T11:56:46Z |
---
license: apache-2.0
---
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758542967 | poolkiltzn | 2025-09-22T12:10:43Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vigilant alert tuna", "arxiv:2504.07091", "region:us"] | null | 2025-09-22T12:10:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the method introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
PrimeIntellect/Qwen3-30B-A3B-Base-Fast | PrimeIntellect | 2025-09-22T12:10:11Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3_moe", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-09-22T12:08:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
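Pending details from the author, a minimal loading sketch, assuming this repo holds a standard 🤗 causal LM as its tags (`qwen3_moe`, `text-generation`, `custom_code`) suggest; `trust_remote_code=True` follows from the `custom_code` tag, so enable it only if you trust the repository:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage sketch inferred from the repo tags; not provided by the author.
model_id = "PrimeIntellect/Qwen3-30B-A3B-Base-Fast"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```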
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Aeronicr/LPWAN | Aeronicr | 2025-09-22T12:08:47Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-09-22T12:08:33Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Aeronicr
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
abandonedmonk/TinyLlama-1.1B-NL2SH-Alpaca-v1 | abandonedmonk | 2025-09-22T12:04:48Z | 0 | 0 | peft | ["peft", "safetensors", "generated_from_trainer", "trl", "sft", "unsloth", "text-generation", "conversational", "en", "dataset:abandonedmonk/NL2SH-ALPACA", "base_model:unsloth/tinyllama-chat", "base_model:adapter:unsloth/tinyllama-chat", "license:mit", "region:us"] | text-generation | 2025-09-22T09:50:07Z |
---
base_model: unsloth/tinyllama-chat
library_name: peft
model_name: outputs
tags:
- generated_from_trainer
- trl
- sft
- unsloth
license: mit
datasets:
- abandonedmonk/NL2SH-ALPACA
language:
- en
new_version: TinyLlama/TinyLlama-1.1B-Chat-v1.0
pipeline_tag: text-generation
---
# Model Card for TinyLlama-1.1B-NL2SH-Alpaca
This model is a **fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat)**.
It has been fine-tuned on the **NL2SH-Alpaca dataset** for converting **natural language instructions into bash commands**.
The model outputs **one bash command per instruction**, even if multiple alternatives exist in the training dataset.
---
## Quick start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the model and tokenizer
model_name = "abandonedmonk/TinyLlama-1.1B-NL2SH-Alpaca"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to("cuda")
# Inference helper
def generate_command(model, tokenizer, instruction, inp=""):
    # Build the prompt in Alpaca style
    prompt = f"""Instruction: {instruction}
Input: {inp}
Response:
"""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=100,
        do_sample=False,  # greedy decoding, so no temperature is needed
        num_return_sequences=1,
    )
    generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Extract the first line (the command) after "Response:".
    # To keep all generated commands, return 'generated_text' instead of 'response'.
    response = generated_text.strip().split("Response:")[-1].strip().split("\n")[0]
    return response

# Example usage
instruction = "Rename all files with .andnav extension to .tile"
bash_cmd = generate_command(model, tokenizer, instruction)
print("Generated bash command:", bash_cmd)
```
---
## Training procedure
This model was fine-tuned using **Supervised Fine-Tuning (SFT)** on the NL2SH-Alpaca dataset, which contains natural language instructions paired with shell commands.
* **Base model:** `unsloth/tinyllama-chat`
* **Dataset:** `abandonedmonk/NL2SH-ALPACA`
* **Frameworks:** PEFT, Transformers, Unsloth
* **Number of epochs:** 3
* **Batch size / seq length:** 4
---
## Citations
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
---
## License
This model is released under the **MIT License**.
---
## Contributors / Maintainers
- **Anshuman Jena** – fine-tuner and maintainer of this model 🐸
## Notes
* This model is designed for **English instructions** only.
* Outputs **one command per instruction**; alternative commands can be manually handled if desired.
* For reproducibility, set the same `seed` (3407) during fine-tuning.
|
TAUR-dev/M-BASELINE_gtp4o_BOLT-sft | TAUR-dev | 2025-09-22T12:04:46Z | 0 | 0 | null | ["safetensors", "qwen2", "region:us"] | null | 2025-09-22T12:04:16Z |
# M-BASELINE_gtp4o_BOLT-sft
This model was created as part of the **BASELINE_gtp4o_BOLT** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: BASELINE_gtp4o_BOLT
## Training Configuration
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__D_SFT_C_BASELINE_gtp4o_BOLT_sft_data__sft_train", "template": "qwen", "cutoff_len": 16384, "max_samples": 1000000, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/BASELINE_gpt4o_BOLT/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__BASELINE_gtp4o_BOLT__v1", "sf_eval_before_training": false, "sf_wandb_project": "BASELINE_gtp4o_BOLT_sft", "sf_eval_steps": null, "run_name": "BASELINE_gtp4o_BOLT_sft"}
## Experiment Tracking
🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__BASELINE_gtp4o_BOLT__v1)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-BASELINE_gtp4o_BOLT-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-BASELINE_gtp4o_BOLT-sft")
```
|
aamijar/Llama-2-7b-hf-dora-r8-boolq-epochs1 | aamijar | 2025-09-22T12:02:21Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-22T12:02:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
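Pending details from the author, a heavily hedged sketch: the repo name suggests a rank-8 DoRA adapter for `meta-llama/Llama-2-7b-hf` trained on BoolQ, in which case it would typically be loaded with PEFT; both the base model and the adapter format here are assumptions, not confirmed by the card:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage sketch inferred from the repo name alone; not provided by the author.
base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "aamijar/Llama-2-7b-hf-dora-r8-boolq-epochs1")
```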
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nabbers1999/Llama-4-Scout-17B-16E-Instruct-abliterated-v2-bnb-4bit | Nabbers1999 | 2025-09-22T12:02:10Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama4_text", "text-generation", "quantized", "4bit", "bitsandbytes", "generated_from_original", "conversational", "base_model:jiangchengchengNLP/Llama-4-Scout-17B-16E-Instruct-abliterated-v2", "base_model:quantized:jiangchengchengNLP/Llama-4-Scout-17B-16E-Instruct-abliterated-v2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us"] | text-generation | 2025-09-22T11:50:37Z |
---
license: apache-2.0
base_model: jiangchengchengNLP/Llama-4-Scout-17B-16E-Instruct-abliterated-v2
tags:
- quantized
- 4bit
- bitsandbytes
- generated_from_original
library_name: transformers
---
# Nabbers1999/Llama-4-Scout-17B-16E-Instruct-abliterated-v2-bnb-4bit
This is a 4-bit quantized version of [jiangchengchengNLP/Llama-4-Scout-17B-16E-Instruct-abliterated-v2](https://huggingface.co/jiangchengchengNLP/Llama-4-Scout-17B-16E-Instruct-abliterated-v2) using BitsAndBytes.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "Nabbers1999/Llama-4-Scout-17B-16E-Instruct-abliterated-v2-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True
)
```
|
tarundachepally/EGL_granite_8b_linear_full-Q4_K_S-GGUF | tarundachepally | 2025-09-22T12:00:48Z | 0 | 0 | transformers | ["transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:tarundachepally/EGL_granite_8b_linear_full", "base_model:quantized:tarundachepally/EGL_granite_8b_linear_full", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-22T12:00:27Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: tarundachepally/EGL_granite_8b_linear_full
---
# tarundachepally/EGL_granite_8b_linear_full-Q4_K_S-GGUF
This model was converted to GGUF format from [`tarundachepally/EGL_granite_8b_linear_full`](https://huggingface.co/tarundachepally/EGL_granite_8b_linear_full) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/tarundachepally/EGL_granite_8b_linear_full) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo tarundachepally/EGL_granite_8b_linear_full-Q4_K_S-GGUF --hf-file egl_granite_8b_linear_full-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo tarundachepally/EGL_granite_8b_linear_full-Q4_K_S-GGUF --hf-file egl_granite_8b_linear_full-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo tarundachepally/EGL_granite_8b_linear_full-Q4_K_S-GGUF --hf-file egl_granite_8b_linear_full-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo tarundachepally/EGL_granite_8b_linear_full-Q4_K_S-GGUF --hf-file egl_granite_8b_linear_full-q4_k_s.gguf -c 2048
```
|
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND1-checkpoint-epoch-80 | MattBou00 | 2025-09-22T11:59:57Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | reinforcement-learning | 2025-09-22T11:59:02Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND1-checkpoint-epoch-80")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND1-checkpoint-epoch-80")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND1-checkpoint-epoch-80")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
0701phantom/all-t5-base-v1-contriever2fiqa | 0701phantom | 2025-09-22T11:57:28Z | 0 | 0 | transformers | ["transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"] | null | 2025-09-22T11:57:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
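Pending details from the author, a minimal loading sketch, assuming this repo holds a standard 🤗 T5 seq2seq checkpoint as its tags (`t5`, `text2text-generation`) indicate; the expected input format is unknown:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical usage sketch inferred from the repo tags; not provided by the author.
model_id = "0701phantom/all-t5-base-v1-contriever2fiqa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("example query", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```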
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND1-checkpoint-epoch-60 | MattBou00 | 2025-09-22T11:56:47Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | reinforcement-learning | 2025-09-22T11:55:51Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND1-checkpoint-epoch-60")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND1-checkpoint-epoch-60")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND1-checkpoint-epoch-60")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
AlekseyCalvin/LYRICAL_MT_ru2en_3a13_Yandex8b_emaBeta05to098 | AlekseyCalvin | 2025-09-22T11:56:22Z | 0 | 0 | null | ["safetensors", "llama", "ru", "en", "base_model:yandex/YandexGPT-5-Lite-8B-pretrain", "base_model:finetune:yandex/YandexGPT-5-Lite-8B-pretrain", "license:other", "region:us"] | null | 2025-09-22T11:47:36Z |
---
license: other
license_name: yandexgpt-5-lite-8b
license_link: LICENSE
language:
- ru
- en
base_model:
- yandex/YandexGPT-5-Lite-8B-pretrain
---
# YandexGPT-5-Lite-Instruct
The instruct version of the YandexGPT 5 Lite large language model, with 8B parameters and a 32k-token context length. A quantized version of the model in GGUF format is also published in a separate [repository](https://huggingface.co/yandex/YandexGPT-5-Lite-8B-instruct-GGUF).
It was trained on top of [YandexGPT 5 Lite Pretrain](https://huggingface.co/yandex/YandexGPT-5-Lite-8B-pretrain), without using the weights of any third-party models. The alignment of the Lite version matches that of YandexGPT 5 Pro and consists of SFT and RLHF stages (described in more detail in an [article](https://habr.com/ru/companies/yandex/articles/885218/) on Habr).
Ask questions in the Discussions tab.
## Benchmarks
On international benchmarks and their Russian-language adaptations, YandexGPT 5 Lite comes close to comparable models (Llama-3.1-8B-instruct and Qwen-2.5-7B-instruct) and surpasses them in a number of scenarios, including knowledge of Russian culture and facts.
<img src="https://habrastorage.org/r/w1560/getpro/habr/upload_files/6b5/eb4/9ea/6b5eb49ea757bc124c938717b21f1cf7.png" alt="Benchmark table" width="100%"/>
MMLU is 5-shot; all other benchmarks are 0-shot.
## How to use
The model can be run with HF Transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_NAME = "yandex/YandexGPT-5-Lite-8B-instruct"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
device_map="cuda",
torch_dtype="auto",
)
messages = [{"role": "user", "content": "Для чего нужна токенизация?"}]
input_ids = tokenizer.apply_chat_template(
messages, tokenize=True, return_tensors="pt"
).to("cuda")
outputs = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][input_ids.size(1) :], skip_special_tokens=True))
```
Or with vLLM:
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
MODEL_NAME = "yandex/YandexGPT-5-Lite-8B-instruct"
sampling_params = SamplingParams(
temperature=0.3,
top_p=0.9,
max_tokens=1024,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
llm = LLM(
MODEL_NAME,
tensor_parallel_size=1,
)
messages = [{"role": "user", "content": "В чем смысл жизни?"}]
input_ids = tokenizer.apply_chat_template(
messages, tokenize=True, add_generation_prompt=True
)[1:] # remove bos
text = tokenizer.decode(input_ids)
outputs = llm.generate(text, use_tqdm=False, sampling_params=sampling_params)
print(tokenizer.decode(outputs[0].outputs[0].token_ids, skip_special_tokens=True))
```
To run the model in llama.cpp and ollama, you can use our quantized model, published in the [YandexGPT-5-Lite-8B-instruct-GGUF](https://huggingface.co/yandex/YandexGPT-5-Lite-8B-instruct-GGUF) repository.
## Tokenization details
For tokenization to match exactly, we recommend using the original [sentencepiece](https://github.com/google/sentencepiece); the tokenizer file is in the `original_tokenizer` folder. In our infrastructure, each turn of a dialogue is tokenized separately.
Because of this, in particular, a space appears at the beginning of each turn. We also replace `\n` tokens with `[NL]`, which can be done with `text.replace("\n", "[NL]")` before tokenization.
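A short sketch of this preprocessing, assuming the `original_tokenizer` folder contains a standard sentencepiece model file (the exact file name below is an assumption):
```python
import sentencepiece as spm

# Hypothetical file name; use whatever model file ships in original_tokenizer/.
sp = spm.SentencePieceProcessor(model_file="original_tokenizer/tokenizer.model")

def tokenize_turn(text: str) -> list[int]:
    # Replace newlines with the [NL] token before tokenization, as described above.
    return sp.encode(text.replace("\n", "[NL]"))

# Each dialogue turn is tokenized separately.
ids = tokenize_turn("Привет!\nКак дела?")
```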
## Template details
We use a non-standard dialogue template: the model is trained to generate only a single reply after the sequence `Ассистент:[SEP]`, terminating it with the `</s>` token, while the dialogue in the prompt may be of any length.
As a result, in interactive mode the model may produce results that differ from calling it in generation mode on a fixed dialogue. We therefore recommend interactive mode only for getting acquainted with the model.
|
montenegrolu93/Qwen3-0.6B-Gensyn-Swarm-lumbering_gregarious_rabbit | montenegrolu93 | 2025-09-22T11:56:16Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am lumbering_gregarious_rabbit", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-22T06:55:59Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am lumbering_gregarious_rabbit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
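Pending details from the author, a minimal loading sketch, assuming this repo holds a standard 🤗 Qwen3 causal LM as its tags indicate:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage sketch inferred from the repo tags; not provided by the author.
model_id = "montenegrolu93/Qwen3-0.6B-Gensyn-Swarm-lumbering_gregarious_rabbit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```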
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OCHone/Qwen3-0.6B-Gensyn-Swarm-powerful_prehistoric_lizard | OCHone | 2025-09-22T11:54:39Z | 120 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am powerful_prehistoric_lizard", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-09T10:17:07Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am powerful_prehistoric_lizard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
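Pending details from the author, a minimal loading sketch, assuming this repo holds a standard 🤗 Qwen3 causal LM as its tags indicate:
```python
from transformers import pipeline

# Hypothetical usage sketch inferred from the repo tags; not provided by the author.
generator = pipeline("text-generation", model="OCHone/Qwen3-0.6B-Gensyn-Swarm-powerful_prehistoric_lizard")
print(generator("Hello!", max_new_tokens=32)[0]["generated_text"])
```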
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tarundachepally/EGL_granite_8b_linear_full | tarundachepally | 2025-09-22T11:54:16Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-22T11:44:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/DPO_ablations_qwen_ultrafeedback_pref_filter_only-GGUF
|
mradermacher
| 2025-09-22T11:54:05Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ketchup123/DPO_ablations_qwen_ultrafeedback_pref_filter_only",
"base_model:quantized:ketchup123/DPO_ablations_qwen_ultrafeedback_pref_filter_only",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T11:06:12Z |
---
base_model: ketchup123/DPO_ablations_qwen_ultrafeedback_pref_filter_only
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/ketchup123/DPO_ablations_qwen_ultrafeedback_pref_filter_only
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DPO_ablations_qwen_ultrafeedback_pref_filter_only-GGUF).***
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static quants, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DPO_ablations_qwen_ultrafeedback_pref_filter_only-GGUF/resolve/main/DPO_ablations_qwen_ultrafeedback_pref_filter_only.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/DPO_ablations_qwen_ultrafeedback_pref_filter_only-GGUF/resolve/main/DPO_ablations_qwen_ultrafeedback_pref_filter_only.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/DPO_ablations_qwen_ultrafeedback_pref_filter_only-GGUF/resolve/main/DPO_ablations_qwen_ultrafeedback_pref_filter_only.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DPO_ablations_qwen_ultrafeedback_pref_filter_only-GGUF/resolve/main/DPO_ablations_qwen_ultrafeedback_pref_filter_only.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/DPO_ablations_qwen_ultrafeedback_pref_filter_only-GGUF/resolve/main/DPO_ablations_qwen_ultrafeedback_pref_filter_only.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/DPO_ablations_qwen_ultrafeedback_pref_filter_only-GGUF/resolve/main/DPO_ablations_qwen_ultrafeedback_pref_filter_only.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DPO_ablations_qwen_ultrafeedback_pref_filter_only-GGUF/resolve/main/DPO_ablations_qwen_ultrafeedback_pref_filter_only.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DPO_ablations_qwen_ultrafeedback_pref_filter_only-GGUF/resolve/main/DPO_ablations_qwen_ultrafeedback_pref_filter_only.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/DPO_ablations_qwen_ultrafeedback_pref_filter_only-GGUF/resolve/main/DPO_ablations_qwen_ultrafeedback_pref_filter_only.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/DPO_ablations_qwen_ultrafeedback_pref_filter_only-GGUF/resolve/main/DPO_ablations_qwen_ultrafeedback_pref_filter_only.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DPO_ablations_qwen_ultrafeedback_pref_filter_only-GGUF/resolve/main/DPO_ablations_qwen_ultrafeedback_pref_filter_only.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DPO_ablations_qwen_ultrafeedback_pref_filter_only-GGUF/resolve/main/DPO_ablations_qwen_ultrafeedback_pref_filter_only.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
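If you just need a single quant, here is a minimal download-and-load sketch (assuming `huggingface_hub` and `llama-cpp-python` are installed; the Q4_K_M file name is taken from the table above):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from this repo
path = hf_hub_download(
    repo_id="mradermacher/DPO_ablations_qwen_ultrafeedback_pref_filter_only-GGUF",
    filename="DPO_ablations_qwen_ultrafeedback_pref_filter_only.Q4_K_M.gguf",
)

# Load it with llama.cpp and run a short completion
llm = Llama(model_path=path, n_ctx=2048)
out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```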
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND1-checkpoint-epoch-40
|
MattBou00
| 2025-09-22T11:53:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T11:52:37Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND1-checkpoint-epoch-40")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND1-checkpoint-epoch-40")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND1-checkpoint-epoch-40")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
kiarashQ/whisper-small-fa
|
kiarashQ
| 2025-09-22T11:52:38Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-22T08:41:26Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-fa
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1537
- Wer: 19.2460
## Model description
More information needed
## Intended uses & limitations
More information needed
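A minimal inference sketch (assuming the standard transformers ASR pipeline; the audio file path is illustrative):
```python
from transformers import pipeline

# Persian speech recognition with the fine-tuned checkpoint
asr = pipeline("automatic-speech-recognition", model="kiarashQ/whisper-small-fa")
print(asr("sample_fa.wav")["text"])  # "sample_fa.wav" is an illustrative audio path
```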
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.2216 | 0.1935 | 1000 | 0.2209 | 28.1653 |
| 0.1947 | 0.3871 | 2000 | 0.1808 | 24.9731 |
| 0.1465 | 0.5806 | 3000 | 0.1621 | 20.7613 |
| 0.129 | 0.7741 | 4000 | 0.1537 | 19.2460 |
### Framework versions
- Transformers 4.56.2
- Pytorch 2.8.0+cu128
- Datasets 4.1.1
- Tokenizers 0.22.1
|
afiyarah/embedding-ins-make
|
afiyarah
| 2025-09-22T11:49:40Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"gemma3_text",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:9431",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:google/embeddinggemma-300m",
"base_model:finetune:google/embeddinggemma-300m",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-22T11:49:20Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:9431
- loss:CosineSimilarityLoss
base_model: google/embeddinggemma-300m
widget:
- source_sentence: 'In the car insurance domain, represent this car make entity in
arabic for entity similarity matching: دي سوتو'
sentences:
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: هايتسو'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: ربيلكااوبرا'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: ديهاتسو'
- source_sentence: 'In the car insurance domain, represent this car make entity in
arabic for entity similarity matching: سي آر إس'
sentences:
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: داسيا'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: كاوساكي'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: كيوتي'
- source_sentence: 'In the car insurance domain, represent this car make entity in
arabic for entity similarity matching: آمي'
sentences:
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: كراز'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: سي ام سي دي'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: شميت'
- source_sentence: 'In the car insurance domain, represent this car make entity in
english for entity similarity matching: checker'
sentences:
- 'In the car insurance domain, represent this car make entity in english for entity
similarity matching: tiger'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: جاك'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: فوسو'
- source_sentence: 'In the car insurance domain, represent this car make entity in
arabic for entity similarity matching: جي إي سي'
sentences:
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: ايدزل'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: واكر'
- 'In the car insurance domain, represent this car make entity in arabic for entity
similarity matching: سالك'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on google/embeddinggemma-300m
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: insurance val
type: insurance-val
metrics:
- type: pearson_cosine
value: 0.8319304484319612
name: Pearson Cosine
- type: spearman_cosine
value: 0.6431780348935766
name: Spearman Cosine
---
# SentenceTransformer based on google/embeddinggemma-300m
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) <!-- at revision c5cfa06e5e282a820e85d57f7fb053207494f41d -->
- **Maximum Sequence Length:** 2048 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 2048, 'do_lower_case': False, 'architecture': 'Gemma3TextModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 3072, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Dense({'in_features': 3072, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(4): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("afiyarah/embedding-ins-make")
# Run inference
queries = [
"In the car insurance domain, represent this car make entity in arabic for entity similarity matching: \u062c\u064a \u0625\u064a \u0633\u064a",
]
documents = [
'In the car insurance domain, represent this car make entity in arabic for entity similarity matching: سالك',
'In the car insurance domain, represent this car make entity in arabic for entity similarity matching: ايدزل',
'In the car insurance domain, represent this car make entity in arabic for entity similarity matching: واكر',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.5667, 0.5606, 0.5776]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `insurance-val`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8319 |
| **spearman_cosine** | **0.6432** |
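To reproduce this style of evaluation, a hedged sketch (the validation pair and gold score below are illustrative placeholders, not the actual validation set):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("afiyarah/embedding-ins-make")

# Illustrative validation pairs with gold similarity scores in [0, 1]
sentences1 = ["In the car insurance domain, represent this car make entity in arabic for entity similarity matching: دي سوتو"]
sentences2 = ["In the car insurance domain, represent this car make entity in arabic for entity similarity matching: ديهاتسو"]
scores = [0.8]  # hypothetical gold score

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, scores, name="insurance-val")
print(evaluator(model))  # reports Pearson/Spearman cosine correlations
```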
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 9,431 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 21 tokens</li><li>mean: 23.43 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 22.97 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 0.1</li><li>mean: 0.28</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:-------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: إل تي إم جي</code> | <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: بوماج</code> | <code>0.19999999999999998</code> |
| <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: يو دي</code> | <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: لادا</code> | <code>0.19999999999999998</code> |
| <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: إنساين</code> | <code>In the car insurance domain, represent this car make entity in arabic for entity similarity matching: شانسي</code> | <code>0.4</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
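A minimal fine-tuning sketch under these settings (the pair texts and label are illustrative samples in the dataset's format; the legacy `fit` API is assumed for brevity):
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("google/embeddinggemma-300m")

# Pairs scored with a float similarity label, as in the dataset above
train_examples = [
    InputExample(
        texts=[
            "In the car insurance domain, represent this car make entity in arabic for entity similarity matching: يو دي",
            "In the car insurance domain, represent this car make entity in arabic for entity similarity matching: لادا",
        ],
        label=0.2,
    ),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=16)
loss = losses.CosineSimilarityLoss(model)  # MSE between cosine similarity and label

model.fit(train_objectives=[(loader, loss)], epochs=3)
```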
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | insurance-val_spearman_cosine |
|:------:|:----:|:-------------:|:-----------------------------:|
| 0.0983 | 58 | - | 0.4972 |
| 0.1966 | 116 | - | 0.5621 |
| 0.2949 | 174 | - | 0.5636 |
| 0.3932 | 232 | - | 0.5194 |
| 0.4915 | 290 | - | 0.6253 |
| 0.5898 | 348 | - | 0.6236 |
| 0.6881 | 406 | - | 0.5702 |
| 0.7864 | 464 | - | 0.6208 |
| 0.8475 | 500 | 0.0209 | - |
| 0.8847 | 522 | - | 0.6018 |
| 0.9831 | 580 | - | 0.5994 |
| 1.0 | 590 | - | 0.6048 |
| 1.0814 | 638 | - | 0.6002 |
| 1.1797 | 696 | - | 0.6083 |
| 1.2780 | 754 | - | 0.5940 |
| 1.3763 | 812 | - | 0.6044 |
| 1.4746 | 870 | - | 0.6248 |
| 1.5729 | 928 | - | 0.6432 |
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.56.1
- PyTorch: 2.8.0+cu126
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
its-zion-18/music-text-distilbert-predictor
|
its-zion-18
| 2025-09-22T11:42:19Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:samder03/2025-24679-text-dataset",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-20T19:04:44Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: music-text-distilbert-predictor
results: []
datasets:
- samder03/2025-24679-text-dataset
---
# DistilBERT-based Music Era Classifier
This repository contains a fine-tuned text classification model based on distilbert-base-uncased. The model classifies short text descriptions of classical music into one of four historical eras, labeled 0, 1, 2, and 3.
# Model Architecture & Training
The model was trained using the Hugging Face Trainer API. It utilizes a distilbert-base-uncased pre-trained model with a classification head on top; a minimal training sketch follows the list below.
- Tokenizer: AutoTokenizer.from_pretrained("distilbert-base-uncased")
- Model: AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
- Training Arguments: Learning Rate 2e-05
- Epochs: 5
- Batch Size: 8
- Evaluation Strategy: Per epoch
- Metric: accuracy
- Optimizer: AdamW
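Putting the pieces above together, a hedged training sketch (the two-example dataset is a placeholder; the real card trains on samder03/2025-24679-text-dataset):
```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=4)

# Tiny illustrative dataset; labels are the numeric era ids 0-3
raw = Dataset.from_dict({
    "text": ["ornate polyphony for harpsichord", "sweeping programmatic symphonies"],
    "label": [0, 2],
})
dataset = raw.map(lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length"))

args = TrainingArguments(
    output_dir="music-text-distilbert-predictor",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=5,
    eval_strategy="epoch",  # evaluate once per epoch
)

trainer = Trainer(model=model, args=args, train_dataset=dataset, eval_dataset=dataset)
trainer.train()
```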
# music-text-distilbert-predictor
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the samder03/2025-24679-text-dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0495
- Accuracy: 1.0
- F1: 1.0
- Precision: 1.0
- Recall: 1.0
## Limitations
This model's primary limitations are:
- Numerical labels: the model outputs a numerical label (0, 1, 2, or 3); an external lookup table is required to map these numbers to their corresponding musical era names.
- Language & casing: because the model is based on distilbert-base-uncased, it handles English-language text only and does not distinguish uppercase from lowercase. It will not work for other languages.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6387 | 1.0 | 80 | 0.5111 | 0.9563 | 0.9562 | 0.9574 | 0.9563 |
| 0.0833 | 2.0 | 160 | 0.1052 | 0.9812 | 0.9812 | 0.9814 | 0.9812 |
| 0.0221 | 3.0 | 240 | 0.0585 | 0.9812 | 0.9812 | 0.9814 | 0.9812 |
| 0.0122 | 4.0 | 320 | 0.0629 | 0.9812 | 0.9812 | 0.9814 | 0.9812 |
| 0.011 | 5.0 | 400 | 0.0614 | 0.9812 | 0.9812 | 0.9814 | 0.9812 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
# Potential Errors
There could be a problem with data leakage, since the evaluation accuracy is 100%. Because the model was trained on the augmented data, which is just a derivative of the original data, the original dataset is not a true holdout set: the model is essentially being tested on data it has already seen and, in some cases, memorized.
|
csukuangfj/vits-piper-ar_JO-SA_miro-high-int8
|
csukuangfj
| 2025-09-22T11:36:35Z | 0 | 0 | null |
[
"onnx",
"region:us"
] | null | 2025-09-22T11:05:14Z |
<span>TigreGotico/tts-train-synthetic-miro_ar-SA</span>
<div class="border-br-gray-200 absolute bottom-0.5 right-0.5 h-1 w-1 border-[3px] border-l-transparent border-t-transparent border-b-gray-200 border-r-gray-200 group-hover:border-b-gray-400 group-hover:border-r-gray-400 dark:border-b-gray-700 dark:border-r-gray-700 group-hover:dark:border-b-gray-400 group-hover:dark:border-r-gray-400"></div></div></div>
</button>
</div><a class="mb-1 mr-1 md:mb-1.5 md:mr-1.5 rounded-lg" href="/models?language=ar"><div class="tag tag-white ">
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="text-green-600/80" preserveAspectRatio="xMidYMid meet" width="1em" height="1em" viewBox="0 0 10 10"><path fill-rule="evenodd" clip-rule="evenodd" d="M0.625 5C0.625 6.16032 1.08594 7.27312 1.90641 8.09359C2.72688 8.91406 3.83968 9.375 5 9.375C6.16032 9.375 7.27312 8.91406 8.09359 8.09359C8.91406 7.27312 9.375 6.16032 9.375 5C9.375 3.83968 8.91406 2.72688 8.09359 1.90641C7.27312 1.08594 6.16032 0.625 5 0.625C3.83968 0.625 2.72688 1.08594 1.90641 1.90641C1.08594 2.72688 0.625 3.83968 0.625 5ZM7.64365 7.48027C7.61734 7.50832 7.59054 7.53598 7.56326 7.56326C7.13828 7.98824 6.61864 8.2968 6.0539 8.46842C6.29802 8.11949 6.49498 7.64804 6.63475 7.09483C7.00845 7.18834 7.35014 7.3187 7.64365 7.48027ZM8.10076 6.87776C8.37677 6.42196 8.55005 5.90894 8.60556 5.37499H6.86808C6.85542 5.71597 6.82551 6.04557 6.77971 6.35841C7.25309 6.47355 7.68808 6.6414 8.062 6.85549C8.07497 6.86283 8.08789 6.87025 8.10076 6.87776ZM6.03795 6.22536C6.07708 5.95737 6.1044 5.67232 6.11705 5.37499H3.88295C3.89666 5.69742 3.92764 6.00542 3.9722 6.29287C4.37075 6.21726 4.79213 6.17749 5.224 6.17749C5.50054 6.17749 5.77294 6.19376 6.03795 6.22536ZM4.1261 7.02673C4.34894 7.84835 4.68681 8.375 5 8.375C5.32122 8.375 5.66839 7.82101 5.8908 6.963C5.67389 6.93928 5.45082 6.92699 5.224 6.92699C4.84316 6.92699 4.47332 6.96176 4.1261 7.02673ZM3.39783 7.21853C3.53498 7.71842 3.72038 8.14579 3.9461 8.46842C3.42141 8.30898 2.93566 8.03132 2.52857 7.65192C2.77253 7.48017 3.06711 7.33382 3.39783 7.21853ZM3.23916 6.48077C3.18263 6.13193 3.14625 5.76074 3.13192 5.37499H1.39444C1.4585 5.99112 1.67936 6.57938 2.03393 7.08403C2.3706 6.83531 2.78055 6.63162 3.23916 6.48077ZM1.39444 4.62499H3.13192C3.14615 4.24204 3.18211 3.87344 3.23794 3.52681C2.77814 3.37545 2.36731 3.17096 2.03024 2.92123C1.67783 3.42469 1.45828 4.011 1.39444 4.62499ZM2.5237 2.35262C2.76812 2.52552 3.06373 2.67281 3.39584 2.78875C3.53318 2.28573 3.71928 1.85578 3.9461 1.53158C3.41932 1.69166 2.93178 1.97089 2.5237 2.35262ZM3.97101 3.71489C3.92709 4.00012 3.89654 4.30547 3.88295 4.62499H6.11705C6.10453 4.33057 6.07761 4.04818 6.03909 3.78248C5.77372 3.81417 5.50093 3.83049 5.224 3.83049C4.79169 3.83049 4.3699 3.79065 3.97101 3.71489ZM5.8928 3.04476C5.67527 3.06863 5.45151 3.08099 5.224 3.08099C4.84241 3.08099 4.47186 3.04609 4.12405 2.98086C4.34686 2.1549 4.68584 1.625 5 1.625C5.32218 1.625 5.67048 2.18233 5.8928 3.04476ZM6.78083 3.6493C6.826 3.95984 6.85552 4.28682 6.86808 4.62499H8.60556C8.55029 4.09337 8.37827 3.58251 8.10436 3.1282C8.0903 3.1364 8.07618 3.14449 8.062 3.15249C7.68838 3.36641 7.25378 3.53417 6.78083 3.6493ZM7.64858 2.52499C7.35446 2.68754 7.0117 2.81868 6.63664 2.91268C6.49676 2.35623 6.29913 1.88209 6.0539 1.53158C6.61864 1.7032 7.13828 2.01176 7.56326 2.43674C7.59224 2.46572 7.62068 2.49514 7.64858 2.52499Z" fill="currentColor"></path></svg>
<span>Arabic</span>
</div></a></div>
<div class="flex flex-col-reverse lg:flex-row lg:items-center lg:justify-between"><div class="-mb-px flex h-12 items-center overflow-x-auto overflow-y-hidden ">
<a class="tab-alternate" href="/OpenVoiceOS/phoonnx_ar-SA_miro_espeak"><svg class="mr-1.5 text-gray-400 flex-none" style="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg>
Model card
</a><a class="tab-alternate active" href="/OpenVoiceOS/phoonnx_ar-SA_miro_espeak/tree/main"><svg class="mr-1.5 text-gray-400 flex-none" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-tertiary" d="M21 19h-8a1 1 0 0 1 0-2h8a1 1 0 0 1 0 2zm0-4h-8a1 1 0 0 1 0-2h8a1 1 0 0 1 0 2zm0-8h-8a1 1 0 0 1 0-2h8a1 1 0 0 1 0 2zm0 4h-8a1 1 0 0 1 0-2h8a1 1 0 0 1 0 2z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M9 19a1 1 0 0 1-1-1V6a1 1 0 0 1 2 0v12a1 1 0 0 1-1 1zm-6-4.333a1 1 0 0 1-.64-1.769L3.438 12l-1.078-.898a1 1 0 0 1 1.28-1.538l2 1.667a1 1 0 0 1 0 1.538l-2 1.667a.999.999 0 0 1-.64.231z" fill="currentColor"></path></svg>
<span class="xl:hidden">Files</span>
<span class="hidden xl:inline">Files and versions</span>
<span class="inline-block "><span class="contents"><div slot="anchor" class="shadow-purple-500/10 ml-2 inline-flex -translate-y-px items-center gap-0.5 rounded-md border bg-white px-1 py-0.5 align-middle text-xs font-semibold leading-none text-gray-800 shadow-sm dark:border-gray-700 dark:bg-gradient-to-b dark:from-gray-925 dark:to-gray-925 dark:text-gray-300"><svg class="size-3 " xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 12 12"><path fill-rule="evenodd" clip-rule="evenodd" d="M6.14 3.64 5.1 4.92 2.98 2.28h2.06l1.1 1.36Zm0 4.72-1.1 1.36H2.98l2.13-2.64 1.03 1.28Zm4.9 1.36L8.03 6l3-3.72H8.96L5.97 6l3 3.72h2.06Z" fill="#7875FF"></path><path d="M4.24 6 2.6 8.03.97 6 2.6 3.97 4.24 6Z" fill="#FF7F41" opacity="1"></path></svg>
<span>xet</span>
</div></span>
</span>
</a><a class="tab-alternate" href="/OpenVoiceOS/phoonnx_ar-SA_miro_espeak/discussions"><svg class="mr-1.5 text-gray-400 flex-none" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M20.6081 3C21.7684 3 22.8053 3.49196 23.5284 4.38415C23.9756 4.93678 24.4428 5.82749 24.4808 7.16133C24.9674 7.01707 25.4353 6.93643 25.8725 6.93643C26.9833 6.93643 27.9865 7.37587 28.696 8.17411C29.6075 9.19872 30.0124 10.4579 29.8361 11.7177C29.7523 12.3177 29.5581 12.8555 29.2678 13.3534C29.8798 13.8646 30.3306 14.5763 30.5485 15.4322C30.719 16.1032 30.8939 17.5006 29.9808 18.9403C30.0389 19.0342 30.0934 19.1319 30.1442 19.2318C30.6932 20.3074 30.7283 21.5229 30.2439 22.6548C29.5093 24.3704 27.6841 25.7219 24.1397 27.1727C21.9347 28.0753 19.9174 28.6523 19.8994 28.6575C16.9842 29.4379 14.3477 29.8345 12.0653 29.8345C7.87017 29.8345 4.8668 28.508 3.13831 25.8921C0.356375 21.6797 0.754104 17.8269 4.35369 14.1131C6.34591 12.058 7.67023 9.02782 7.94613 8.36275C8.50224 6.39343 9.97271 4.20438 12.4172 4.20438H12.4179C12.6236 4.20438 12.8314 4.2214 13.0364 4.25468C14.107 4.42854 15.0428 5.06476 15.7115 6.02205C16.4331 5.09583 17.134 4.359 17.7682 3.94323C18.7242 3.31737 19.6794 3 20.6081 3ZM20.6081 5.95917C20.2427 5.95917 19.7963 6.1197 19.3039 6.44225C17.7754 7.44319 14.8258 12.6772 13.7458 14.7131C13.3839 15.3952 12.7655 15.6837 12.2086 15.6837C11.1036 15.6837 10.2408 14.5497 12.1076 13.1085C14.9146 10.9402 13.9299 7.39584 12.5898 7.1776C12.5311 7.16799 12.4731 7.16355 12.4172 7.16355C11.1989 7.16355 10.6615 9.33114 10.6615 9.33114C10.6615 9.33114 9.0863 13.4148 6.38031 16.206C3.67434 18.998 3.5346 21.2388 5.50675 24.2246C6.85185 26.2606 9.42666 26.8753 12.0653 26.8753C14.8021 26.8753 17.6077 26.2139 19.1799 25.793C19.2574 25.7723 28.8193 22.984 27.6081 20.6107C27.4046 20.212 27.0693 20.0522 26.6471 20.0522C24.9416 20.0522 21.8393 22.6726 20.5057 22.6726C20.2076 22.6726 19.9976 22.5416 19.9116 22.222C19.3433 20.1173 28.552 19.2325 27.7758 16.1839C27.639 15.6445 27.2677 15.4256 26.746 15.4263C24.4923 15.4263 19.4358 19.5181 18.3759 19.5181C18.2949 19.5181 18.2368 19.4937 18.2053 19.4419C17.6743 18.557 17.9653 17.9394 21.7082 15.6009C25.4511 13.2617 28.0783 11.8545 26.5841 10.1752C26.4121 9.98141 26.1684 9.8956 25.8725 9.8956C23.6001 9.89634 18.2311 14.9403 18.2311 14.9403C18.2311 14.9403 16.7821 16.496 15.9057 16.496C15.7043 16.496 15.533 16.4139 15.4169 16.2112C14.7956 15.1296 21.1879 10.1286 21.5484 8.06535C21.7928 6.66715 21.3771 5.95917 20.6081 5.95917Z" fill="#FF9D00"></path><path d="M5.50686 24.2246C3.53472 21.2387 3.67446 18.9979 6.38043 16.206C9.08641 13.4147 10.6615 9.33111 10.6615 9.33111C10.6615 9.33111 11.2499 6.95933 12.59 7.17757C13.93 7.39581 14.9139 10.9401 12.1069 13.1084C9.29997 15.276 12.6659 16.7489 13.7459 14.713C14.8258 12.6772 17.7747 7.44316 19.304 6.44221C20.8326 5.44128 21.9089 6.00204 21.5484 8.06532C21.188 10.1286 14.795 15.1295 15.4171 16.2118C16.0391 17.2934 18.2312 14.9402 18.2312 14.9402C18.2312 14.9402 25.0907 8.49588 26.5842 10.1752C28.0776 11.8545 25.4512 13.2616 21.7082 15.6008C17.9646 17.9393 17.6744 18.557 18.2054 19.4418C18.7372 20.3266 26.9998 13.1351 27.7759 16.1838C28.5513 19.2324 19.3434 20.1173 19.9117 22.2219C20.48 24.3274 26.3979 18.2382 27.6082 20.6107C28.8193 22.9839 19.2574 25.7722 19.18 25.7929C16.0914 26.62 8.24723 28.3726 5.50686 24.2246Z" fill="#FFD21E"></path></svg>
Community
</a></div>
<div class="relative mb-1.5 flex flex-wrap gap-1.5 sm:flex-nowrap lg:mb-0"><div class="order-last sm:order-first"><div class="relative ">
<button class="btn px-1.5 py-1.5 " type="button">
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="p-0.5" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><circle cx="16" cy="7" r="3" fill="currentColor"></circle><circle cx="16" cy="16" r="3" fill="currentColor"></circle><circle cx="16" cy="25" r="3" fill="currentColor"></circle></svg>
</button>
</div></div>
</div>
</div></div></header>
</div>
<div class="container relative flex flex-col md:grid md:space-y-0 w-full md:grid-cols-12 space-y-4 md:gap-6 mb-16"><section class="pt-8 border-gray-100 col-span-full"><div class="SVELTE_HYDRATER contents" data-target="ViewerHeader" data-props="{"context":{"repo":{"name":"OpenVoiceOS/phoonnx_ar-SA_miro_espeak","type":"model"},"rev":"main","path":"README.md","subpaths":[{"dir":"README.md"}]},"refs":{"branches":[{"name":"main","ref":"refs/heads/main","targetCommit":"43d48b71a1a21c00991ce98e70e1e731c0b3b6b2"}],"tags":[],"converts":[]},"view":"blob","isMac":false}"><header class="flex flex-wrap items-center justify-start pb-2 md:justify-end lg:flex-nowrap"><div class="grow max-md:flex max-md:w-full max-md:items-start max-md:justify-between"><div class="relative mr-4 flex min-w-0 basis-auto flex-wrap items-center gap-x-3 md:grow md:basis-full lg:basis-auto lg:flex-nowrap"><div class="relative mb-2">
<button class="text-sm md:text-base btn w-full cursor-pointer text-sm" type="button">
<svg class="mr-1.5 text-gray-700 dark:text-gray-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24" style="transform: rotate(360deg);"><path d="M13 14c-3.36 0-4.46 1.35-4.82 2.24C9.25 16.7 10 17.76 10 19a3 3 0 0 1-3 3a3 3 0 0 1-3-3c0-1.31.83-2.42 2-2.83V7.83A2.99 2.99 0 0 1 4 5a3 3 0 0 1 3-3a3 3 0 0 1 3 3c0 1.31-.83 2.42-2 2.83v5.29c.88-.65 2.16-1.12 4-1.12c2.67 0 3.56-1.34 3.85-2.23A3.006 3.006 0 0 1 14 7a3 3 0 0 1 3-3a3 3 0 0 1 3 3c0 1.34-.88 2.5-2.09 2.86C17.65 11.29 16.68 14 13 14m-6 4a1 1 0 0 0-1 1a1 1 0 0 0 1 1a1 1 0 0 0 1-1a1 1 0 0 0-1-1M7 4a1 1 0 0 0-1 1a1 1 0 0 0 1 1a1 1 0 0 0 1-1a1 1 0 0 0-1-1m10 2a1 1 0 0 0-1 1a1 1 0 0 0 1 1a1 1 0 0 0 1-1a1 1 0 0 0-1-1z" fill="currentColor"></path></svg>
main
<svg class="-mr-1 text-gray-500 " xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path d="M16.293 9.293L12 13.586L7.707 9.293l-1.414 1.414L12 16.414l5.707-5.707z" fill="currentColor"></path></svg></button>
</div>
<div class="relative mb-2 flex flex-wrap items-center"><a class="truncate text-gray-800 hover:underline" href="/OpenVoiceOS/phoonnx_ar-SA_miro_espeak/tree/main">phoonnx_ar-SA_miro_espeak</a>
<span class="mx-1 text-gray-300">/</span>
<span class="dark:text-gray-300">README.md</span>
<button class="text-xs ml-2 focus:outline-hidden inline-flex cursor-pointer items-center text-sm mx-0.5 text-gray-600 " title="Copy path" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg>
</button></div>
</div>
</div>
</header></div>
<div class="SVELTE_HYDRATER contents" data-target="LastCommit" data-props="{"commitLast":{"date":"2025-09-19T14:13:09.000Z","verified":"verified","subject":"Create README.md","authors":[{"_id":"6417f46bfff753e7c158e23f","avatar":"/avatars/7b933962e68daa1fbafc58114b8a1c29.svg","isHf":false,"user":"Jarbas"}],"commit":{"id":"fae652ded26d7dd720075f8e598c1cc882214dce","parentIds":["a910969c37b980c3899d0ba7409a9a117f284160"]},"title":"Create README.md"},"repo":{"name":"OpenVoiceOS/phoonnx_ar-SA_miro_espeak","type":"model"}}"><div class="from-gray-100-to-white bg-linear-to-t flex flex-wrap items-baseline gap-y-1 rounded-t-lg border border-b-0 px-3 py-2 dark:border-gray-800"><img class="mr-2.5 mt-0.5 h-4 w-4 self-center rounded-full" alt="Jarbas's picture" src="/avatars/7b933962e68daa1fbafc58114b8a1c29.svg">
<div class="mr-4 flex flex-none items-center truncate"><a class="hover:underline" href="/Jarbas">Jarbas
</a>
</div>
<div class="mr-4 truncate font-mono text-xs text-gray-500 hover:prose-a:underline sm:text-sm"><!-- HTML_TAG_START -->Create README.md<!-- HTML_TAG_END --></div>
<a class="rounded-sm border bg-gray-50 px-1.5 text-sm hover:underline dark:border-gray-800 dark:bg-gray-900" href="/OpenVoiceOS/phoonnx_ar-SA_miro_espeak/commit/fae652ded26d7dd720075f8e598c1cc882214dce">fae652d</a>
<span class="mx-2 text-green-500 dark:text-green-600 px-1.5 border-green-100 dark:border-green-800 rounded-full border text-xs uppercase" title="This commit is signed and the signature is verified">verified</span>
<time class="ml-auto hidden flex-none truncate pl-2 text-gray-500 dark:text-gray-400 lg:block" datetime="2025-09-19T14:13:09" title="Fri, 19 Sep 2025 14:13:09 GMT">3 days ago</time></div></div>
<div class="relative flex flex-wrap items-center border px-3 py-1.5 text-sm text-gray-800 dark:border-gray-800 dark:bg-gray-900 "><div class="flex items-center gap-3 text-sm font-medium"><a class="rounded-md px-1.5 capitalize bg-gray-200 dark:bg-gray-800" href="/OpenVoiceOS/phoonnx_ar-SA_miro_espeak/blob/main/README.md">preview</a>
<a class="rounded-md px-1.5 capitalize " href="/OpenVoiceOS/phoonnx_ar-SA_miro_espeak/blob/main/README.md?code=true">code</a></div>
<div class="mx-4 text-gray-200">|</div>
<a class="my-1 mr-4 flex items-center hover:underline " href="/OpenVoiceOS/phoonnx_ar-SA_miro_espeak/raw/main/README.md"><svg class="mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32" style="transform: rotate(360deg);"><path d="M31 16l-7 7l-1.41-1.41L28.17 16l-5.58-5.59L24 9l7 7z" fill="currentColor"></path><path d="M1 16l7-7l1.41 1.41L3.83 16l5.58 5.59L8 23l-7-7z" fill="currentColor"></path><path d="M12.419 25.484L17.639 6l1.932.518L14.35 26z" fill="currentColor"></path></svg>
raw
</a><div class="SVELTE_HYDRATER contents" data-target="CopyButton" data-props="{"value":"https://huggingface.co/OpenVoiceOS/phoonnx_ar-SA_miro_espeak/resolve/main/README.md","style":"blank","label":"Copy download link","classNames":"my-1 mr-4 flex items-center no-underline hover:underline"}"><button class="my-1 mr-4 flex items-center no-underline hover:underline " title="Copy download link" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg>
<span class="ml-1.5 ">Copy download link</span></button></div><a class="my-1 mr-4 flex items-center hover:underline " href="/OpenVoiceOS/phoonnx_ar-SA_miro_espeak/commits/main/README.md"><svg class="mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32" style="transform: rotate(360deg);"><path d="M16 4C9.383 4 4 9.383 4 16s5.383 12 12 12s12-5.383 12-12S22.617 4 16 4zm0 2c5.535 0 10 4.465 10 10s-4.465 10-10 10S6 21.535 6 16S10.465 6 16 6zm-1 2v9h7v-2h-5V8z" fill="currentColor"></path></svg>
history
</a><a class="my-1 mr-4 flex items-center hover:underline " href="/OpenVoiceOS/phoonnx_ar-SA_miro_espeak/blame/main/README.md"><svg class="mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32" style="transform: rotate(360deg);"><path d="M16 2a14 14 0 1 0 14 14A14 14 0 0 0 16 2zm0 26a12 12 0 1 1 12-12a12 12 0 0 1-12 12z" fill="currentColor"></path><path d="M11.5 11a2.5 2.5 0 1 0 2.5 2.5a2.48 2.48 0 0 0-2.5-2.5z" fill="currentColor"></path><path d="M20.5 11a2.5 2.5 0 1 0 2.5 2.5a2.48 2.48 0 0 0-2.5-2.5z" fill="currentColor"></path></svg>
blame
</a><a class="my-1 mr-4 flex items-center hover:underline text-green-600 dark:text-green-500" href="/OpenVoiceOS/phoonnx_ar-SA_miro_espeak/edit/main/README.md"><svg class="mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M2 26h28v2H2z" fill="currentColor"></path><path d="M25.4 9c.8-.8.8-2 0-2.8l-3.6-3.6c-.8-.8-2-.8-2.8 0l-15 15V24h6.4l15-15zm-5-5L24 7.6l-3 3L17.4 7l3-3zM6 22v-3.6l10-10l3.6 3.6l-10 10H6z" fill="currentColor"></path></svg>
contribute
</a><a class="my-1 mr-4 flex items-center hover:underline " href="/OpenVoiceOS/phoonnx_ar-SA_miro_espeak/delete/main/README.md"><svg class="mr-1.5" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M12 12h2v12h-2z" fill="currentColor"></path><path d="M18 12h2v12h-2z" fill="currentColor"></path><path d="M4 6v2h2v20a2 2 0 0 0 2 2h16a2 2 0 0 0 2-2V8h2V6zm4 22V8h16v20z" fill="currentColor"></path><path d="M12 2h8v2h-8z" fill="currentColor"></path></svg>
delete
</a>
<div class="mr-4 flex items-center"><div class="SVELTE_HYDRATER contents" data-target="ScanStatusBadge" data-props="{"classNames":"mr-2","scanStatus":{"status":"safe","protectAiScan":{"status":"safe","message":"This file has no security findings.","reportLink":"https://protectai.com/insights/models/OpenVoiceOS/phoonnx_ar-SA_miro_espeak/43d48b71a1a21c00991ce98e70e1e731c0b3b6b2/files?blob-id=f382bd8d40ff95c8f1b5e17e6e315aba1da33873&utm_source=huggingface"},"avScan":{"status":"safe","version":"1.4.3/27766"},"pickleImportScan":{"status":"unscanned","pickleImports":[],"version":"0.0.0"},"jFrogScan":{"status":"unscanned","message":"Not a machine-learning model","reportLink":"","reportLabel":""}},"repo":{"name":"OpenVoiceOS/phoonnx_ar-SA_miro_espeak","type":"model"},"revision":"main","filePath":"README.md","openByDefault":false}"><div class="sm:relative mr-2"><button class="flex h-[1.125rem] select-none items-center gap-0.5 rounded border pl-0.5 pr-0.5 text-xs leading-tight text-gray-400 hover:cursor-pointer text-gray-400 hover:border-gray-200 hover:bg-gray-50 hover:text-gray-500 dark:border-gray-800 dark:hover:bg-gray-800 dark:hover:text-gray-200 "><svg class="flex-none" width="1em" height="1em" viewBox="0 0 22 28" fill="none" xmlns="http://www.w3.org/2000/svg"><path fill-rule="evenodd" clip-rule="evenodd" d="M15.3634 10.3639C15.8486 10.8491 15.8486 11.6357 15.3634 12.1209L10.9292 16.5551C10.6058 16.8785 10.0814 16.8785 9.7579 16.5551L7.03051 13.8277C6.54532 13.3425 6.54532 12.5558 7.03051 12.0707C7.51569 11.5855 8.30234 11.5855 8.78752 12.0707L9.7579 13.041C10.0814 13.3645 10.6058 13.3645 10.9292 13.041L13.6064 10.3639C14.0916 9.8787 14.8782 9.8787 15.3634 10.3639Z" fill="currentColor"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M10.6666 27.12C4.93329 25.28 0 19.2267 0 12.7867V6.52001C0 5.40001 0.693334 4.41334 1.73333 4.01334L9.73333 1.01334C10.3333 0.786673 11 0.786673 11.6 1.02667L19.6 4.02667C20.1083 4.21658 20.5465 4.55701 20.8562 5.00252C21.1659 5.44803 21.3324 5.97742 21.3333 6.52001V12.7867C21.3333 19.24 16.4 25.28 10.6666 27.12Z" fill="currentColor" fill-opacity="0.22"></path><path d="M10.0845 1.94967L10.0867 1.94881C10.4587 1.8083 10.8666 1.81036 11.2286 1.95515L11.2387 1.95919L11.2489 1.963L19.2489 4.963L19.25 4.96342C19.5677 5.08211 19.8416 5.29488 20.0351 5.57333C20.2285 5.85151 20.3326 6.18203 20.3333 6.52082C20.3333 6.52113 20.3333 6.52144 20.3333 6.52176L20.3333 12.7867C20.3333 18.6535 15.8922 24.2319 10.6666 26.0652C5.44153 24.2316 1 18.6409 1 12.7867V6.52001C1 5.82357 1.42893 5.20343 2.08883 4.94803L10.0845 1.94967Z" stroke="currentColor" stroke-opacity="0.30" stroke-width="2"></path></svg>
<span class="mr-0.5 max-sm:hidden">Safe</span></button>
</div></div>
</div>
<div class="flex items-center gap-x-3 dark:text-gray-300 sm:ml-auto">
106 Bytes</div></div>
<div class="relative min-h-[100px] overflow-hidden rounded-b-lg border border-t-0 leading-tight dark:border-gray-800 dark:bg-gray-925">
<div class="py-4 px-4 sm:px-6 prose hf-sanitized hf-sanitized-KucKjYYEK4j6DBt0YgMkG copiable-code-container"><div class="not-prose bg-linear-to-t -mx-6 -mt-4 mb-8 max-h-[300px] min-w-full overflow-auto border-b from-gray-50 px-6 pb-5 pt-4 font-mono text-xs transition-all dark:from-gray-900 dark:to-gray-950"><div class="mb-2 inline-block rounded-lg border px-2 py-1 font-mono text-xs leading-none">metadata</div>
<pre><!-- HTML_TAG_START --><span class="hljs-attr">datasets:</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">TigreGotico/tts-train-synthetic-miro_ar-SA</span>
<span class="hljs-attr">language:</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">ar</span>
<span class="hljs-attr">pipeline_tag:</span> <span class="hljs-string">text-to-speech</span>
<!-- HTML_TAG_END --></pre></div>
<!-- HTML_TAG_START --><!-- HTML_TAG_END --></div>
</div></section></div></main>
</div>
<script>
import("\/front\/build\/kube-0e1a2e5\/index.js");
window.moonSha = "kube-0e1a2e5\/";
window.__hf_deferred = {};
</script>
<!-- Stripe -->
<script>
if (["hf.co", "huggingface.co"].includes(window.location.hostname)) {
const script = document.createElement("script");
script.src = "https://js.stripe.com/v3/";
script.async = true;
document.head.appendChild(script);
}
</script>
</body>
</html>
See https://huggingface.co/OpenVoiceOS/phoonnx_ar-SA_miro_espeak
and https://github.com/OHF-Voice/piper1-gpl/discussions/27
See also https://github.com/k2-fsa/sherpa-onnx/pull/2480
# License
This model is licensed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/).
- ✅ Always free for regular (non-commercial) users
- ❌ Commercial use is not allowed at this time
- 🔄 The author may relax the restrictions in the future (e.g., allow commercial use), but will not make them stricter
**Important:** You must include this license when redistributing the model or any derivatives.
|
caphe/paa11
|
caphe
| 2025-09-22T11:36:28Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-22T11:33:46Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
jnwulff/Qwen2-0.5B-GRPO-test
|
jnwulff
| 2025-09-22T11:35:58Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T11:20:03Z |
---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jnwulff/Qwen2-0.5B-GRPO-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
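For reference, a minimal sketch of what such a GRPO run can look like with TRL's `GRPOTrainer`; the reward function below is a toy placeholder (not the reward used for this model), and the `problem` → `prompt` column mapping for NuminaMath-TIR is an assumption:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# GRPOTrainer expects a "prompt" column; NuminaMath-TIR stores questions in "problem".
dataset = load_dataset("AI-MO/NuminaMath-TIR", split="train")
dataset = dataset.map(lambda x: {"prompt": x["problem"]})

def reward_func(completions, **kwargs):
    # Toy placeholder reward: prefer completions close to 200 characters.
    return [-abs(len(c) - 200) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_func,
    args=GRPOConfig(output_dir="Qwen2-0.5B-GRPO-test"),
    train_dataset=dataset,
)
trainer.train()
```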
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite GRPO as:
```bibtex
@article{shao2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
qualiaadmin/eef0eb84-9e56-4afc-9aae-05204b8cf5e2
|
qualiaadmin
| 2025-09-22T11:33:29Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Calvert0921/SmolVLA_LiftBlueCubeDouble_Franka_200",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-22T11:31:28Z |
---
base_model: lerobot/smolvla_base
datasets: Calvert0921/SmolVLA_LiftBlueCubeDouble_Franka_200
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- smolvla
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is a short version of how to train and run inference/evaluation:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
AXERA-TECH/YOLOv8-Seg
|
AXERA-TECH
| 2025-09-22T11:33:04Z | 5 | 0 | null |
[
"onnx",
"Ultralytics",
"YOLOv8",
"YOLOv8-Seg",
"object-detection",
"en",
"base_model:Ultralytics/YOLOv8",
"base_model:quantized:Ultralytics/YOLOv8",
"license:mit",
"region:us"
] |
object-detection
| 2025-01-11T16:23:41Z |
---
license: mit
language:
- en
base_model:
- Ultralytics/YOLOv8
pipeline_tag: object-detection
tags:
- Ultralytics
- YOLOv8
- YOLOv8-Seg
---
# YOLOv8-Seg
This version of YOLOv8-Seg has been converted to run on the Axera NPU using **w8a16** quantization.
Compatible with Pulsar2 version: 3.4
## Conversion tool links:
If you are interested in model conversion, you can export an axmodel through:
- [The AXera Platform samples repo](https://github.com/AXERA-TECH/ax-samples), which provides a detailed guide
- [Pulsar2: how to convert ONNX to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/pulsar2/introduction.html)
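Before the Pulsar2 step, the ONNX graph is typically exported from the original Ultralytics weights; a minimal sketch (the checkpoint name and export options here are illustrative, not the exact settings used for this model):
```python
from ultralytics import YOLO

# Export the segmentation checkpoint to ONNX as input for Pulsar2.
model = YOLO("yolov8s-seg.pt")
model.export(format="onnx", opset=11, imgsz=640)
```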
## Support Platform
- AX650
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX630C
- [爱芯派2](https://axera-pi-2-docs-cn.readthedocs.io/zh-cn/latest/index.html)
- [Module-LLM](https://docs.m5stack.com/zh_CN/module/Module-LLM)
- [LLM630 Compute Kit](https://docs.m5stack.com/zh_CN/core/LLM630%20Compute%20Kit)
|Chips|yolov8s-seg|
|--|--|
|AX650| 4.6 ms |
|AX630C| TBD ms |
## How to use
Download all files from this repository to the device
```
root@ax650:~/YOLOv8-Seg# tree
.
|-- ax650
| `-- yolov8s-seg.axmodel
|-- ax_yolov8_seg
|-- football.jpg
`-- yolov8_seg_out.jpg
```
### Inference
Input image:

#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro)
```
root@ax650:~/samples/AXERA-TECH/YOLOv8-Seg# ./ax_yolov8_seg -m ax650/yolov8s_seg.axmodel -i football.jpg
--------------------------------------
model file : ax650/yolov8s_seg.axmodel
image file : football.jpg
img_h, img_w : 640 640
--------------------------------------
Engine creating handle is done.
Engine creating context is done.
Engine get io info is done.
Engine alloc io is done.
Engine push input is done.
--------------------------------------
input size: 1
name: images [UINT8] [BGR]
1 x 640 x 640 x 3
output size: 7
name: /model.22/Concat_1_output_0 [FLOAT32]
1 x 80 x 80 x 144
name: /model.22/Concat_2_output_0 [FLOAT32]
1 x 40 x 40 x 144
name: /model.22/Concat_3_output_0 [FLOAT32]
1 x 20 x 20 x 144
name: /model.22/cv4.0/cv4.0.2/Conv_output_0 [FLOAT32]
1 x 80 x 80 x 32
name: /model.22/cv4.1/cv4.1.2/Conv_output_0 [FLOAT32]
1 x 40 x 40 x 32
name: /model.22/cv4.2/cv4.2.2/Conv_output_0 [FLOAT32]
1 x 20 x 20 x 32
name: output1 [FLOAT32]
1 x 32 x 160 x 160
post process cost time:16.21 ms
--------------------------------------
Repeat 1 times, avg time 4.69 ms, max_time 4.69 ms, min_time 4.69 ms
--------------------------------------
detection num: 8
0: 92%, [1354, 340, 1629, 1035], person
0: 91%, [ 5, 359, 314, 1108], person
0: 91%, [ 759, 220, 1121, 1153], person
0: 88%, [ 490, 476, 661, 999], person
32: 73%, [1233, 877, 1286, 923], sports ball
32: 63%, [ 772, 888, 828, 937], sports ball
32: 63%, [ 450, 882, 475, 902], sports ball
0: 55%, [1838, 690, 1907, 811], person
--------------------------------------
```
Output image:

#### Inference with M.2 Accelerator card
```
(base) axera@raspberrypi:~/lhj/YOLOv8-Seg $ ./axcl_aarch64/axcl_yolov8_seg -m ax650/yolov8s_seg.axmodel -i football.jpg
--------------------------------------
model file : ax650/yolov8s_seg.axmodel
image file : football.jpg
img_h, img_w : 640 640
--------------------------------------
axclrtEngineCreateContextt is done.
axclrtEngineGetIOInfo is done.
grpid: 0
input size: 1
name: images
1 x 640 x 640 x 3
output size: 7
name: /model.22/Concat_1_output_0
1 x 80 x 80 x 144
name: /model.22/Concat_2_output_0
1 x 40 x 40 x 144
name: /model.22/Concat_3_output_0
1 x 20 x 20 x 144
name: /model.22/cv4.0/cv4.0.2/Conv_output_0
1 x 80 x 80 x 32
name: /model.22/cv4.1/cv4.1.2/Conv_output_0
1 x 40 x 40 x 32
name: /model.22/cv4.2/cv4.2.2/Conv_output_0
1 x 20 x 20 x 32
name: output1
1 x 32 x 160 x 160
==================================================
Engine push input is done.
--------------------------------------
post process cost time:3.67 ms
--------------------------------------
Repeat 1 times, avg time 4.85 ms, max_time 4.85 ms, min_time 4.85 ms
--------------------------------------
detection num: 8
0: 92%, [1354, 340, 1629, 1035], person
0: 91%, [ 5, 359, 314, 1108], person
0: 91%, [ 759, 220, 1121, 1153], person
0: 88%, [ 490, 476, 661, 999], person
32: 73%, [1233, 877, 1286, 923], sports ball
32: 63%, [ 772, 888, 828, 937], sports ball
32: 63%, [ 450, 882, 475, 902], sports ball
0: 55%, [1838, 690, 1907, 811], person
--------------------------------------
```
Output image:

|
HaniBO/CBC_dem
|
HaniBO
| 2025-09-22T11:32:01Z | 57 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-09-18T11:55:39Z |
---
base_model: unsloth/llama-3.1-8b-bnb-4bit
library_name: transformers
model_name: CBC_dem
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
---
# Model Card for CBC_dem
This model is a fine-tuned version of [unsloth/llama-3.1-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3.1-8b-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="HaniBO/CBC_dem", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
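As a rough illustration, such an SFT run with TRL might look like the sketch below; the dataset is a hypothetical stand-in, since the card does not name the training data:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical stand-in dataset; the actual training data is not specified in the card.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="unsloth/llama-3.1-8b-bnb-4bit",
    train_dataset=dataset,
    args=SFTConfig(output_dir="CBC_dem"),
)
trainer.train()
```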
### Framework versions
- TRL: 0.22.2
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758540491
|
poolkiltzn
| 2025-09-22T11:29:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T11:29:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AXERA-TECH/YOLO11-Pose
|
AXERA-TECH
| 2025-09-22T11:28:19Z | 21 | 1 | null |
[
"onnx",
"Ultralytics",
"YOLO11",
"YOLO11-POSE",
"object-detection",
"en",
"base_model:Ultralytics/YOLO11",
"base_model:quantized:Ultralytics/YOLO11",
"license:mit",
"region:us"
] |
object-detection
| 2025-03-23T07:42:32Z |
---
license: mit
language:
- en
base_model:
- Ultralytics/YOLO11
pipeline_tag: object-detection
tags:
- Ultralytics
- YOLO11
- YOLO11-POSE
---
# YOLO11-POSE
This version of YOLO11-POSE has been converted to run on the Axera NPU using **w8a16** quantization.
Compatible with Pulsar2 version: 3.4
## Conversion tool links:
If you are interested in model conversion, you can export an axmodel through:
- [The ax-samples repo](https://github.com/AXERA-TECH/ax-samples), which shows how to build `ax_yolo11_pose`
- [The axcl-samples repo](https://github.com/AXERA-TECH/axcl-samples), which shows how to build `axcl_yolo11_pose`
- [Pulsar2: how to convert ONNX to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/pulsar2/introduction.html)
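For the compile step itself, a sketch following the documented `pulsar2 build` workflow; the exact config file and flag values used for this model are not published, so treat these as illustrative:
```bash
# Compile the cut ONNX model into an axmodel for the AX650 NPU.
pulsar2 build \
  --input yolo11s-pose-cut.onnx \
  --config yolo11_pose_config.json \
  --output_dir outputs \
  --target_hardware AX650
```
The repository ships `yolo11s-pose-cut.onnx` and a JSON config, which correspond to the inputs of this step.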
## Support Platform
- AX650
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX630C
- [爱芯派2](https://axera-pi-2-docs-cn.readthedocs.io/zh-cn/latest/index.html)
- [Module-LLM](https://docs.m5stack.com/zh_CN/module/Module-LLM)
- [LLM630 Compute Kit](https://docs.m5stack.com/zh_CN/core/LLM630%20Compute%20Kit)
|Chips|Inference time|
|--|--|
|AX650| 25 ms |
|AX630C| TBD ms |
## How to use
Download all files from this repository to the device
```
(axcl) axera@raspberrypi:~/samples/AXERA-TECH/YOLO11-Pose $ tree -L 2
.
├── ax620e
│ └── yolo11s-pose.axmodel
├── ax650
│ └── yolo11x-pose.axmodel
├── ax_aarch64
│ └── ax_yolo11_pose
├── axcl_aarch64
│ └── axcl_yolo11_pose
├── axcl_x86_64
│ └── axcl_yolo11_pose
├── config.json
├── football.jpg
├── README.md
├── yolo11_pose_config.json
├── yolo11_pose_out.jpg
├── yolo11s-pose-cut.onnx
└── yolo11s-pose.onnx
6 directories, 12 files
```
### Inference
Input image:

#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro)
```
root@ax650:~/YOLO11-Pose# ./ax_aarch64/ax_yolo11_pose -m ax650/yolo11x-pose.axmodel -i football.jpg
--------------------------------------
model file : ax650/yolo11x-pose.axmodel
image file : football.jpg
img_h, img_w : 640 640
--------------------------------------
Engine creating handle is done.
Engine creating context is done.
Engine get io info is done.
Engine alloc io is done.
Engine push input is done.
--------------------------------------
post process cost time:1.40 ms
--------------------------------------
Repeat 1 times, avg time 25.21 ms, max_time 25.21 ms, min_time 25.21 ms
--------------------------------------
detection num: 6
0: 94%, [1350, 337, 1632, 1036], person
0: 93%, [ 492, 477, 658, 1000], person
0: 92%, [ 756, 219, 1126, 1154], person
0: 91%, [ 0, 354, 314, 1108], person
0: 73%, [ 0, 530, 81, 1017], person
0: 54%, [ 142, 589, 239, 1013], person
--------------------------------------
```
Output image:

#### Inference with M.2 Accelerator card
```
(axcl) axera@raspberrypi:~/samples/AXERA-TECH/YOLO11-Pose $ chmod +x axcl_aarch64/axcl_yolo11_pose
(axcl) axera@raspberrypi:~/samples/AXERA-TECH/YOLO11-Pose $ ./axcl_aarch64/axcl_yolo11_pose -m ax650/yolo11x-pose.axmodel -i football.jpg
--------------------------------------
model file : ax650/yolo11x-pose.axmodel
image file : football.jpg
img_h, img_w : 640 640
--------------------------------------
axclrtEngineCreateContextt is done.
axclrtEngineGetIOInfo is done.
grpid: 0
input size: 1
name: images
1 x 640 x 640 x 3
output size: 6
name: /model.23/Concat_1_output_0
1 x 80 x 80 x 65
name: /model.23/Concat_2_output_0
1 x 40 x 40 x 65
name: /model.23/Concat_3_output_0
1 x 20 x 20 x 65
name: /model.23/cv4.0/cv4.0.2/Conv_output_0
1 x 80 x 80 x 51
name: /model.23/cv4.1/cv4.1.2/Conv_output_0
1 x 40 x 40 x 51
name: /model.23/cv4.2/cv4.2.2/Conv_output_0
1 x 20 x 20 x 51
==================================================
Engine push input is done.
--------------------------------------
post process cost time:0.43 ms
--------------------------------------
Repeat 1 times, avg time 25.05 ms, max_time 25.05 ms, min_time 25.05 ms
--------------------------------------
detection num: 6
0: 94%, [1350, 337, 1632, 1036], person
0: 93%, [ 492, 477, 658, 1000], person
0: 92%, [ 756, 219, 1126, 1154], person
0: 91%, [ 0, 354, 314, 1108], person
0: 73%, [ 0, 530, 81, 1017], person
0: 54%, [ 142, 589, 239, 1013], person
--------------------------------------
```
|
gaggi009/sbert-databricks
|
gaggi009
| 2025-09-22T11:26:48Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5192",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Alibaba-NLP/gte-multilingual-base",
"base_model:finetune:Alibaba-NLP/gte-multilingual-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-22T11:26:29Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5192
- loss:MultipleNegativesRankingLoss
base_model: Alibaba-NLP/gte-multilingual-base
widget:
- source_sentence: P Payor active configuration resolution
sentences:
- 'Reference Content
This record is designed for pediatric therapy. It contains the evaluating and
treating therapist, overview of the next progress report and re-evaluation dates,
and plan of care tracking.

In addition to the standard therapist data, the Provider Information section also
includes the comments, last update date, prescription end date, and IFSP end date
(if applicable). If changing the evaluating provider here, the case and other
active role records are updated as well.
The Tracking Information section provides an overview of the patient''s visits
by showing the date of the last visit and the last progress report note.
You can also view and edit the due dates for the next progress report and re-evaluation.
The dates are editable only when the automatic progress tracking is enabled in
the [Therapy Options](https://kb.raintreeinc.com/help-10-2-500=therapy-options
"These options define whether functional codes are required, how frequently progress
reports are due, and allow you to use a financial class or insurance specific
narrative for Plan of Care. These options are grouped into multiple tabs.") (insert-link)
for the insurance or financial class.
Finally, the POC Tracking section displays the certification period and plan frequency
from the Plan and Recommendations sections from the matching LTNOT note.
Internal
**Related concepts**
[Case Load tab](https://kb.raintreeinc.com/help-10-2-500=case-load-tab)
[Therapy Team record](https://kb.raintreeinc.com/help-10-2-500=therapy-team-record)
More Information
[[Role records|250326123804743]]
'
- "Issue/Question\nThe post payment shows a patient resp of $240 but there is no\
\ where on the ledger that shows the patient has a balance.\n\nCause\nThe patient\
\ had multiple P payors. One payor had a credit and one had a balance.\n\nResolution/Answer\n\
To Combine P payors you will got to Payors from the patients Main Menu and click\
\ on the Combine Insurances tab.\n\n1. Select the payor you want to merge into\
\ another one.\n\n After selecting a payor, the second list displays the payors\
\ that you can select as the target for the merge.\n2. Select the target payor\
\ in the second list.\n\n This is the payor that will remain after the merge.\n\
\n The **Combine** button becomes active. (If the Combine button does not become\
\ active you might need to update your user rights to include INSURMERGE=E.)\n\
3. Click **Combine**.\n\nRaintree combines the two payors. If any charges need\
\ to be reassigned, Raintree will do so automatically. After you combine the payors\
\ you will go to the patients Ledger View>Print>View Totals>Recalculate. This\
\ will update the totals on the Post Payment screen.\n\n"
- 'Issue/Question
When attempting to sign off on a patient note, the system displays an error message
indicating a missing payor. This occurs even when the **Financial Class** is correctly
set to ''A'' as the default payor setup. Users are unable to sign off notes despite
previous successful attempts and despite similar scenarios working in the training
database. The **Charge Recap** screen shows the message "Patient does not have
a P Payor."
Cause
This problem arose when a **Collect Payment** amount was entered on the **Charges**
tab of the note, and the patient did not have an active payor (specifically, a
payor designated as ''P'') configured to handle this collection.
Resolution/Answer
To resolve the issue preventing note sign-off due to a missing payor for the collected
payment amount, follow these steps:
1. **Navigate** to the **Charges** tab within the patient note.
2. **Locate** the **Collect Payment** section.
3. **Clear** the value in the **Amount** field, ensuring it is set to 0.00 or
is blank.
4. **Save** the changes to the note.
5. **Attempt** to sign off the note again.

'
- source_sentence: appointment confirmation directions calendar integration
sentences:
- "Lesson\nRCM specialists can quickly reverify a patient's benefits as needed.\
\ This process involves accessing the patient's file and using the eligibility\
\ check tool to confirm coverage details.\n\nReverifying Patient Benefits On Demand\n\
======================================\n\n1. Open the patient's file.\n2. Select\
\ the menu option **Waystar Eligibility check**.\n3. Enter the provider and location.\n\
4. Save to run the check.\n * The results appear. Everything looks good.\n\n\
Video\n\n\n"
- 'Lesson
Patients receive appointment reminders via text messages, which include a clickable
link to open the full reminder. From this reminder, patients can easily confirm
their attendance, get directions to the clinic, add their appointment to a calendar,
and even access the clinic''s social media pages. Raintree provides a convenient
and helpful experience for managing appointments. To manage appointments:
1. **Receive a text message**: Receive an appointment reminder via a text message
from the clinic.
2. **Open the reminder link**: Click on the link within the text message to open
the reminder.
3. **Confirm the appointment**: Click the green button to confirm your attendance
at the appointment.
4. **Access directions**: Use the provided option to get directions to the clinic.
5. **Explore clinic information**: Discover additional links, such as the clinic''s
social media page.
6. **Adding an appointment to a Calendar:** If you want to add your appointment
to a calendar, for example, Google or Outlook, click the calendar option and your
appointment is added to your calendar.
Video
'
- "Concept\nQuery Builder is a useful tool for creating and running SQL queries\
\ against the database. Although it is a versatile tool, it is important to be\
\ familiar with its capabilities and limitations in order to get correct and reliable\
\ results from the created queries.\n\n* [[Refining query results|250305165915093]]\
\ \n\n In general, Query Builder can be used for listing data from the database.\
\ There are a number of ways how to refine the results to make the output data\
\ more relevant to your needs. You can do this by using different filters.\n*\
\ [[Formatting query results|250328183921833]] \n You can arrange the output\
\ table of the query to make the query results more easier to use.\n* [[Recommendations\
\ for using Query Builder|250328184411903]]\n\n Although Query Builder is a useful\
\ tool for creating and running queries, you need to carefully plan the queries\
\ that you want to create and take into account the limitations that apply to\
\ this tool. There are some general recommendations for using Query Builder properly.\n\
\nMore Information\n[[|250305162245783]][[Query |250305162245783]][[Builder|250305162245783]]\n\
\n"
- source_sentence: Raintree supported devices Windows Android Apple Chromebook
sentences:
- "Reference Content\n### Windows Based Desktops/Laptops/Notebooks\n\n* Microsoft\
\ Windows Operating System (Microsoft Windows 8 or higher) (Windows S mode & RT\
\ not supported)\n* Two (2) GHz Intel-Based or AMD-Based Processor\n* Eight (8)\
\ GB of RAM or higher\n* Minimum 20 MB of free space for the Raintree Client software\
\ and associated files\n* Video hardware and monitor capable of minimum 1024x768\
\ resolution at 16-bit color\n* Microsoft Windows-compatible printers (directly-attached\
\ or network-attached)\n* Microphone and Speakers\n* Full read / write permission\
\ in the client (by default c:\\rtw) folder and subdirectories\n* Raintree Client\
\ folder (by default c:\\rtw) should be excluded from real-time protection by\
\ host-based security applications as it may impair the performance or ability\
\ to update the client when prompted\n* Webcam if using Telehealth services\n\n\
### RTWeb - Requires Raintree 2019.3 or newer\n\n* Microsoft Windows 10 or higher\n\
* iOS requires a device running on iOS 13.0 or later\n* Android 8 or higher\n\
* iPad devices require iPad OS 13.0 or later\n* macOS Catalina 10. 15.7\n* Chromebook\
\ device required Google Chrome OS version 79 or higher\n\n### Devices Recommended\
\ by Raintree\n\n* iPad Air (3rd generation), 10.5-inch display\n + iPadOS version\
\ 15.3.1\n* iPhone 11, 6.1-inch display\n + iOS version 15.3.1\n* MacBook Air\
\ (13-inch, 2017)\n + macOS Monterey, version 12.2.1\n* Microsoft Surface 3 tablet,\
\ 10.8-inch display\n + Microsoft Windows 10 Pro\n* Samsung Galaxy Tab S5e, 10.5-inch\
\ display\n + One UI version 3.1, Android version 11\n* Samsung Galaxy S20FE,\
\ 6.5-inch display\n + One UI version 4, Android version 12\n* HP Chromebook,\
\ 14-inch display\n + Chrome version 98.0.4758.107\n\n### Internet Bandwidth\
\ Requirements\n\n* 65 KBps per user\n\n"
- "Concept\nDevice Requirements\n===================\n\nThe Raintree RTWeb client\
\ enables providers to access their Raintree dashboard and documentation by logging\
\ in through a web browser. This means that providers can access Raintree from\
\ any device connected to the internet. However, the device used must meet the\
\ specified minimum requirements:\n\n**Windows devices**\n\n* Windows 11 Pro\n\
\n**Android devices**\n\n* Phone - One UI version 5.1, Android version 13 or above\n\
* Tablet - One UI version 6.1, Android version 14 or above\n\n**Apple devices**\n\
\n* MacBook - macOS Sequoia, version 15.3.2 or above\n* iPad - iPadOS version\
\ 18.4 or above\n* iPhone - iOS version 18.4 or above\n\n**Chromebook**\n\n* Chrome\
\ version 126.0.6478.222 or above\n\nSupported web browsers are up-to-date versions\
\ of Chrome, Edge, Firefox and Safari.\n\n*** **CAUTION:*****Internet\
\ Explorer is not supported.\n\nRecommended Devices\n===================\n\nThe\
\ following is a list of the devices Raintree recommends for your web client:\n\
\n* **iPad Air** (3rd generation), 10.5-inch display \n\n iPadOS version 18.4\n\
* **iPhone 11**, 6.1-inch display \n\n iOS version 18.4\n* **MacBook Air** (13-inch,\
\ 2024) \n\n macOS Sequoia, version 15.3.2\n* **Samsung Galaxy S9+**, 12.4-inch\
\ display \n\n One UI version 6.1.1, Android version 14\n* **Samsung Galaxy\
\ Tab S10+**, 12.4-inch display \n\n One UI version 6.1.1, Android version 14\n\
* **Samsung Galaxy S20FE**, 6.5-inch display \n\n One UI version 5.1, Android\
\ version 13\n* **HP Chromebook**, 14-inch display \n\n Chrome version 126.0.6478.222\n\
* **Surface Go 3** (Windows tablet), 10.5-inch display \n\n Windows 11 Pro\n\
\n"
- "Reference Content\nWhile the screen for each Quality measure is slightly different,\
\ there are some common fields and similar logic as well.\n\n **NOTE:** If\
\ this page is displayed when you press **F1** from a measure screen, then\
\ it means that the measure screen you currently have open does not have its own\
\ specific help page. See the bottom of this screen for the list of help pages\
\ for specific measures.\n\nThe measure record codes start with \"M\", followed\
\ by a number. For example, M110 is the template for the measure \"Influenza Immunization\"\
, quality ID 110.\n\nMany measures have their own specific options as well. The\
\ following list describes options that are common for most measure screens.\n\
\n\n\
\nClick any highlighted region for details.\n\n**Measure description**\n\nIn the\
\ top of the screen, an overview of the measure is given. This includes population\
\ criteria (age, gender) and measure completion requirements.\n\n**Quality ID**\n\
\nQuality ID of the measure.\n\n**Reporting Code**\n\nCurrent reporting code based\
\ on the selection in the **Reporting Codes** section below. This may be either\
\ a code consisting of numbers and letters, or `Exc`, indicating exclusion.\n\n**Status**\n\
\nCurrent status of the measure. This can be Met (green), Not Eligible (black), Not\
\ Met (green), Incomplete (red), or Exception/Exclusion (green). Black indicates\
\ that no action was needed from the provider, green indicates that action was\
\ needed and the provider has already done everything required, and red indicates\
\ that the provider still needs to do something to complete this measure. See the [[MIPS\
\ hoverbox status message types|250318120330660]] page for more information about\
\ Quality measure status types.\n\n**Linked record widget**\n\nSelect the\
\ widget to attach a record to the measure. The record type differs for each measure.\
\ There may also be more than one widget to attach several records, or no widgets\
\ at all, depending on the measure.\n\n**Reporting Codes**\n\nOptions for the\
\ provider to choose from. Usually, there are options to take some action (such\
\ as assessing the patient and documenting a follow-up plan), to not take some\
\ action, but provide a reason (for example, the patient refused), and to not\
\ take some action and not provide a reason (it is not recommended to select this\
\ type of option). The options differ for each measure. For some options, if\
\ selected, additional fields appear. Based on the selection here, the information\
\ in the **Reporting Code** and **Status** fields above will also change.\n\n\
* + [[M130 (Current Medications) measure|250318115235760]]\n\n This MIPS Quality\
\ measure's objective is to obtain and review the patient's medication history.\
\ This measure is configured using the SM130 screen and reported using the M130\
\ screen.\n + [[M154 (Falls Risk Assessment) measure|250318133925757]]\n\n This MIPS\
\ Quality measure's objective is to assess the fall risk of patients over 65 years\
\ old. This measure is configured using the SM154 screen and reported using the\
\ M154 screen. Assessment of the fall risk is tied to the Falls Risk Assessment\
\ record (PQ034), available in systems that also have the rehab note available.\n\
\ + [[M217 (Functional Status Change for Patients with Knee Impairments) measure|250318151713350]]\
\ \n This MIPS Quality measure can be reported for patients with knee impairment.\
\ Assessment of the functional status is tied to the FOTO survey for this measure.\
\ It should be reported on Discharge Summary notes only. This measure is configured\
\ using the SM217 screen and reported using the M217 screen.\n\nInternal\n**Related\
\ concept**\n\n[Quality measures hoverbox](https://kb.raintreeinc.com/help-10-2-500=quality-measures-hoverbox)\n\
\n**Related tasks**\n\n[Document MIPS quality measures on a BTICM note](https://kb.raintreeinc.com/help-10-2-500=document-mips-quality-measures-on-a-bticm-note)\n\
\n[Document MIPS quality measures on an ENOTE](https://kb.raintreeinc.com/help-10-2-500=document-mips-quality-measures-on-an-enote)\n\
\n[Document MIPS quality measures on a LTNOT note](https://kb.raintreeinc.com/help-10-2-500=document-mips-quality-measures-on-a-ltnot-note)\n\
\n[Document MIPS quality measures on a TVIST note](https://kb.raintreeinc.com/help-10-2-500=document-mips-quality-measures-on-a-tvist-note)\n\
\n**Related reference**\n\n[MIPS Quality measure screen (configuration)](https://kb.raintreeinc.com/help-10-2-500=mips-quality-measure-screen-lpar-configuration-rpar)\n\
\nMore Information\n[[Clinical quality measures (CQM and eCQM)|250318130647220]]\n\
\n"
- source_sentence: CREDSTATUS=E credentialing applications filter
sentences:
- "Reference Content\nIn this tab you can specify the posting codes that are used\
\ for remittance posting. This tab consists of four panels.\n\n**Primary Billing\
\ Posting Codes**\n---------------------------------\n\nIn this panel you can\
\ define the payment/adjustment/transfer codes that will be used for posting primary\
\ payments.\n\n**Primary Billing Posting Codes**\n\n\n\
\n**Use Method**\n\nSelect this option if you want to use a specific payment method.\
\ You only need to specify a payment method code and an additional denial method\
\ code. All related information will be pulled from the [[Payment Methods table|250312141829220]] automatically.\n\
\n**Use Hardcoded Codes**\n\nSelect this option if you want to specify the payment\
\ codes manually. Press **Tab** in each field to select the code from the respective\
\ table.\n\n**Setting bills to paid**\n\nDefines the bill closing criteria as a\
\ percentage. If the percentage is equal to or greater than the value specified\
\ in this field, the bill is set to paid.\n\n**Other Codes**\n\nYou can define other codes that are\
\ not in the payment methods table, but which are still used.\n\n1. Press Tab in\
\ the Copay Code field. Select a copay code and press **Enter**.\n2. Define whether\
\ you want to use an adjustment or a transfer for Increase of Expected. There\
\ is also an option to not increase expected. Select this option if you want to\
\ keep the credit balance.\n3. Press **Tab** in the appropriate field to select\
\ the code.\n\n**Do not increase expected**\n\nSelect this option if you want\
\ to keep the credit balance.\n\nSecondary Billing Posting Codes\n-------------------------------\n\
\nIn this panel you can define the payment/adjustment/transfer codes for posting\
\ secondary payments.\n\n**Secondary Billing Posting Codes**\n\n\n\
\nThis panel is similar to the previous one, except that the following fields\
\ are not available:\n\n* Decrease Copay\n* Copay Code\n\n**Special Codes**\n\
-----------------\n\nIn this panel you can specify the codes for backout and late\
\ fee payments and also reason code specific posting codes.\n\n**Special Codes**\n\
\n\n\
\nPress **Tab** in the respective field to select the necessary code. Repeat these\
\ steps until you have filled in the necessary fields.\n\n**Error Supplier Note\
\ Code**\n\nDefines which note is posted to the ledger when the payment cannot\
\ be posted. You can enable posting the error notes on the [[General|250311190336583]] tab.\n\
\n**Adjustment code**\n\nDefines which adjustment code is used for secondary balance\
\ write offs.\n\n**Other Backout Payment codes**\n\nDefines other possible backout\
\ payment codes. The codes in this field must be separated by commas. For example, BOA,BOP,BOT.\n\
\n**Reason code specific posting codes**\n\nEach code in this field consists of\
\ three parts that are separated by pipes.\n\n* + Reason code\n + **T** for Transfer\
\ or **A** for Adjustment\n + The posting code you want to use\n\nFor example,\
\ the code 1|T|TDOS means: \"When posting a transfer with reason code 1, use the\
\ TDOS code\".\n\n**Special code for the second payment**\n\nDefines the payment\
\ code for the second payment that is posted when the **Post the difference between\
\ the total amount and the sum of all line item amounts as a separate auto-distributed\
\ payment** option is selected in the **Advanced - Posting** tab.\n\n**The code\
\ for the adjustment that offsets the difference payment**\n\nDefines the adjustment\
\ code that is posted to offset the difference payment specified in the previous\
\ option.\n\n**PLB payments**\n----------------\n\nIn this tab you can specify\
\ the **PLB payment** codes. **PLB Payments** are not associated with a specific\
\ patient, but with a provider instead. For example, increase in payment as a\
\ result of late payment is a PLB payment. This also means that the codes needed\
\ differ from the other payments.\n\n**PLB Payments**\n\n\n\
\n**General Account for PLB Payments**\n\nEnables you to define an account for\
\ PLB payments. The payments are posted to the selected account. This is required\
\ information for the PLB payment posting to function.\n\n**Codes for PLB payments/adjustments/charges**\n\
\nEnter payment, location, provider, referral, adjustment and charge codes used\
\ for PLB ledger items.\n\nIf a field in this section is blank, the respective\
\ code from the following list is used:\n\n| Field on the payment/adjustment/charge\
\ | Code used if not specified otherwise |\n| --- | --- |\n| Payment code | RTEMP\
\ |\n| Location code | 01 |\n| Provider code | 01 |\n| Referral code | 01 |\n\
| Adjustment code | RTEMP |\n| Charge code | DEMC |\n\n \n\nAdditionally, ATEMP\
\ is used as primary insurance and 00000 is used as patient diagnosis.\n\nUpon\
\ posting, if a default code from this list is used, but it is not present in\
\ the table (for example, there is 01 provider code on the payment, but no such\
\ code in the Providers table), it is automatically added there.\n\nInternal\n\
**Previous topic:**[Remittance Settings - PLB Reason Code Grouping](https://kb.raintreeinc.com/help-10-2-500=remittance-settings-plb-reason-code-grouping)\n\
\n**Next topic:**[Remittance Settings - Interface](https://kb.raintreeinc.com/help-10-2-500=remittance-settings-interface)\n\
\nMore Information\n[[Remittance settings|250311180255330]]\n\n"
- 'Document
44\_20250529064929\_Credentialing.pdf
'
- 'Reference Content
To access the dashboard, select **Credentialing** in the main menu. It is only
available if you have the DASH\_CRED=E security right.
The dashboard content depends on your permissions.
Provider view
=============
If you are a provider, you only see the **Your Items** tab of the credentialing
dashboard. There you can view and complete the items requested from you.
The items in this tab are color-coded:
* Completed items are colored green.
* Pending items that you still have to complete are colored yellow.
* Items that have expired or are no longer needed are colored gray.
You can filter the items by location, state, status, type, and master insurance.
Double-click an item to open and edit it.
If you have the CREDSTATUS=E right, the **Status** tab is also visible. In that
tab you can see the status of credentialing applications for other providers,
but you cannot edit them.
Credentialing team view
=======================
If you are a member of the credentialing team, you can see most tabs in the credentialing
dashboard.
Provider Requests
-----------------
The **Provider Requests** tab shows all items requested from providers. You can
view, manage and add new request items.
At the top of the tab are options for filtering the provider requests.

Below the filters are the provider requests. They are color-coded based on their
status:
* Completed requests are colored green.
* Overdue requests are colored red.
* Pending requests are colored yellow.
* Requests that have expired or are marked as no longer needed are colored gray.
There are several actions you can take on the requests:
* You can send the selected requests or a reminder for the selected requests to
the provider. To do that, select the items in the list and click **Send Request/Reminder**.
* If some items are no longer needed, you can mark them as such. To do that, select
the items in the list and click **No Longer Needed**.
* If you want to update the follow-up date for some requests, you can do that.
Select the items in the list, enter the new date in the **F/U Date** field and
click **Update F/U Date for Selected**. This overwrites the previous follow-up
date on the selected items.
If you want to add a new provider request item, click **Add Item** in the lower
right corner. This opens a screen where you can select the item to request.
License Verifications
---------------------
The **License Verifications** tab shows a list of automatic license verifications.
For each verification, its status is shown: pending, successful, or failed. The
page is refreshed each hour. You can see the last time it was updated based on
the date time stamp at the top of the page.
Applications
------------
The **Applications** tab shows credentialing and re-credentialing applications
for both providers and locations.
At the top of the tab are options for filtering the applications.

Below the filters are the applications. They are color-coded based on their status:
* Completed applications are colored green.
* Overdue applications are colored red.
* Pending applications are colored yellow.
* Applications that have expired or are marked as no longer needed are colored
gray.
There are several actions you can take on the applications:
* If some items are no longer needed, you can mark them as such. To do that, select
the items in the list and click **No Longer Needed**.
* If you want to update the follow-up date for some applications, you can do that.
Select the items in the list, enter the new date in the **F/U Date** field and
click **Update F/U Date for Selected**. This overwrites the previous follow-up
date on the selected items.
In the lower right corner are additional buttons:
* **Generate Location Roster** exports a location roster. When you click this
button, you are prompted to download a CSV file with the selected items.
* **Generate Provider Roster** exports a provider roster. When you click this
button, you are prompted to choose a roster layout to use for exporting the selected
items.
* **Add Item** adds a new application. When you click this button, you are prompted
to choose which kind of application to create.
The options to generate a roster are available only after you have selected some
items in the list.
Providers / Provider Details
----------------------------
The **Providers** tab shows a list of all providers and their credentialing data.
Double-click a provider to view their details, documents and applications. When
you are viewing a provider, the tab''s name changes to **Provider Details**.
Above the list are options to filter the list. Below the list is the **Export**
button for [[exporting a provider roster|250512094406620]].
Locations / Location Detail
---------------------------
The **Locations** tab shows a list of all locations and their credentialing data.
Double-click a location to view its details, documents and applications. When
you are viewing a location, the tab''s name changes to **Location Detail**.
Master Insurances
-----------------
The **Master Insurances** tab shows a list of master insurances. You can view
and edit which insurances are mapped to which master insurance. Double-click an
insurance to view the credentialing applications for it. You can also add new
documents for the master insurance directly from this tab.
Import Tools
------------
In the **Import Tools** tab you can import providers'' CAQH and application data.
Setup
-----
In the **Setup** tab you can configure various aspects related to credentialing.
Read more about [[setting up credentialing|250606071952953]].
'
- source_sentence: flag deselect patient records Collection Worksheet
sentences:
- 'Issue/Question
The patient would like to have a refund amount applied to a different credit card
than the one they originally made the payment with.
Cause
The patient would like to have a refund amount applied to a different credit card
than the one they originally made the payment with.
Resolution/Answer
Raintree does not support unlinked refunds so the amount would need to be refunded
back to the original credit card.
'
- 'Reference Content
The **"meaning of life"** refers to the concept of an individual''s existence,
or existence in general, having inherent significance or a philosophical purpose.
There is no single, universally agreed-upon definition, as it is a deeply personal
and philosophical question often explored through various religious, spiritual,
and secular perspectives.
<https://en.wikipedia.org/wiki/Meaning_of_life>
'
- "Prerequisites and Steps\nWorksheet records that are flagged hidden are not displayed\
\ in the Collection Worksheet list. These records are also not deleted from the\
\ Collection Worksheet list. You can view the hidden worksheets to change flags\
\ if necessary.\n\nFor example, you are working within the **Collection Worksheet** for\
\ a specific payor that shows patient records (not the worksheet that displays\
\ all insurances) and you need to change the flags to display previously hidden\
\ records.\n\nFrom the Collection Worksheets list:\n\n1. Press **F**+**Arrow Up**.\
\ The list of flags appears. \n \n \n\
2. Deselect the necessary flags to display the records. \n\n For example, if\
\ you deselect the **Completed** option, all patients flagged as complete are\
\ shown.\n\nInternal\n**[[|250305114544823]]Related information[[|250305120611630]]**\n\
\n[[Creating collection worksheets|250305120611630]]\n\nMore Information\n[[Introduction\
\ to collection worksheets|250305114544823]]\n\n"
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-multilingual-base
results:
- task:
type: triplet
name: Triplet
dataset:
name: Raintree Triplet eval
type: Raintree_Triplet_eval
metrics:
- type: cosine_accuracy
value: 0.9202772974967957
name: Cosine Accuracy
---
# SentenceTransformer based on Alibaba-NLP/gte-multilingual-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) <!-- at revision 9bbca17d9273fd0d03d5725c7a4b0f6b45142062 -->
- **Maximum Sequence Length:** 1280 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1280, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sbert-databricks")
# Run inference
sentences = [
'flag deselect patient records Collection Worksheet',
'Prerequisites and Steps\nWorksheet records that are flagged hidden are not displayed in the Collection Worksheet list. These records are also not deleted from the Collection Worksheet list. You can view the hidden worksheets to change flags if necessary.\n\nFor example, you are working within the\xa0**Collection Worksheet**\xa0for a specific payor that shows patient records (not the worksheet that displays all insurances) and you need to change the flags to display previously hidden records.\n\nFrom the Collection Worksheets list:\n\n1. Press\xa0**F**+**Arrow Up**. The list of flags appears. \n \n \n2. Deselect the necessary flags to display the records. \n\n For example, if you deselect the\xa0**Completed**\xa0option, all patients flagged as complete are shown.\n\nInternal\n**[[|250305114544823]]Related information[[|250305120611630]]**\n\n[[Creating collection worksheets|250305120611630]]\n\nMore Information\n[[Introduction to collection worksheets|250305114544823]]\n\n',
'Reference Content\nThe **"meaning of life"** refers to the concept of an individual\'s existence, or existence in general, having inherent significance or a philosophical purpose. There is no single, universally agreed-upon definition, as it is a deeply personal and philosophical question often explored through various religious, spiritual, and secular perspectives.\n\n<https://en.wikipedia.org/wiki/Meaning_of_life>\n\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `Raintree_Triplet_eval`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9203** |
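
This metric can be computed with the same evaluator class. A minimal sketch, assuming `anchors`, `positives`, and `negatives` are parallel lists of strings from the held-out split (the list contents below are illustrative):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("sbert-databricks")

# One matching and one non-matching document per query (illustrative data)
anchors = ["flag deselect patient records Collection Worksheet"]
positives = ["Prerequisites and Steps\nWorksheet records that are flagged hidden ..."]
negatives = ["Reference Content\nThe \"meaning of life\" refers to ..."]

evaluator = TripletEvaluator(
    anchors=anchors,
    positives=positives,
    negatives=negatives,
    name="Raintree_Triplet_eval",
)
# cosine_accuracy = fraction of triplets where the anchor embedding is closer
# to the positive than to the negative
print(evaluator(model))
```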
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 5,192 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 10.49 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 680.97 tokens</li><li>max: 1280 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 443.89 tokens</li><li>max: 1280 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:----------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>rebill screen follow-up action codes</code> | <code>Prerequisites and Steps<br>You can rebill a claim in the ledger. This will print a claim using the previous claim's settings.<br><br>To rebill a claim:<br><br>1. Focus on the desired billbar.<br>2. Press **Enter**.<br>3. Select **Rebill this claim**.<br>4. Make changes in the Rebill Screen, if needed. For example, you can rebill using a different form than previously.<br>5. Press **Ctrl+S**.<br>6. A duplicate bill will be generated, which rebills the exact same charges using the same Billing Group that was used with original billing.<br>7. A **REBILL ledger note** will appear in patient's ledger. <br> **REBILL ledger note** <br><br> <br><br>This option duplicates the bill as it was originally printed. If rebilling was not successful, then verify the billing grou...</code> | <code>Reference Content<br>Follow-up actions allow you to speed up your workflow when handling billing issues. You can perform an action (for example, send additional medical documents to the payor or write off all balance on billbar) from the follow-up note, and have additional automated notes to track whether your actions were successful.<br><br>This feature is part of Raintree Rev-Edition (advanced Revenue Cycle Management tools). [[Read more|250304155001873]].<br><br>You can perform an action from either on the**Follow-up Note**screen or in the **Claim Follow-ups** follow-up note list. You can perform actions only on follow-up notes that have not been completed yet.<br><br>The available actions depend on the follow-up note code. For example, you cannot transfer the balance to another payor if there is already an overpayment, but there are other actions more applicable to the situation, such as closing the claim and adding a comment about the reason, or refunding the overpaid amount. Each action also has to m...</code> |
| <code>HL7 dashboard interface configuration</code> | <code>Reference Content<br>This screen is displayed when you add or edit an interface in the HL7 dashboard.<br><br>. This will be needed for step 7.<br><br><br><br>Adding the CPT code to the Picklist.<br>------------------------------------<br><br>1. Access any patient's note. For this example, we are using a test patient, Amy Raintree.<br>2. Go to the **Treatment** **Plan** tab > click on the **Activity Log** tab.<br>3. Click the plus symbol in the bottom left...</code> | <code>Reference Content<br>Charge table stores charges for services rendered at the facility. The RVS/CPT codes, charge descriptions, and fees for the practice are listed here. Similarly to the Provider table, the same charge may be added several times.<br><br>However, multiple entries for a charge are only necessary when:<br><br>* The charge billed/expected amounts are different based on a specific financial class.<br>* The POS (Place of Service) code is different based on a financial class.<br>* The TOS (Type of Service) code is different based on a financial class.<br>* A specific provider's fee is different than the other providers.<br><br>* **NOTE:***You can use BLOCK as an RVS/CPT code. If the best match for a particular charge is a blocked charge, then the result for charge lookup gives an empty response - you cannot add this charge to the ledger.<br><br>You can add, edit, delete and print the Charge table entries, if you have the required security...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
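
For reference, this is how such a loss is constructed in Sentence Transformers; a minimal sketch (loading the base model rather than the finetuned one):

```python
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.util import cos_sim

# gte-multilingual-base ships custom modeling code, hence trust_remote_code
model = SentenceTransformer("Alibaba-NLP/gte-multilingual-base", trust_remote_code=True)

# scale=20.0 sharpens the softmax over cosine similarities; besides the explicit
# negative column, the other in-batch examples act as additional negatives
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```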
### Evaluation Dataset
#### json
* Dataset: json
* Size: 577 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 577 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.34 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 638.71 tokens</li><li>max: 1280 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 421.83 tokens</li><li>max: 1280 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>billing code breakdown statistics</code> | <code>Reference Content<br>This daysheet report creates a summary of the current period's ledger entries.<br><br>You can see the selected filters below the report title. If no ledger entries match the selected filters, then the respective report values are left blank. <br>**Current period's ledger**<br><br><br><br>If several users need to view the report with the same filters, you may want to schedule it as a quick report so it can be generated outside office hours. The pre-compiled report will then be available in the **Quick Reports**tab of the dashboard.<br><br>**Ledger items summary**<br>------------------------<br><br>The upper part of the Summary Totals report displays the ledger summary broken down by ledger items. You can see the numbers and amounts of cha...</code> | <code>Reference Content<br>You can select the daysheet/stats breakdowns to print in the **Select Reports** screen.<br><br>* [Ledger Detail (insert-link)](https://kb.raintreeinc.com/help-10-2-500=ledger-detail)<br><br> This daysheet report is a printout of the entire ledger activity for the specified date range. By default, the report includes all entries that have been recorded in the ledger, but you can use the general report options to filter the entries that will be included in the report, and the additional breakdown options to break down the report by different time periods.<br>* [Summary Totals](https://kb.raintreeinc.com/help-10-2-500=summary-totals) [(insert-link)](https://kb.raintreeinc.com/help-10-2-500=ledger-detail)<br><br> This daysheet report creates a summary of the current period's ledger entries.<br>* [Charge Breakdown](https://kb.raintreeinc.com/help-10-2-500=charge-breakdown) [(insert-link)](https://kb.raintreeinc.com/help-10-2-500=ledger-detail)<br><br> This daysheet report breaks down the ledger acti...</code> |
| <code>Raintree license expiration alert configuration</code> | <code>Concept<br>License expiring<br>----------------<br><br>When a provider's license is about to expire, Raintree can alert them via email and text, and on login. The alert includes the license name, the state where the license is expiring, and the license end date. Example:<br><br>Your Provider License for state of California will expire on 03-31-25. Please renew your license as soon as possible to avoid disruption. If you have any questions, please contact the Credentialing Department.<br><br>The credentialing team can configure the following options:<br><br>* Whether to send email and text alerts to providers. <br><br> Email/text alerts are sent to providers:<br><br> + 60 days before license expiration.<br> + 45 days before license expiration.<br> + Daily when there are 30 or fewer days until expiration.<br>* Whether to show alerts to providers on login.<br>* How long (in days) before the license expiration to start showing the alerts.<br>* The alert text that is sent to providers.<br><br>No credentials for payor<br>------------------------<br><br>When...</code> | <code>Reference Content<br>The **"meaning of life"** refers to the concept of an individual's existence, or existence in general, having inherent significance or a philosophical purpose. There is no single, universally agreed-upon definition, as it is a deeply personal and philosophical question often explored through various religious, spiritual, and secular perspectives.<br><br><https://en.wikipedia.org/wiki/Meaning_of_life><br><br></code> |
| <code>Credentialing dashboard add item procedure</code> | <code>Prerequisites and Steps<br>1. From the main menu, select **Credentialing**. <br><br> The Credentialing dashboard opens.<br>2. Select the **Applications** tab.<br>3. Click **Add Item**.<br>4. Select the application type. <br><br> You can choose between credentialing and re-credentialing for providers or locations.<br><br> <br><br> The respective **Credentialing** screen is displayed.<br>5. Fill in the required information.<br><br> Required fields have a red asterisk next to them.<br>6. To add any additional documents, click in the **Documents** section and select **Add**. <br><br> For example, you can add an application letter here.<br>7. Optional: Click **Download Application** to save the application as a PDF.<br><br> This option is only available when an application form has been created for the selected master insurance.<br>8. Save your changes.<br><br>Results and Troubleshooting/Tips<br>The application is created. You can now track its status in the Credentialing dashboard.<br><br></code> | |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 1
- `gradient_accumulation_steps`: 8
- `torch_empty_cache_steps`: 16
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `warmup_ratio`: 0.01
- `batch_sampler`: no_duplicates
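
These values map directly onto `SentenceTransformerTrainingArguments`; a minimal sketch with a placeholder `output_dir`:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    num_train_epochs=3,
    eval_strategy="epoch",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,   # effective train batch size 4 x 8 = 32
    torch_empty_cache_steps=16,
    learning_rate=1e-5,
    weight_decay=0.01,
    warmup_ratio=0.01,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no repeated texts within a batch
)
```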
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 1
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: 16
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.01
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | Raintree_Triplet_eval_cosine_accuracy |
|:-----:|:----:|:-------------:|:---------------:|:-------------------------------------:|
| 1.0 | 163 | 3.2614 | 0.2535 | 0.8977 |
| 2.0 | 326 | 2.0932 | 0.2375 | 0.9116 |
| 3.0 | 489 | 1.6929 | 0.2330 | 0.9203 |
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.8.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
OuassimZm/Whisper-small-fine-tuned-Bigouden-colab
|
OuassimZm
| 2025-09-22T11:26:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T11:26:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND3-checkpoint-epoch-60
|
MattBou00
| 2025-09-22T11:25:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T11:24:31Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_11-15-41/checkpoints/checkpoint-epoch-60")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_11-15-41/checkpoints/checkpoint-epoch-60")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_11-15-41/checkpoints/checkpoint-epoch-60")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
rimpang/ato
|
rimpang
| 2025-09-22T11:25:16Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-22T11:24:10Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/ato_000300_03_20250525013102_99.png
text: ato The man is holding a power drill --d 99
- output:
url: sample/ato_000600_03_20250525021454_99.png
text: ato Two crossed arms, holding a yellow measuring level and a green electric
drill. --d 99
- output:
url: sample/ato_000900_03_20250525050850_99.png
text: ato A dark brown rat is trapped in a simple wooden mouse trap. --d 99
- output:
url: sample/ato_001200_03_20250525070641_99.png
text: ato a zombie hand wielding a bloody knife, slicing through another zombie
arm. --d 99
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ato
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# ato
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `ato` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
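For scripted use, the LoRA can also be loaded with 🤗 Diffusers. A minimal sketch, assuming the adapter loads directly from this repository (the base model is gated and requires accepting its license on the Hub):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("rimpang/ato")  # assumed adapter repo id
pipe.to("cuda")

# Include the trigger word "ato" so the trained concept is applied
image = pipe("ato The man is holding a power drill", num_inference_steps=28).images[0]
image.save("ato_sample.png")
```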
|
mradermacher/Tanzania-0.5B-i1-GGUF
|
mradermacher
| 2025-09-22T11:24:45Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"creative",
"roleplay",
"story-telling",
"story-writing",
"en",
"dataset:practical-dreamer/RPGPT_PublicDomain-ShareGPT",
"dataset:Gryphe/Opus-WritingPrompts",
"base_model:XeTute/Tanzania-0.5B",
"base_model:quantized:XeTute/Tanzania-0.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-09-22T11:01:40Z |
---
base_model: XeTute/Tanzania-0.5B
datasets:
- practical-dreamer/RPGPT_PublicDomain-ShareGPT
- Gryphe/Opus-WritingPrompts
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- creative
- roleplay
- story-telling
- story-writing
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/XeTute/Tanzania-0.5B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Tanzania-0.5B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Tanzania-0.5B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
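As a quick start, a downloaded quant can be run with llama.cpp's CLI; a minimal sketch (the filename is one of the quants listed below):

```bash
# Assumes llama.cpp is built and llama-cli is on PATH
llama-cli -m Tanzania-0.5B.i1-Q4_K_M.gguf -p "Once upon a time" -n 128
```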
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-IQ2_S.gguf) | i1-IQ2_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-IQ2_M.gguf) | i1-IQ2_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-IQ3_S.gguf) | i1-IQ3_S | 0.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-Q2_K.gguf) | i1-Q2_K | 0.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-IQ3_M.gguf) | i1-IQ3_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-Q4_0.gguf) | i1-Q4_0 | 0.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-Q4_1.gguf) | i1-Q4_1 | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tanzania-0.5B-i1-GGUF/resolve/main/Tanzania-0.5B.i1-Q6_K.gguf) | i1-Q6_K | 0.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF
|
mradermacher
| 2025-09-22T11:24:17Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"programming",
"code generation",
"code",
"coding",
"coder",
"chat",
"brainstorm",
"qwen",
"qwen3",
"qwencoder",
"brainstorm 20x",
"creative",
"all uses cases",
"Jan-V1",
"float32",
"horror",
"32 bit precision",
"science fiction",
"fantasy",
"Star Trek",
"finetune",
"thinking",
"reasoning",
"unsloth",
"en",
"dataset:progs2002/star-trek-tng-scripts",
"base_model:DavidAU/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B",
"base_model:quantized:DavidAU/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-22T10:40:42Z |
---
base_model: DavidAU/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B
datasets:
- progs2002/star-trek-tng-scripts
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- programming
- code generation
- code
- coding
- coder
- chat
- brainstorm
- qwen
- qwen3
- qwencoder
- brainstorm 20x
- creative
- all uses cases
- Jan-V1
- float32
- horror
- 32 bit precision
- science fiction
- fantasy
- Star Trek
- finetune
- thinking
- reasoning
- unsloth
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/DavidAU/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
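As a minimal Python sketch, the i1-Q4_K_M quant listed below can be fetched and run with the `llama-cpp-python` bindings (one of many GGUF-capable runtimes; llama.cpp, koboldcpp and the like work just as well):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant from this repo into the local HF cache.
path = hf_hub_download(
    repo_id="mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF",
    filename="Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q4_K_M.gguf",
)

# Load the GGUF file and generate a short completion.
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Space: the final frontier. These are the voyages", max_tokens=64)
print(out["choices"][0]["text"])
```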
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q2_K.gguf) | i1-Q2_K | 2.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q4_0.gguf) | i1-Q4_0 | 3.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 3.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 3.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q4_1.gguf) | i1-Q4_1 | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B-i1-GGUF/resolve/main/Qwen3-ST-The-Next-Generation-II-FreakStorm-E32-v1-256k-ctx-6B.i1-Q6_K.gguf) | i1-Q6_K | 5.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh-GGUF
|
mradermacher
| 2025-09-22T11:24:12Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Johnnyfans/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh",
"base_model:quantized:Johnnyfans/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T10:51:34Z |
---
base_model: Johnnyfans/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Johnnyfans/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#ICDRec-SFT-Qwen3-4B-Thinking-2507-zh-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh.Q2_K.gguf) | Q2_K | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh.Q3_K_S.gguf) | Q3_K_S | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh.Q3_K_M.gguf) | Q3_K_M | 2.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh.Q3_K_L.gguf) | Q3_K_L | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh.IQ4_XS.gguf) | IQ4_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh.Q4_K_S.gguf) | Q4_K_S | 2.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh.Q4_K_M.gguf) | Q4_K_M | 2.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh.Q5_K_S.gguf) | Q5_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh.Q5_K_M.gguf) | Q5_K_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh.Q6_K.gguf) | Q6_K | 3.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh.Q8_0.gguf) | Q8_0 | 4.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-Qwen3-4B-Thinking-2507-zh.f16.gguf) | f16 | 8.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND3-checkpoint-epoch-40
|
MattBou00
| 2025-09-22T11:22:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T11:21:18Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline

# Build a text-generation pipeline from this checkpoint
# (model path reproduced verbatim from this card).
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_11-15-41/checkpoints/checkpoint-epoch-40")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

# Load tokenizer and value-head model from the same checkpoint
# (model paths reproduced verbatim from this card).
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_11-15-41/checkpoints/checkpoint-epoch-40")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_11-15-41/checkpoints/checkpoint-epoch-40")

# Forward pass with labels to obtain the loss alongside logits and values.
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
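Unlike a plain `transformers` model, the value-head model's forward pass returns a tuple rather than a `ModelOutput`; a small sketch of unpacking it, assuming TRL's `(lm_logits, loss, value)` return convention:
```python
# Continuing from the snippet above: unpack the tuple returned by the value-head model.
lm_logits, loss, value = outputs
print(lm_logits.shape)  # (batch, seq_len, vocab_size) -- language-model logits
print(value.shape)      # (batch, seq_len) -- per-token estimates from the value head
```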
|
nnilayy/dreamer_stride_256-binary-arousal-Kfold-5-stride_256
|
nnilayy
| 2025-09-22T11:21:39Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-09-22T11:21:33Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
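The mixin adds `save_pretrained`, `push_to_hub`, and `from_pretrained` to any `nn.Module`; a hypothetical sketch of the loading pattern (the actual architecture behind this checkpoint is not documented here, so the class below is a placeholder and must be replaced with the real one):
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(nn.Module, PyTorchModelHubMixin):  # hypothetical placeholder architecture
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.head = nn.LazyLinear(num_classes)

# Loading only succeeds if the class definition matches what was pushed:
model = MyModel.from_pretrained("nnilayy/dreamer_stride_256-binary-arousal-Kfold-5-stride_256")
```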
|
noirchan/Llama-3-8B-JaCode-TIES-v1
|
noirchan
| 2025-09-22T11:21:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:alfredplpl/Llama-3-8B-Instruct-Ja",
"base_model:merge:alfredplpl/Llama-3-8B-Instruct-Ja",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T11:20:49Z |
---
base_model:
- alfredplpl/Llama-3-8B-Instruct-Ja
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# ties_v1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [alfredplpl/Llama-3-8B-Instruct-Ja](https://huggingface.co/alfredplpl/Llama-3-8B-Instruct-Ja)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: ties
base_model: meta-llama/Meta-Llama-3-8B-Instruct
models:
- model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
weight: 0.4
density: 0.7
- model: alfredplpl/Llama-3-8B-Instruct-Ja
parameters:
weight: 0.6
density: 0.8
parameters:
normalize: false
dtype: bfloat16
tokenizer_source: union
```
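The merged checkpoint loads like any other Llama-3 model; a minimal inference sketch, assuming this repository's id and the `bfloat16` dtype declared in the config above:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "noirchan/Llama-3-8B-JaCode-TIES-v1"  # assumed from this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Japanese chat prompt, formatted with the model's chat template.
messages = [{"role": "user", "content": "こんにちは!自己紹介をしてください。"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```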
|
AXERA-TECH/YOLO11
|
AXERA-TECH
| 2025-09-22T11:21:01Z | 26 | 0 | null |
[
"onnx",
"Ultralytics",
"YOLO11",
"object-detection",
"en",
"base_model:Ultralytics/YOLO11",
"base_model:quantized:Ultralytics/YOLO11",
"license:mit",
"region:us"
] |
object-detection
| 2025-01-11T16:18:52Z |
---
license: mit
language:
- en
base_model:
- Ultralytics/YOLO11
pipeline_tag: object-detection
tags:
- Ultralytics
- YOLO11
---
# YOLO11
This version of YOLO11 has been converted to run on the Axera NPU using **w8a16** quantization.
Compatible with Pulsar2 version: 3.4
## Conversion tool links
If you are interested in model conversion, you can export an axmodel using:
- [The repo of ax-samples](https://github.com/AXERA-TECH/ax-samples), which shows how to build `ax_yolo11`
- [The repo of axcl-samples](https://github.com/AXERA-TECH/axcl-samples), which shows how to build `axcl_yolo11`
- [Pulsar2 Link, How to Convert ONNX to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/pulsar2/introduction.html)
## Support Platform
- AX650
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX630C
- [爱芯派2](https://axera-pi-2-docs-cn.readthedocs.io/zh-cn/latest/index.html)
- [Module-LLM](https://docs.m5stack.com/zh_CN/module/Module-LLM)
- [LLM630 Compute Kit](https://docs.m5stack.com/zh_CN/core/LLM630%20Compute%20Kit)
| Chip | Inference time |
|--|--|
| AX650 | 25 ms |
| AX630C | TBD |
## How to use
Download all files from this repository to the device.
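For example, the repository can be fetched with `huggingface_hub` (a sketch; `git clone` or manual download works just as well):
```python
from huggingface_hub import snapshot_download

# Fetch every file in this repository into a local YOLO11/ directory.
snapshot_download(repo_id="AXERA-TECH/YOLO11", local_dir="YOLO11")
```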
```
(axcl) axera@raspberrypi:~/samples/AXERA-TECH/YOLO11 $ tree -L 2
.
├── ax620e
│ └── yolo11s.axmodel.onnx
├── ax650
│ ├── yolo11s.axmodel
│ └── yolo11x.axmodel
├── ax_aarch64
│ └── ax_yolo11
├── axcl_aarch64
│ └── axcl_yolo11
├── axcl_x86_64
│ └── axcl_yolo11
├── config.json
├── cut-onnx.py
├── football.jpg
├── README.md
├── ssd_horse.jpg
├── yolo11_config.json
├── yolo11_out.jpg
├── yolo11s-cut.onnx
└── yolo11-test.py
6 directories, 15 files
```
### Inference
Input image:

#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro)
```
root@ax650:~/samples/AXERA-TECH/YOLO11# ./ax_aarch64/ax_yolo11 -m ax650/yolo11x.axmodel -i football.jpg
--------------------------------------
model file : ax650/yolo11x.axmodel
image file : football.jpg
img_h, img_w : 640 640
--------------------------------------
Engine creating handle is done.
Engine creating context is done.
Engine get io info is done.
Engine alloc io is done.
Engine push input is done.
--------------------------------------
post process cost time:4.20 ms
--------------------------------------
Repeat 1 times, avg time 24.56 ms, max_time 24.56 ms, min_time 24.56 ms
--------------------------------------
detection num: 9
0: 94%, [ 757, 220, 1127, 1154], person
0: 94%, [ 0, 357, 314, 1112], person
0: 93%, [1353, 339, 1629, 1037], person
0: 91%, [ 494, 476, 659, 1001], person
32: 86%, [1231, 877, 1281, 922], sports ball
32: 73%, [ 774, 887, 828, 938], sports ball
32: 66%, [1012, 882, 1051, 927], sports ball
0: 54%, [ 0, 543, 83, 1000], person
0: 46%, [1837, 696, 1877, 814], person
--------------------------------------
```
Output image:

#### Inference with M.2 Accelerator card
```
(axcl) axera@raspberrypi:~/samples/AXERA-TECH/YOLO11 $ ./axcl_aarch64/axcl_yolo11 -m ax650/yolo11x.axmodel -i football.jpg
--------------------------------------
model file : ax650/yolo11x.axmodel
image file : football.jpg
img_h, img_w : 640 640
--------------------------------------
axclrtEngineCreateContextt is done.
axclrtEngineGetIOInfo is done.
grpid: 0
input size: 1
name: images
1 x 640 x 640 x 3
output size: 3
name: /model.23/Concat_output_0
1 x 80 x 80 x 144
name: /model.23/Concat_1_output_0
1 x 40 x 40 x 144
name: /model.23/Concat_2_output_0
1 x 20 x 20 x 144
==================================================
Engine push input is done.
--------------------------------------
post process cost time:1.38 ms
--------------------------------------
Repeat 1 times, avg time 24.73 ms, max_time 24.73 ms, min_time 24.73 ms
--------------------------------------
detection num: 9
0: 94%, [ 757, 220, 1127, 1154], person
0: 94%, [ 0, 357, 314, 1112], person
0: 93%, [1353, 339, 1629, 1037], person
0: 91%, [ 494, 476, 659, 1001], person
32: 86%, [1231, 877, 1281, 922], sports ball
32: 73%, [ 774, 887, 828, 938], sports ball
32: 66%, [1012, 882, 1051, 927], sports ball
0: 54%, [ 0, 543, 83, 1000], person
0: 46%, [1837, 696, 1877, 814], person
--------------------------------------
```
|
DevforMM/tmp_trainer
|
DevforMM
| 2025-09-22T11:13:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2025-09-22T11:13:07Z |
---
library_name: transformers
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
model-index:
- name: ntu-spml/distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ntu-spml/distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
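A minimal inference sketch (assuming the checkpoint is served under this repository's id; the audio file name is illustrative):
```python
from transformers import pipeline

# GTZAN genre classification with the fine-tuned DistilHuBERT checkpoint.
classifier = pipeline("audio-classification", model="DevforMM/tmp_trainer")
print(classifier("some_music_clip.wav"))  # hypothetical input; returns genre labels with scores
```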
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.55.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.2
|
aamijar/Llama-2-7b-hf-dora-r8-boolq-epochs0
|
aamijar
| 2025-09-22T11:12:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T11:12:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
adalberto-temp/energy_dpo_V0.2_Instruct_ref
|
adalberto-temp
| 2025-09-22T11:12:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T11:06:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gardeviance/MS-Gardventure-MW-V1-22B-IQ4_NL-GGUF
|
Gardeviance
| 2025-09-22T11:09:47Z | 66 | 0 | null |
[
"gguf",
"text-generation",
"quantized",
"mit",
"mistral",
"llamacpp",
"en",
"base_model:TheDrummer/UnslopSmall-22B-v1",
"base_model:merge:TheDrummer/UnslopSmall-22B-v1",
"base_model:hf-100/Mistral-Small-Spellbound-StoryWriter-22B-instruct-0.2-chkpt-200-16-bit",
"base_model:merge:hf-100/Mistral-Small-Spellbound-StoryWriter-22B-instruct-0.2-chkpt-200-16-bit",
"base_model:mistralai/Mistral-Small-Instruct-2409",
"base_model:merge:mistralai/Mistral-Small-Instruct-2409",
"base_model:nbeerbower/Mistral-Small-Drummer-22B",
"base_model:merge:nbeerbower/Mistral-Small-Drummer-22B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-09-17T01:49:44Z |
---
language:
- en
tags:
- text-generation
- gguf
- quantized
- mit
- mistral
- llamacpp
license: mit
base_model:
- mistralai/Mistral-Small-Instruct-2409
- TheDrummer/UnslopSmall-22B-v1
- nbeerbower/Mistral-Small-Drummer-22B
- hf-100/Mistral-Small-Spellbound-StoryWriter-22B-instruct-0.2-chkpt-200-16-bit
base_model_relation: merge
pipeline_tag: text-generation
---
# MS-Gardventure-MW-V1-22B-IQ4_NL-GGUF
It's pretty good at AI Dungeon style gameplay using KoboldAI Lite. There's an example scenario in the repo.
Made with some handmade training data.
## Credits
* UnslopSmall-22B-v1
* Mistral-Small-Drummer-22B
* Mistral-Small-Spellbound-StoryWriter-22B-instruct-0.2-chkpt-200-16-bit
## Full Model Release
I can release the full FP16 HF-format model on request, but it's a pain to do.
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758539259
|
poolkiltzn
| 2025-09-22T11:09:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T11:08:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TAUR-dev/M-multitask_sftdata_cd3_lm3_ac4_lc4-sft
|
TAUR-dev
| 2025-09-22T11:03:58Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-22T11:03:25Z |
# M-multitask_sftdata_cd3_lm3_ac4_lc4-sft
This model was created as part of the **multitask_sftdata_cd3_lm3_ac4_lc4** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: multitask_sftdata_cd3_lm3_ac4_lc4
## Training Configuration
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/home/ubuntu/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__multitask_sftdata_cd3_lm3_ac4_lc4", "template": "qwen", "cutoff_len": 16384, "max_samples": 1000000, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/data4/tmp/sedrick/skillfactory/temp/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__multitask_sftdata_cd3_lm3_ac4_lc4__v1", "sf_eval_before_training": false, "sf_wandb_project": "multitask_sftdata_cd3_lm3_ac4_lc4_sft", "sf_eval_steps": null, "run_name": "multitask_sftdata_cd3_lm3_ac4_lc4_sft"}
## Experiment Tracking
🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__multitask_sftdata_cd3_lm3_ac4_lc4__v1)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-multitask_sftdata_cd3_lm3_ac4_lc4-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-multitask_sftdata_cd3_lm3_ac4_lc4-sft")
```
|
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND5-checkpoint-epoch-100
|
MattBou00
| 2025-09-22T11:03:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T11:02:18Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_10-46-42/checkpoints/checkpoint-epoch-100")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_10-46-42/checkpoints/checkpoint-epoch-100")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_10-46-42/checkpoints/checkpoint-epoch-100")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
Accountable-SA/gemma-3-270m-it-base-Q4_K_M-GGUF
|
Accountable-SA
| 2025-09-22T11:03:17Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Accountable-SA/gemma-3-270m-it-base",
"base_model:quantized:Accountable-SA/gemma-3-270m-it-base",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T11:03:13Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: Accountable-SA/gemma-3-270m-it-base
---
# massimogiuseppe/gemma-3-270m-it-base-Q4_K_M-GGUF
This model was converted to GGUF format from [`Accountable-SA/gemma-3-270m-it-base`](https://huggingface.co/Accountable-SA/gemma-3-270m-it-base) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Accountable-SA/gemma-3-270m-it-base) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q4_K_M-GGUF --hf-file gemma-3-270m-it-base-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q4_K_M-GGUF --hf-file gemma-3-270m-it-base-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q4_K_M-GGUF --hf-file gemma-3-270m-it-base-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q4_K_M-GGUF --hf-file gemma-3-270m-it-base-q4_k_m.gguf -c 2048
```
|
felixZzz/h0slmlq1-step_00400
|
felixZzz
| 2025-09-22T11:00:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T10:58:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shahabas9/faq-model
|
shahabas9
| 2025-09-22T10:59:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T09:55:27Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** shahabas9
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758538641
|
poolkiltzn
| 2025-09-22T10:58:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T10:58:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
felixZzz/q04m4jep-step_00500
|
felixZzz
| 2025-09-22T10:58:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T10:56:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND5-checkpoint-epoch-60
|
MattBou00
| 2025-09-22T10:56:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T10:55:48Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND5-checkpoint-epoch-60")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND5-checkpoint-epoch-60")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND5-checkpoint-epoch-60")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
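The value-head forward pass above returns more than plain generation outputs. As a rough sketch (assuming a recent TRL version, where `AutoModelForCausalLMWithValueHead.forward` returns a `(lm_logits, loss, value)` tuple), you can unpack the per-token value estimates like this:
```python
# Continuation of the snippet above; the 3-tuple layout is an assumption
# about the installed TRL version.
lm_logits, loss, value = outputs
print(lm_logits.shape)  # (batch, seq_len, vocab_size) language-model logits
print(value.shape)      # (batch, seq_len) per-token value estimates
```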
|
Accountable-SA/gemma-3-270m-it-base-Q3_K_M-GGUF
|
Accountable-SA
| 2025-09-22T10:56:00Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Accountable-SA/gemma-3-270m-it-base",
"base_model:quantized:Accountable-SA/gemma-3-270m-it-base",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T10:55:56Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: Accountable-SA/gemma-3-270m-it-base
---
# massimogiuseppe/gemma-3-270m-it-base-Q3_K_M-GGUF
This model was converted to GGUF format from [`Accountable-SA/gemma-3-270m-it-base`](https://huggingface.co/Accountable-SA/gemma-3-270m-it-base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Accountable-SA/gemma-3-270m-it-base) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q3_K_M-GGUF --hf-file gemma-3-270m-it-base-q3_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q3_K_M-GGUF --hf-file gemma-3-270m-it-base-q3_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q3_K_M-GGUF --hf-file gemma-3-270m-it-base-q3_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q3_K_M-GGUF --hf-file gemma-3-270m-it-base-q3_k_m.gguf -c 2048
```
|
thegdpranavl/Qwen3_8B_Bespoke
|
thegdpranavl
| 2025-09-22T10:54:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B-Base-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-Base-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T10:54:02Z |
---
base_model: unsloth/Qwen3-8B-Base-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thegdpranavl
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-Base-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ttr1007/EdwardFisher-Replicate3
|
ttr1007
| 2025-09-22T10:53:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-22T10:53:44Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Edward
---
# Edwardfisher Replicate3
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Edward` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Edward",
"lora_weights": "https://huggingface.co/ttr1007/EdwardFisher-Replicate3/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ttr1007/EdwardFisher-Replicate3', weight_name='lora.safetensors')
image = pipeline('Edward').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Training details
- Steps: 3512
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ttr1007/EdwardFisher-Replicate3/discussions) to add images that show off what you’ve made with this LoRA.
|
felixZzz/2xtvgc3k-step_00500
|
felixZzz
| 2025-09-22T10:53:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T10:51:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND5-checkpoint-epoch-40
|
MattBou00
| 2025-09-22T10:53:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T10:52:33Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND5-checkpoint-epoch-40")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND5-checkpoint-epoch-40")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND5-checkpoint-epoch-40")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
qualiaadmin/405dc14c-9a7e-4ea7-96b7-548548a9c0c7
|
qualiaadmin
| 2025-09-22T10:52:15Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Calvert0921/SmolVLA_LiftBlueCubeDouble_Franka_200",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-22T10:50:32Z |
---
base_model: lerobot/smolvla_base
datasets: Calvert0921/SmolVLA_LiftBlueCubeDouble_Franka_200
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is a short overview of how to train and run inference/evaluation:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
mradermacher/salamandra-2b-fft-trenes-v2-GGUF
|
mradermacher
| 2025-09-22T10:51:40Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"en",
"base_model:fabriciocarraro/salamandra-2b-fft-trenes-v2",
"base_model:quantized:fabriciocarraro/salamandra-2b-fft-trenes-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T10:41:06Z |
---
base_model: fabriciocarraro/salamandra-2b-fft-trenes-v2
language:
- en
library_name: transformers
model_name: fft-2b-v2
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- sft
- unsloth
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/fabriciocarraro/salamandra-2b-fft-trenes-v2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#salamandra-2b-fft-trenes-v2-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-fft-trenes-v2-GGUF/resolve/main/salamandra-2b-fft-trenes-v2.Q2_K.gguf) | Q2_K | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-fft-trenes-v2-GGUF/resolve/main/salamandra-2b-fft-trenes-v2.Q3_K_S.gguf) | Q3_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-fft-trenes-v2-GGUF/resolve/main/salamandra-2b-fft-trenes-v2.Q3_K_M.gguf) | Q3_K_M | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-fft-trenes-v2-GGUF/resolve/main/salamandra-2b-fft-trenes-v2.Q3_K_L.gguf) | Q3_K_L | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-fft-trenes-v2-GGUF/resolve/main/salamandra-2b-fft-trenes-v2.IQ4_XS.gguf) | IQ4_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-fft-trenes-v2-GGUF/resolve/main/salamandra-2b-fft-trenes-v2.Q4_K_S.gguf) | Q4_K_S | 1.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-fft-trenes-v2-GGUF/resolve/main/salamandra-2b-fft-trenes-v2.Q4_K_M.gguf) | Q4_K_M | 1.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-fft-trenes-v2-GGUF/resolve/main/salamandra-2b-fft-trenes-v2.Q5_K_S.gguf) | Q5_K_S | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-fft-trenes-v2-GGUF/resolve/main/salamandra-2b-fft-trenes-v2.Q5_K_M.gguf) | Q5_K_M | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-fft-trenes-v2-GGUF/resolve/main/salamandra-2b-fft-trenes-v2.Q6_K.gguf) | Q6_K | 2.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-fft-trenes-v2-GGUF/resolve/main/salamandra-2b-fft-trenes-v2.Q8_0.gguf) | Q8_0 | 2.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/salamandra-2b-fft-trenes-v2-GGUF/resolve/main/salamandra-2b-fft-trenes-v2.f16.gguf) | f16 | 4.6 | 16 bpw, overkill |
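If you prefer Python over the llama.cpp CLI, these files can also be loaded with the `llama-cpp-python` bindings. A minimal sketch, assuming a recent version of the package where `Llama.from_pretrained` (which fetches the file from the Hub) is available:
```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Download and load the recommended Q4_K_S quant from this repo.
llm = Llama.from_pretrained(
    repo_id="mradermacher/salamandra-2b-fft-trenes-v2-GGUF",
    filename="salamandra-2b-fft-trenes-v2.Q4_K_S.gguf",
)
out = llm("Hola, ", max_tokens=32)
print(out["choices"][0]["text"])
```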
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
felixZzz/2xtvgc3k-step_00400
|
felixZzz
| 2025-09-22T10:51:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T10:49:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tomerz14/distilhubert-finetuned-gtzan
|
tomerz14
| 2025-09-22T10:51:10Z | 18 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2025-09-20T07:49:44Z |
---
library_name: transformers
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.74
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1216
- Accuracy: 0.74
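For quick inference, the checkpoint should work with the standard 🤗 `pipeline`. A minimal sketch (the exact genre labels depend on the label mapping used during fine-tuning, and the audio file name is hypothetical):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="tomerz14/distilhubert-finetuned-gtzan")
print(classifier("song.wav", top_k=3))  # e.g. [{'label': 'rock', 'score': ...}, ...]
```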
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9417 | 1.0 | 29 | 1.7636 | 0.4 |
| 1.1584 | 2.0 | 58 | 1.2854 | 0.59 |
| 0.9493 | 3.0 | 87 | 1.1907 | 0.57 |
| 0.5895 | 4.0 | 116 | 1.4273 | 0.62 |
| 0.3732 | 5.0 | 145 | 0.9427 | 0.74 |
| 0.3519 | 6.0 | 174 | 1.4957 | 0.63 |
| 0.2988 | 7.0 | 203 | 1.4078 | 0.67 |
| 0.1915 | 8.0 | 232 | 1.1216 | 0.74 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.22.0
|
its-zion-18/sign-image-autogluon-predictor
|
its-zion-18
| 2025-09-22T10:50:59Z | 0 | 0 | null |
[
"autogluon",
"image-classification",
"multimodal",
"sign-identification",
"en",
"license:mit",
"region:us"
] |
image-classification
| 2025-09-20T17:04:42Z |
---
license: mit
language: en
tags:
- autogluon
- image-classification
- multimodal
- sign-identification
---
# AutoGluon Sign Identification Predictor
This repository contains a trained `MultiModalPredictor` from the AutoGluon library, trained to identify signs from images. It can also be found in the Files and versions section under `AutoML_for_Neural_Networks`.
# Dataset
The model was trained on the ecopus/sign_identification dataset. The augmented split was used for training and validation, while the original split was used for the final evaluation of the model's performance.
# Evaluation Results
The final performance of the best model on the original dataset is as follows:
- **Accuracy**: `1.0000`
- **Weighted F1**: `1.0000`
# Files in this Repository
- `autogluon_image_predictor.pkl`: The trained `MultiModalPredictor` pickled using `cloudpickle`.
- `autogluon_image_predictor_dir.zip`: The zipped native AutoGluon predictor directory for portability.
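A minimal loading sketch for these artifacts, assuming `cloudpickle`, `pandas`, and a compatible AutoGluon version are installed (the image path and the `image` column name are illustrative):
```python
import cloudpickle
import pandas as pd

# Load the pickled MultiModalPredictor (file name as listed above).
with open("autogluon_image_predictor.pkl", "rb") as f:
    predictor = cloudpickle.load(f)

# Predict the sign class for one image (hypothetical file path).
df = pd.DataFrame({"image": ["example_sign.jpg"]})
print(predictor.predict(df))
```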
# Potential Errors
The augmented split in the ecopus/sign_identification dataset is, by design, an artificially expanded version of the original split: its images are simple transformations (rotations, flips, slight color changes) of the images in the original set.
The code trains the model on a portion of the augmented data (df_aug_train) and evaluates it on the original data (df_orig).
Because the model was trained on data derived directly from the evaluation data, it never sees truly new information during the final test, which can lead to data leakage and overfitting.
|
Poorvaja/Byt5_Telugu
|
Poorvaja
| 2025-09-22T10:49:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T10:48:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758538017
|
poolkiltzn
| 2025-09-22T10:48:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T10:47:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rinogeek/EcoMind
|
rinogeek
| 2025-09-22T10:47:44Z | 0 | 1 | null |
[
"safetensors",
"gpt2",
"finance",
"french",
"llm",
"text-generation",
"fr",
"license:mit",
"region:us"
] |
text-generation
| 2025-09-22T00:11:13Z |
---
license: mit
language:
- fr
pipeline_tag: text-generation
tags:
- finance
- french
- llm
---
|
justpluso/turn-detection
|
justpluso
| 2025-09-22T10:47:43Z | 28 | 0 | null |
[
"safetensors",
"gemma3_text",
"turn-detection",
"text-classification",
"zh",
"en",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2025-09-05T10:54:42Z |
---
license: apache-2.0
language:
- zh
- en
base_model:
- google/gemma-3-270m-it
pipeline_tag: text-classification
tags:
- turn-detection
---
👉👉👉👉👉 [github](https://github.com/justplus/turn-detection)
Turn detection is a key technique in human-machine dialogue systems, mainly used for:
- **Dialogue boundary detection**: accurately judge when the user has finished their current utterance, so the dialogue system responds neither too early nor too late
- **Multi-turn dialogue management**: identify where each turn starts and ends in an ongoing conversation, improving the dialogue experience
- **Real-time interaction optimization**: enable more natural and fluid human-machine interaction through precise turn detection
- **Voice assistant enhancement**: provide smarter dialogue control for voice assistants, customer-service bots, and similar applications
## 2. Key Features
### 🔄 Multi-turn dialogue support
- Handles complex multi-turn dialogue scenarios
- Accurately distinguishes pauses and thinking from a genuine end of turn
- Supports context-aware turn decisions
Why multi-turn support matters:
```
user: 我们来个成语接龙吧? (Shall we play idiom chain?)
assistant: 那我先来,杞人忧天。该你了 (I'll start: 杞人忧天. Your turn)
user: 天天向上
```
Outside a multi-turn context, "天天向上" on its own is incomplete, but within this conversation it should be treated as a complete turn.
### 🚀 Lightweight inference
- Only 270M parameters, with a small resource footprint
- Supports CPU inference; no GPU needed for deployment
- Fast inference, meeting real-time dialogue requirements
- Suitable for edge devices and resource-constrained environments
### 🌍 Multilingual support
- Native support for turn detection in Chinese and English
- The model architecture supports fine-tuning to additional languages
- Strong cross-lingual generalization
### 🛠️ Customizable
- Complete fine-tuning framework provided
- Supports customized training for specific domains and languages
- Flexible data-processing and training pipeline
### 🙅♂️ Wait-state support
- 0 (incomplete): the user's utterance is unfinished; wait for further input
- 1 (complete): the utterance is complete and a reply can be given
- 2 (wait requested): the user asks to pause or interrupts the AI's reply
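A minimal inference sketch, assuming the model is served through the standard 🤗 text-classification pipeline and that the three labels map to the states listed above:
```python
from transformers import pipeline

# Hypothetical usage; label ids: 0 = incomplete, 1 = complete, 2 = wait requested.
detector = pipeline("text-classification", model="justpluso/turn-detection")
print(detector("I was wondering if you could"))    # likely incomplete (0)
print(detector("What's the weather like today?"))  # likely complete (1)
```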
|
thefirstgoku/22SEP_intergated_v32_21
|
thefirstgoku
| 2025-09-22T10:46:38Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-22T10:45:24Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
tommycik/prova4
|
tommycik
| 2025-09-22T10:41:49Z | 6 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"flux",
"flux-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-24T15:17:33Z |
---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
inference: true
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-tommycik/prova4
These are ControlNet weights trained on black-forest-labs/FLUX.1-dev with a new type of conditioning.
You can find some example images below.
prompt: transparent glass on white background, the bottom part of the glass presents light grooves

## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md)
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
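Until the official snippet lands, here is a minimal sketch using diffusers' Flux ControlNet classes, assuming these weights load as a `FluxControlNetModel` (the conditioning image and sampling parameters are illustrative):
```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

controlnet = FluxControlNetModel.from_pretrained("tommycik/prova4", torch_dtype=torch.bfloat16)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

control_image = load_image("conditioning.png")  # hypothetical conditioning image
image = pipe(
    "transparent glass on white background, the bottom part of the glass presents light grooves",
    control_image=control_image,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("glass.png")
```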
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
thefirstgoku/22SEP_intergated_v32_12
|
thefirstgoku
| 2025-09-22T10:41:24Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-22T10:40:09Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
dezineinnovation/commercialinteriordesign
|
dezineinnovation
| 2025-09-22T10:39:24Z | 0 | 0 | null |
[
"text-classification",
"license:bigscience-openrail-m",
"region:us"
] |
text-classification
| 2025-09-22T10:36:46Z |
---
license: bigscience-openrail-m
pipeline_tag: text-classification
---
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758537408
|
poolkiltzn
| 2025-09-22T10:38:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T10:37:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BKM1804/d3f907ad-4c78-4906-af59-b353aeb75e0f
|
BKM1804
| 2025-09-22T10:37:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T10:37:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
samhitmantrala/smish_final
|
samhitmantrala
| 2025-09-22T10:37:07Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T10:31:48Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smish_final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smish_final
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0618
- Accuracy: 0.9847
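A minimal inference sketch, assuming the usual 🤗 text-classification pipeline (the label names depend on the training configuration, and the example message is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="samhitmantrala/smish_final")
print(classifier("Your package is on hold. Pay a $2 fee at http://example.com to release it."))
```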
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 442 | 0.0608 | 0.9858 |
| 0.0749 | 2.0 | 884 | 0.0618 | 0.9847 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
trumtrum/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silky_howling_jellyfish
|
trumtrum
| 2025-09-22T10:36:22Z | 150 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am silky_howling_jellyfish",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-14T10:08:40Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am silky_howling_jellyfish
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/LongPAI-8B-i1-GGUF
|
mradermacher
| 2025-09-22T10:34:05Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:jylins/LongPAI-8B",
"base_model:quantized:jylins/LongPAI-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-22T09:43:50Z |
---
base_model: jylins/LongPAI-8B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/jylins/LongPAI-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#LongPAI-8B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/LongPAI-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF/resolve/main/LongPAI-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/LongPAI-8B-GGUF
|
mradermacher
| 2025-09-22T10:33:02Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:jylins/LongPAI-8B",
"base_model:quantized:jylins/LongPAI-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T09:28:08Z |
---
base_model: jylins/LongPAI-8B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/jylins/LongPAI-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#LongPAI-8B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/LongPAI-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-GGUF/resolve/main/LongPAI-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-GGUF/resolve/main/LongPAI-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-GGUF/resolve/main/LongPAI-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-GGUF/resolve/main/LongPAI-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-GGUF/resolve/main/LongPAI-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-GGUF/resolve/main/LongPAI-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-GGUF/resolve/main/LongPAI-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-GGUF/resolve/main/LongPAI-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-GGUF/resolve/main/LongPAI-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-GGUF/resolve/main/LongPAI-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-GGUF/resolve/main/LongPAI-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LongPAI-8B-GGUF/resolve/main/LongPAI-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
TiMOld/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-roaring_smooth_ibis
|
TiMOld
| 2025-09-22T10:29:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am roaring_smooth_ibis",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T09:37:39Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am roaring_smooth_ibis
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
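Until the authors fill this in, a minimal sketch (an assumption based on the `text-generation` tag and the Qwen2.5-Instruct chat template; not an official snippet):
```python
from transformers import pipeline

# Chat-style generation; the chat template ships with the tokenizer.
generator = pipeline(
    "text-generation",
    model="TiMOld/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-roaring_smooth_ibis",
)
out = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```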
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hunarbatra/spatialthinker_10k_baseline_option_text_75_7b
|
hunarbatra
| 2025-09-22T10:22:36Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-16T01:01:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
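Until the authors fill this in, a minimal sketch (an assumption based on the `qwen2_5_vl` and `image-to-text` tags; `example.jpg` is a placeholder, and this is not the authors' published usage):
```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from PIL import Image

model_id = "hunarbatra/spatialthinker_10k_baseline_option_text_75_7b"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe the spatial layout of this scene."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[Image.open("example.jpg")], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(out[0], skip_special_tokens=True))  # includes the prompt tokens
```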
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
abhi099k/BBI-ai-text-detecto-v4
|
abhi099k
| 2025-09-22T10:17:16Z | 36 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"generated_from_trainer",
"text-classification",
"base_model:desklib/ai-text-detector-v1.01",
"base_model:finetune:desklib/ai-text-detector-v1.01",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-19T04:08:26Z |
---
library_name: transformers
license: mit
base_model: desklib/ai-text-detector-v1.01
tags:
- generated_from_trainer
model-index:
- name: BBI-ai-text-detecto-v4
results: []
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BBI-ai-text-detecto-v4
This model is a fine-tuned version of [desklib/ai-text-detector-v1.01](https://huggingface.co/desklib/ai-text-detector-v1.01) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.0276
- eval_model_preparation_time: 0.0058
- eval_accuracy: 0.5686
- eval_f1: 0.6963
- eval_runtime: 213.2311
- eval_samples_per_second: 49.242
- eval_steps_per_second: 6.158
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
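No usage snippet is provided; here is a minimal inference sketch (assuming the standard `text-classification` pipeline; the label names come from the model's config):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="abhi099k/BBI-ai-text-detecto-v4")
print(clf("This essay was written entirely by a language model."))
# -> [{'label': ..., 'score': ...}]; label names depend on the model config
```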
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
Reihaneh/wav2vec2_fy_nl_best_frisian_1
|
Reihaneh
| 2025-09-22T10:17:11Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T10:17:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Orpheus-TTS-pl-v3.0-GGUF
|
mradermacher
| 2025-09-22T10:16:48Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:TeeZee/Orpheus-TTS-pl-v3.0",
"base_model:quantized:TeeZee/Orpheus-TTS-pl-v3.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T08:55:00Z |
---
base_model: TeeZee/Orpheus-TTS-pl-v3.0
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/TeeZee/Orpheus-TTS-pl-v3.0
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Orpheus-TTS-pl-v3.0-GGUF).***
I have not provided weighted/imatrix quants at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Orpheus-TTS-pl-v3.0-GGUF/resolve/main/Orpheus-TTS-pl-v3.0.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Orpheus-TTS-pl-v3.0-GGUF/resolve/main/Orpheus-TTS-pl-v3.0.Q3_K_S.gguf) | Q3_K_S | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Orpheus-TTS-pl-v3.0-GGUF/resolve/main/Orpheus-TTS-pl-v3.0.Q3_K_M.gguf) | Q3_K_M | 1.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Orpheus-TTS-pl-v3.0-GGUF/resolve/main/Orpheus-TTS-pl-v3.0.Q3_K_L.gguf) | Q3_K_L | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Orpheus-TTS-pl-v3.0-GGUF/resolve/main/Orpheus-TTS-pl-v3.0.IQ4_XS.gguf) | IQ4_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Orpheus-TTS-pl-v3.0-GGUF/resolve/main/Orpheus-TTS-pl-v3.0.Q4_K_S.gguf) | Q4_K_S | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orpheus-TTS-pl-v3.0-GGUF/resolve/main/Orpheus-TTS-pl-v3.0.Q4_K_M.gguf) | Q4_K_M | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Orpheus-TTS-pl-v3.0-GGUF/resolve/main/Orpheus-TTS-pl-v3.0.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Orpheus-TTS-pl-v3.0-GGUF/resolve/main/Orpheus-TTS-pl-v3.0.Q5_K_M.gguf) | Q5_K_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Orpheus-TTS-pl-v3.0-GGUF/resolve/main/Orpheus-TTS-pl-v3.0.Q6_K.gguf) | Q6_K | 2.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Orpheus-TTS-pl-v3.0-GGUF/resolve/main/Orpheus-TTS-pl-v3.0.Q8_0.gguf) | Q8_0 | 3.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Orpheus-TTS-pl-v3.0-GGUF/resolve/main/Orpheus-TTS-pl-v3.0.f16.gguf) | f16 | 6.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Gemma-2b-Uncensored-v1-GGUF
|
mradermacher
| 2025-09-22T10:16:48Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:sirev/Gemma-2b-Uncensored-v1",
"base_model:quantized:sirev/Gemma-2b-Uncensored-v1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-22T09:44:24Z |
---
base_model: sirev/Gemma-2b-Uncensored-v1
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/sirev/Gemma-2b-Uncensored-v1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Gemma-2b-Uncensored-v1-GGUF).***
I have not provided weighted/imatrix quants at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma-2b-Uncensored-v1-GGUF/resolve/main/Gemma-2b-Uncensored-v1.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2b-Uncensored-v1-GGUF/resolve/main/Gemma-2b-Uncensored-v1.Q3_K_S.gguf) | Q3_K_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2b-Uncensored-v1-GGUF/resolve/main/Gemma-2b-Uncensored-v1.Q3_K_M.gguf) | Q3_K_M | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2b-Uncensored-v1-GGUF/resolve/main/Gemma-2b-Uncensored-v1.Q3_K_L.gguf) | Q3_K_L | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2b-Uncensored-v1-GGUF/resolve/main/Gemma-2b-Uncensored-v1.IQ4_XS.gguf) | IQ4_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2b-Uncensored-v1-GGUF/resolve/main/Gemma-2b-Uncensored-v1.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2b-Uncensored-v1-GGUF/resolve/main/Gemma-2b-Uncensored-v1.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2b-Uncensored-v1-GGUF/resolve/main/Gemma-2b-Uncensored-v1.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2b-Uncensored-v1-GGUF/resolve/main/Gemma-2b-Uncensored-v1.Q5_K_M.gguf) | Q5_K_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2b-Uncensored-v1-GGUF/resolve/main/Gemma-2b-Uncensored-v1.Q6_K.gguf) | Q6_K | 2.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2b-Uncensored-v1-GGUF/resolve/main/Gemma-2b-Uncensored-v1.Q8_0.gguf) | Q8_0 | 2.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2b-Uncensored-v1-GGUF/resolve/main/Gemma-2b-Uncensored-v1.f16.gguf) | f16 | 5.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
enuma-elis/gemma_27b_4bit
|
enuma-elis
| 2025-09-22T10:16:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"base_model:quantized:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
image-text-to-text
| 2025-09-22T10:14:48Z |
---
base_model: unsloth/gemma-3-27b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** enuma-elis
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-27b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. A hedged loading sketch follows below.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
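A minimal loading sketch (assuming the `image-text-to-text` pipeline in a recent transformers release plus `bitsandbytes` for the 4-bit weights; the image URL is a placeholder, and this is not the authors' published usage):
```python
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="enuma-elis/gemma_27b_4bit", device_map="auto")
messages = [{"role": "user", "content": [
    {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder URL
    {"type": "text", "text": "Describe this image."},
]}]
out = pipe(text=messages, max_new_tokens=64)
reply = out[0]["generated_text"][-1]  # last turn is the assistant reply
print(reply["content"])
```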
|
Ziad177/whisper-large-v3-qlora_
|
Ziad177
| 2025-09-22T10:14:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T10:14:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FL33TW00D-HF/dots.ocr.ne
|
FL33TW00D-HF
| 2025-09-22T10:14:44Z | 0 | 0 |
coreml
|
[
"coreml",
"base_model:rednote-hilab/dots.ocr",
"base_model:quantized:rednote-hilab/dots.ocr",
"license:mit",
"region:us"
] | null | 2025-09-22T10:11:21Z |
---
license: mit
base_model:
- rednote-hilab/dots.ocr
library_name: coreml
---
# dots.ocr.ne
CoreML conversions of [dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr) by RedNote
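A minimal inspection sketch (assuming `coremltools`; the package filename here is hypothetical, so check the Files tab for the real one):
```python
import coremltools as ct

# Hypothetical filename -- substitute the actual .mlpackage from this repo.
model = ct.models.MLModel("dots_ocr.mlpackage")
print(model.get_spec().description)  # inputs/outputs of the converted model
```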
|
Alicia22/22SAT_KK10_l13
|
Alicia22
| 2025-09-22T10:13:43Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-22T10:10:36Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
aleksa-codes/flux-ghibsky-illustration
|
aleksa-codes
| 2025-09-22T10:11:58Z | 6,635 | 295 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"image-generation",
"flux",
"replicate",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-08-20T13:59:25Z |
---
tags:
- text-to-image
- diffusers
- lora
- template:sd-lora
- image-generation
- flux
- replicate
pipeline_tag: text-to-image
thumbnail: >-
https://tjzk.replicate.delivery/models_models_cover_image/e5bc70de-c6ae-497f-bf2c-7e81b1183f05/out-0.jpg
widget:
- text: >-
GHIBSKY style, a cat on a windowsill gazing out at a starry night sky and
distant city lights
output:
url: images/example1.jpg
- text: >-
GHIBSKY style, a fisherman casting a line into a peaceful village lake
surrounded by quaint cottages
output:
url: images/example2.jpg
- text: >-
GHIBSKY style, cozy mountain cabin covered in snow, with smoke curling from
the chimney and a warm, inviting light spilling through the windows
output:
url: images/example3.jpg
- text: GHIBSKY style, Mykonos
output:
url: images/example4.jpg
- text: >-
GHIBSKY style, an orange Lamborghini driving down a hill road at night with
a beautiful ocean view in the background, side view, no text
output:
url: images/example5.jpg
- text: >-
GHIBSKY style, a small Yorkie on a windowsill during a snowy winter night,
with a warm, cozy glow from inside and soft snowflakes drifting outside
output:
url: images/example6.jpg
- text: >-
GHIBSKY style, serene Japanese garden with a koi pond and a traditional tea
house, nestled under a canopy of cherry blossoms in full bloom
output:
url: images/example7.jpg
- text: GHIBSKY style, the most beautiful place in the universe
output:
url: images/example8.jpg
- text: GHIBSKY style painting, sign saying "Flux Ghibsky"
output:
url: images/example_dj4xgd39e.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: GHIBSKY style
license: other
license_name: flux-dev-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Flux Ghibsky Illustration: Create Serene and Enchanting Landscapes
<Gallery />
## Model Description
The Flux Ghibsky Illustration model generates landscapes that blend serene, surreal skies with intricate, Ghibli-inspired details. This fusion of styles creates enchanting scenes that capture the essence of both Ghibli's whimsical charm and Makoto Shinkai's atmospheric beauty. Perfect for creating dreamy visuals. You can also run the model on Replicate. Feedback is welcome!
[Replicate Model Page](https://replicate.com/aleksa-codes/flux-ghibsky-illustration)
## Trigger Words
Use `GHIBSKY style` to invoke the model’s unique aesthetic. It’s best to start your prompt with the trigger word, followed by descriptions of your scene, such as nature, skies, houses, roads, villages, etc.
If you are getting too realistic images, try adding `painting` to your prompt, for example: `GHIBSKY style painting`.
## Training Details
- **Trained Using**: [Flux LoRA Fast Training on fal.ai](https://fal.ai/models/fal-ai/flux-lora-fast-training) and [Flux LoRA Trainer on Replicate](https://replicate.com/ostris/flux-dev-lora-trainer/train)
- **Number of Images**: 35
- **Trigger Word**: `GHIBSKY`
- **Auto-captioning**: Enabled
- **Auto-captioning Prefix**: `""`
- **Auto-captioning Suffix**: `", GHIBSKY style"`
- **Training Steps**: 1000
- **Learning Rate**: 0.0004
- **Batch Size**: 1
- **LoRA Rank**: 16
## Download Model
[Download the *.safetensors LoRA](https://huggingface.co/aleksa-codes/flux-ghibsky-illustration/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```python
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('aleksa-codes/flux-ghibsky-illustration', weight_name='lora.safetensors')
image = pipeline('GHIBSKY style, a serene lakeside village with colorful houses and towering mountains under a dreamy sky').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
# Related Tools
* **[UnYellowGPT](https://unyellowgpt.com/):** Noticing a yellow or sepia tint in your AI-generated images? This one-click tool intelligently removes unwanted color casts, restoring the natural white balance and vibrancy to your visuals.
* **[GPT Image Captioner](https://gptcaptioner.aleksa.codes/):** If you're training your own LoRA model, this open-source tool I created is a great replacement for standard auto-captioning. It generates high-quality descriptive `.txt` files for your images, supporting both OpenAI and local inference with Ollama.
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
sayannath/gpt-oss-20b-medical-qa
|
sayannath
| 2025-09-22T10:10:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T19:03:18Z |
---
base_model: openai/gpt-oss-20b
library_name: transformers
model_name: gpt-oss-20b-medical-qa
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gpt-oss-20b-medical-qa
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sayannath/gpt-oss-20b-medical-qa", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sayannath235/LLM-Recipe/runs/q1iyzxbm)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.0
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
MattBou00/llama-3-2-1b-detox_RETRY_SAMPLING_scale10_Round3-checkpoint-epoch-80
|
MattBou00
| 2025-09-22T10:08:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T10:07:54Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_RETRY_SAMPLING_scale10_Round3-checkpoint-epoch-80")  # Hub repo id (the original snippet used a local training path)
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_RETRY_SAMPLING_scale10_Round3-checkpoint-epoch-80")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_RETRY_SAMPLING_scale10_Round3-checkpoint-epoch-80")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
nnilayy/dreamer_stride_256-binary-arousal-Kfold-3-stride_256
|
nnilayy
| 2025-09-22T10:07:44Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-09-22T10:07:38Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration (a loading sketch follows the links below):
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
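A minimal loading sketch of the mixin pattern (the real architecture is not referenced here, so `DreamerClassifier` and its constructor arguments are hypothetical stand-ins; loading only succeeds with the original class and config):
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class DreamerClassifier(nn.Module, PyTorchModelHubMixin):
    """Hypothetical stand-in for the original model class."""
    def __init__(self, hidden_size: int = 128, num_classes: int = 2):
        super().__init__()
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        return self.head(x)

# from_pretrained restores the saved weights and the saved init config.
model = DreamerClassifier.from_pretrained(
    "nnilayy/dreamer_stride_256-binary-arousal-Kfold-3-stride_256"
)
```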
|
martijn75/token_voc
|
martijn75
| 2025-09-22T10:06:57Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-01-10T10:40:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
quablab/SmolLM3-Custom-SFT
|
quablab
| 2025-09-22T10:03:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"smollm3",
"text-generation",
"generated_from_trainer",
"smol-course",
"instruction-tuning",
"sft",
"hf_jobs",
"trl",
"conversational",
"base_model:HuggingFaceTB/SmolLM3-3B",
"base_model:finetune:HuggingFaceTB/SmolLM3-3B",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T09:53:52Z |
---
base_model: HuggingFaceTB/SmolLM3-3B
library_name: transformers
model_name: SmolLM3-Custom-SFT
tags:
- generated_from_trainer
- smol-course
- instruction-tuning
- sft
- hf_jobs
- trl
licence: license
---
# Model Card for SmolLM3-Custom-SFT
This model is a fine-tuned version of [HuggingFaceTB/SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="quablab/SmolLM3-Custom-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.5.1
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
John6666/illustrious-pixel-art-from-hades-v4-series-v-40-sdxl
|
John6666
| 2025-09-22T10:03:33Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pixel art",
"2D",
"retro",
"indie",
"clean lines",
"sharp detail",
"consistent palettes",
"adherence",
"perspective",
"poses",
"consistency",
"game assets",
"visual fidelity",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-09-22T09:51:57Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pixel art
- 2D
- retro
- indie
- clean lines
- sharp detail
- consistent palettes
- adherence
- perspective
- poses
- consistency
- game assets
- visual fidelity
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is [here](https://civitai.com/models/1732312/illustrious-pixelart-from-hades?modelVersionId=2239694).
This model was created by [DeViLDoNia](https://civitai.com/user/DeViLDoNia).
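A minimal usage sketch, assuming the repository hosts standard SDXL weights (the tags list `StableDiffusionXLPipeline`); the prompt and settings are illustrative, not recommendations from the author:
```python
# Sketch: load the SDXL checkpoint with diffusers and render a pixel-art prompt.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/illustrious-pixel-art-from-hades-v4-series-v-40-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("pixel art, retro game character, clean lines, consistent palette").images[0]
image.save("pixel_art.png")
```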
|
MattBou00/llama-3-2-1b-detox_RETRY_SAMPLING_scale10_Round3-checkpoint-epoch-40
|
MattBou00
| 2025-09-22T10:02:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T10:01:17Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline

# Load the fine-tuned policy model from the Hub
generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_RETRY_SAMPLING_scale10_Round3-checkpoint-epoch-40")
outputs = generator("Hello, my llama is cute")
print(outputs[0]["generated_text"])
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_RETRY_SAMPLING_scale10_Round3-checkpoint-epoch-40")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_RETRY_SAMPLING_scale10_Round3-checkpoint-epoch-40")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
# The forward pass returns a tuple (lm_logits, loss, value), where the value
# head provides a per-token scalar estimate used during PPO training.
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
MattBou00/llama-3-2-1b-detox_RETRY_SAMPLING_scale10_Round3-checkpoint-epoch-20
|
MattBou00
| 2025-09-22T09:59:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T09:58:05Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline

# Load the fine-tuned policy model from the Hub
generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_RETRY_SAMPLING_scale10_Round3-checkpoint-epoch-20")
outputs = generator("Hello, my llama is cute")
print(outputs[0]["generated_text"])
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_RETRY_SAMPLING_scale10_Round3-checkpoint-epoch-20")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_RETRY_SAMPLING_scale10_Round3-checkpoint-epoch-20")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
# The forward pass returns a tuple (lm_logits, loss, value), where the value
# head provides a per-token scalar estimate used during PPO training.
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
sayouzone25/gemma-3-12b-trans-en-ko
|
sayouzone25
| 2025-09-22T09:58:53Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-12b-pt",
"base_model:finetune:google/gemma-3-12b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-09-18T09:32:09Z |
---
base_model: google/gemma-3-12b-pt
library_name: transformers
model_name: gemma-3-12b-trans-en-ko
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-3-12b-trans-en-ko
This model is a fine-tuned version of [google/gemma-3-12b-pt](https://huggingface.co/google/gemma-3-12b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sayouzone25/gemma-3-12b-trans-en-ko", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
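Since the model name indicates English-to-Korean translation fine-tuning, a translation-style prompt is likely more representative than the generic question above; this is an assumed usage pattern, not one documented in the card:
```python
# Assumed usage: English -> Korean translation prompt (reuses `generator` from above).
prompt = [{"role": "user", "content": "Translate the following sentence into Korean: The weather is nice today."}]
output = generator(prompt, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```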
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.3.2
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|