modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
Khoa/masan-group-bert-multi-label-0925
|
Khoa
| 2025-09-23T04:49:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T04:42:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
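Since the card leaves this section blank, here is a minimal sketch based only on the repository tags (`roberta`, `text-classification`) and the repo name, which suggests a multi-label classifier; label names and thresholds are assumptions, so check the repository config for specifics.

```python
from transformers import pipeline

# Minimal sketch (assumption): standard text-classification pipeline for this repo.
# top_k=None returns a score for every label, which is the usual setup for multi-label models.
classifier = pipeline(
    "text-classification",
    model="Khoa/masan-group-bert-multi-label-0925",
    top_k=None,
)

print(classifier("Sample input text for the classifier."))
```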
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ChenWu98/numina_qwen_2.5_3b_sft_numina_10k_cluster2_split_1
|
ChenWu98
| 2025-09-23T04:44:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T04:42:43Z |
---
base_model: Qwen/Qwen2.5-3B
library_name: transformers
model_name: numina_qwen_2.5_3b_sft_numina_10k_cluster2_split_1
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for numina_qwen_2.5_3b_sft_numina_10k_cluster2_split_1
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_3b_sft_numina_10k_cluster2_split_1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/x2651zy3)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
AzzamAhmed/whisper-arabic-msa-peft-42
|
AzzamAhmed
| 2025-09-23T04:43:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T04:43:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zenlm/zen-1
|
zenlm
| 2025-09-23T04:42:05Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"zen",
"hanzo",
"fine-tuned",
"mcp",
"general",
"text-generation",
"conversational",
"en",
"code",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-09-23T04:41:43Z |
---
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- zen
- hanzo
- fine-tuned
- mcp
- general
language:
- en
- code
pipeline_tag: text-generation
---
# Zen-1
Base model of the Zen-1 family, fine-tuned for advanced language understanding.
## Key Features
- **Enhanced Reasoning**: Improved chain-of-thought capabilities
- **Code Generation**: Strong performance on programming tasks
- **Instruction Following**: Precise adherence to user instructions
- **Multi-turn Dialogue**: Coherent conversation handling
- **Technical Knowledge**: Deep understanding of ML/AI concepts
## Installation
### With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("zenlm/zen-1")
tokenizer = AutoTokenizer.from_pretrained("zenlm/zen-1")
# Generate
inputs = tokenizer("Explain gradient descent", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0]))
```
### With Ollama
```bash
ollama run zenlm/zen-1
```
## Training Details
- **Method**: LoRA fine-tuning
- **Hardware**: Apple Silicon (M-series)
- **Base Model**: Qwen/Qwen2.5-0.5B-Instruct
- **Training Data**: High-quality instruction and reasoning datasets
## License
Apache 2.0
## Citation
```bibtex
@misc{zen1-2024,
title={Zen-1: Advanced Language Model},
author={Zen Team},
year={2024},
publisher={HuggingFace},
url={https://huggingface.co/zenlm/zen-1}
}
```
|
mradermacher/zeta-30b-a3b-i1-GGUF
|
mradermacher
| 2025-09-23T04:38:23Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3_moe",
"en",
"dataset:zed-industries/zeta",
"base_model:Woutermans/zeta-30b-a3b",
"base_model:quantized:Woutermans/zeta-30b-a3b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-22T20:25:15Z |
---
base_model: Woutermans/zeta-30b-a3b
datasets:
- zed-industries/zeta
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3_moe
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Woutermans/zeta-30b-a3b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#zeta-30b-a3b-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/zeta-30b-a3b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
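If you only need a single file, a minimal download sketch with `huggingface_hub` (assuming the Q4_K_M quant from the table below) looks like this:

```python
from huggingface_hub import hf_hub_download

# Minimal sketch: fetch one quant from this repo before loading it with a GGUF runtime
# such as llama.cpp. The chosen filename is the Q4_K_M entry from the table below.
path = hf_hub_download(
    repo_id="mradermacher/zeta-30b-a3b-i1-GGUF",
    filename="zeta-30b-a3b.i1-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```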
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.imatrix.gguf) | imatrix | 0.2 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ1_S.gguf) | i1-IQ1_S | 6.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ1_M.gguf) | i1-IQ1_M | 7.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ2_S.gguf) | i1-IQ2_S | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ2_M.gguf) | i1-IQ2_M | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 10.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q2_K.gguf) | i1-Q2_K | 11.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 11.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 13.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ3_S.gguf) | i1-IQ3_S | 13.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ3_M.gguf) | i1-IQ3_M | 13.6 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 14.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 16.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 16.5 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q4_0.gguf) | i1-Q4_0 | 17.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 17.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q4_1.gguf) | i1-Q4_1 | 19.3 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/zeta-30b-a3b-i1-GGUF/resolve/main/zeta-30b-a3b.i1-Q6_K.gguf) | i1-Q6_K | 25.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
nikhil958/my-tesing-model
|
nikhil958
| 2025-09-23T04:37:14Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T04:37:14Z |
---
license: apache-2.0
---
|
seraphimzzzz/1170445
|
seraphimzzzz
| 2025-09-23T04:33:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:33:54Z |
[View on Civ Archive](https://civarchive.com/models/1125770?modelVersionId=1265362)
|
Mts-07/MamataGreen_01
|
Mts-07
| 2025-09-23T04:33:42Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T04:33:42Z |
---
license: apache-2.0
---
|
ultratopaz/86121
|
ultratopaz
| 2025-09-23T04:33:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:33:20Z |
[View on Civ Archive](https://civarchive.com/models/111307?modelVersionId=120077)
|
amethyst9/1221856
|
amethyst9
| 2025-09-23T04:32:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:32:40Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1317830)
|
ultratopaz/987605
|
ultratopaz
| 2025-09-23T04:32:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:32:26Z |
[View on Civ Archive](https://civarchive.com/models/44600?modelVersionId=1082621)
|
crystalline7/55795
|
crystalline7
| 2025-09-23T04:32:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:32:20Z |
[View on Civ Archive](https://civarchive.com/models/76646?modelVersionId=81420)
|
ultratopaz/47731
|
ultratopaz
| 2025-09-23T04:32:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:32:11Z |
[View on Civ Archive](https://civarchive.com/models/63682?modelVersionId=68243)
|
amethyst9/1817042
|
amethyst9
| 2025-09-23T04:31:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:31:56Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1918693)
|
crystalline7/1075997
|
crystalline7
| 2025-09-23T04:31:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:31:38Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1170688)
|
sumukha2002/carnatic-raga-classifier-lgbm
|
sumukha2002
| 2025-09-23T04:31:31Z | 0 | 0 | null |
[
"joblib",
"audio-classification",
"music",
"carnatic-music",
"raga-identification",
"lightgbm",
"license:mit",
"region:us"
] |
audio-classification
| 2025-09-23T04:31:26Z |
---
license: mit
tags: [audio-classification, music, carnatic-music, raga-identification, lightgbm]
---
# Carnatic Raga Identification Model
This is a LightGBM model trained to classify 15 Carnatic Ragas from statistical features derived from pitch contours.
- **Model Type:** LightGBM
- **Accuracy:** Achieved an average cross-validation accuracy of **84.44%**.
- **Ragas:** Behāg, Bhairavi, Bēgaḍa, Kamās, Kāmavardani, Madhyamāvati, Mukhāri, Mōhanaṁ, Sindhubhairavi, Suraṭi, Sāvēri, Tōḍi, Varāḷi, Ānandabhairavi, Ṣanmukhapriya
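As a rough starting point, a minimal loading sketch is shown below; it assumes the classifier is shipped as a joblib file and that you already have a pitch-contour feature vector in the training feature order. The file name is hypothetical, so check the repository files for the actual name.

```python
import joblib
import numpy as np
from huggingface_hub import hf_hub_download

# Minimal sketch (assumptions): the trained LightGBM classifier is stored as "model.joblib",
# and it expects a row of pitch-contour statistics in the same order used during training.
model_path = hf_hub_download(
    repo_id="sumukha2002/carnatic-raga-classifier-lgbm",
    filename="model.joblib",  # hypothetical file name; check the repo's Files tab
)
clf = joblib.load(model_path)

features = np.zeros((1, clf.n_features_in_))  # placeholder feature vector, not real pitch statistics
print(clf.predict(features))  # predicted raga label
```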
|
ultratopaz/1192063
|
ultratopaz
| 2025-09-23T04:31:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:31:27Z |
[View on Civ Archive](https://civarchive.com/models/76646?modelVersionId=1287568)
|
ultratopaz/1225833
|
ultratopaz
| 2025-09-23T04:31:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:31:07Z |
[View on Civ Archive](https://civarchive.com/models/1125770?modelVersionId=1321909)
|
crystalline7/1199519
|
crystalline7
| 2025-09-23T04:30:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:30:49Z |
[View on Civ Archive](https://civarchive.com/models/76646?modelVersionId=1295270)
|
jhuapl-bio/microbert
|
jhuapl-bio
| 2025-09-23T04:30:44Z | 0 | 0 | null |
[
"joblib",
"safetensors",
"metagenomics",
"taxonomic-classification",
"antimicrobial-resistance",
"pathogen-detection",
"text-classification",
"en",
"base_model:InstaDeepAI/nucleotide-transformer-v2-50m-multi-species",
"base_model:finetune:InstaDeepAI/nucleotide-transformer-v2-50m-multi-species",
"license:mit",
"region:us"
] |
text-classification
| 2025-09-11T01:30:50Z |
---
license: mit
language:
- en
base_model:
- LongSafari/hyenadna-large-1m-seqlen-hf
- zhihan1996/DNABERT-2-117M
- InstaDeepAI/nucleotide-transformer-v2-50m-multi-species
pipeline_tag: text-classification
tags:
- metagenomics
- taxonomic-classification
- antimicrobial-resistance
- pathogen-detection
---
# Genomic Language Models for Metagenomic Sequence Analysis
We provide genomic language models fine-tuned for the following tasks:
- **Taxonomic hierarchical classification**
- **Anti-microbial resistance gene identification**
- **Pathogenicity detection**
See [code](https://github.com/jhuapl-bio/microbert) for details on fine-tuning, evaluation, and implementation.
These are the official models implemented in [Evaluating the Effectiveness of Parameter-Efficient Fine-Tuning in Genomic Classification Tasks](https://www.biorxiv.org/content/10.1101/2025.08.21.671544v1).
---
## Pretrained Foundation Models
Our models are built upon several pretrained genomic foundation models:
### Nucleotide Transformer (NT)
- [InstaDeepAI/nucleotide-transformer-v2-50m-multi-species](https://huggingface.co/InstaDeepAI/nucleotide-transformer-v2-50m-multi-species)
- [InstaDeepAI/nucleotide-transformer-v2-100m-multi-species](https://huggingface.co/InstaDeepAI/nucleotide-transformer-v2-100m-multi-species)
- [InstaDeepAI/nucleotide-transformer-v2-250m-multi-species](https://huggingface.co/InstaDeepAI/nucleotide-transformer-v2-250m-multi-species)
### DNABERT
- [zhihan1996/DNABERT-2-117M](https://huggingface.co/zhihan1996/DNABERT-2-117M)
- [zhihan1996/DNABERT-S](https://huggingface.co/zhihan1996/DNABERT-S)
### HyenaDNA
- [LongSafari/hyenadna-large-1m-seqlen-hf](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen-hf)
- [LongSafari/hyenadna-medium-450k-seqlen-hf](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen-hf)
- [LongSafari/hyenadna-medium-160k-seqlen-hf](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen-hf)
- [LongSafari/hyenadna-small-32k-seqlen-hf](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen-hf)
We sincerely thank the teams behind NT, DNABERT, and HyenaDNA for making their tokenizers and pre-trained models available for use :)
---
## Available Fine-Tuned Models
We provide the following fine-tuned models:
- `taxonomy/DNABERT-2-117M-taxonomy`
- `taxonomy/hyenadna-large-1m-seqlen-hf-taxonomy`
- `taxonomy/nucleotide-transformer-v2-50m-multi-species-taxonomy`
- `amr/binary/hyenadna-small-32k-seqlen-hf`
- `amr/binary/nucleotide-transformer-v2-100m-multi-species`
- `amr/multiclass/DNABERT-S`
- `amr/multiclass/hyenadna-medium-450k-seqlen-hf`
- `amr/multiclass/nucleotide-transformer-v2-250m-multi-species`
- `pathogenicity/hyenadna-small-32k-seqlen-hf-DeePaC-fungal`
- `pathogenicity/hyenadna-small-32k-seqlen-hf-DeePaC-viral`
- `pathogenicity/hyenadna-small-32k-seqlen-hf-DeepSim-bacterial`
- `pathogenicity/hyenadna-small-32k-seqlen-hf-DeepSim-viral`
- `pathogenicity/nucleotide-transformer-v2-50m-multi-species-DeePaC-fungal`
- `pathogenicity/nucleotide-transformer-v2-50m-multi-species-DeePaC-viral`
- `pathogenicity/nucleotide-transformer-v2-50m-multi-species-DeepSim-bacterial`
- `pathogenicity/nucleotide-transformer-v2-50m-multi-species-DeepSim-viral`
To use these models, download the corresponding directories from this repository and follow the installation instructions in our [code](https://github.com/jhuapl-bio/microbert).
There are two modes of operation: setup from source code, or setup from our pre-built [docker image](https://hub.docker.com/r/jhuaplbio/microbert-classify).
Assuming you have set up from source and downloaded the model directories, the following sample code runs inference:
```python
import json
from pathlib import Path

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer
from safetensors.torch import load_file

from analysis.experiment.utils.data_processor import DataProcessor
from analysis.experiment.models.hierarchical_model import (
    HierarchicalClassificationModel,
)

# Replace with the base directory containing the data processor, base model tokenizer, and trained model weight files
model_dir = Path("data/LongSafari__hyenadna-large-1m-seqlen-hf")
data_processor_dir = model_dir / "data_processor"  # replace with directory containing your data processor
metadata_path = data_processor_dir / "metadata.json"
base_model_dir = model_dir / "base_model"  # replace with directory containing your base model files
trained_model_dir = model_dir / "model"  # replace with directory containing your trained model files
trained_model_path = trained_model_dir / "model.safetensors"

# Load metadata
with open(metadata_path, "r") as f:
    metadata = json.load(f)
sequence_column = metadata["sequence_column"]
labels = metadata["labels"]

data_processor_filename = "data_processor.pkl"

# Load data processor
data_processor = DataProcessor(
    sequence_column=sequence_column,
    labels=labels,
    save_file=data_processor_filename,
)
data_processor.load_processor(data_processor_dir)

# Get metadata-driven values
num_labels = data_processor.num_labels
class_weights = data_processor.class_weights

# Load tokenizer from Hugging Face Hub or local path
tokenizer = AutoTokenizer.from_pretrained(
    pretrained_model_name_or_path=base_model_dir.as_posix(),
    trust_remote_code=True,
    local_files_only=True,
)

# Load fine-tuned model weights
model = HierarchicalClassificationModel(base_model_dir.as_posix(), num_labels, class_weights)
state_dict = load_file(trained_model_path)
model.load_state_dict(state_dict, strict=False)

sequence = "ATCG"  # example input sequence

# Run inference
tokenized_input = tokenizer(
    sequence,
    return_tensors="pt",  # Return results as PyTorch tensors
)
with torch.no_grad():
    outputs = model(**tokenized_input)

for idx, col in enumerate(labels):
    logits = outputs["logits"][idx]  # [num_classes]
    probs = F.softmax(logits, dim=-1).cpu()
    topk = torch.topk(probs, k=1, dim=-1)
    topk_index = topk.indices.numpy().ravel()
    topk_prob = topk.values
    topk_label = data_processor.encoders[col].inverse_transform(topk_index)
    print(f"{col}: {topk_label[0]} (p={topk_prob.item():.3f})")  # top prediction for this label level
```
---
## Authors & Contact
- Daniel Berman — [email protected]
- Daniel Jimenez — [email protected]
- Stanley Ta — [email protected]
- Brian Merritt — [email protected]
- Jeremy Ratcliff — [email protected]
- Vijay Narayan — [email protected]
- Molly Gallaghar - [email protected]
---
## Acknowledgement
This work was supported by funding from the **U.S. Centers for Disease Control and Prevention** through the **Office of Readiness and Response** under **Contract # 75D30124C20202**.
|
crystalline7/1130561
|
crystalline7
| 2025-09-23T04:29:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:29:32Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1225191)
|
ultratopaz/1190087
|
ultratopaz
| 2025-09-23T04:29:15Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:29:13Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1285563)
|
amethyst9/1917781
|
amethyst9
| 2025-09-23T04:28:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:28:05Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=2020779)
|
amethyst9/1018865
|
amethyst9
| 2025-09-23T04:27:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:27:26Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1114045)
|
ultratopaz/936507
|
ultratopaz
| 2025-09-23T04:27:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:27:19Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1030703)
|
crystalline7/1231166
|
crystalline7
| 2025-09-23T04:27:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:27:07Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1327252)
|
crystalline7/929151
|
crystalline7
| 2025-09-23T04:26:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:26:59Z |
[View on Civ Archive](https://civarchive.com/models/914441?modelVersionId=1023456)
|
amethyst9/909691
|
amethyst9
| 2025-09-23T04:26:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:26:52Z |
[View on Civ Archive](https://civarchive.com/models/897219?modelVersionId=1003980)
|
Anhlq/qwen-2.5-1b-exercise-instruct-23.09
|
Anhlq
| 2025-09-23T04:24:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T04:21:38Z |
---
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Anhlq
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-1.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
redflagcitymayor/wan2.2-i2v-missionary-high-14b-lora
|
redflagcitymayor
| 2025-09-23T04:24:25Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"license:mit",
"region:us"
] |
text-to-image
| 2025-09-23T04:24:14Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/black.jpg
text: .
parameters:
negative_prompt: .
base_model: ''
instance_prompt: >-
A naked man and a naked woman having sex. He thrusts his penis in and out of
her vagina.
license: mit
---
# WAN 2.2 Missionary High 14B Lora
<Gallery />
## Trigger words
You should use `A naked man and a naked woman having sex. He thrusts his penis in and out of her vagina.` to trigger the image generation.
## Download model
[Download](/redflagcitymayor/wan2.2-i2v-missionary-high-14b-lora/tree/main) them in the Files & versions tab.
|
rejauldu/bn-gpt2-finetuned
|
rejauldu
| 2025-09-23T04:23:58Z | 165 | 0 | null |
[
"safetensors",
"gpt2",
"finetuned",
"bengali",
"text-generation",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-09-17T13:27:43Z |
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
- bengali
inference: true
widget:
- text: My name Rejaul.
---
# Bengali GPT-2
This is a GPT-2 model **finetuned on Bengali Wikipedia**. It is designed for **text generation** in Bengali.
## Model Details
- **Base model**: GPT-2
- **Tokenizer**: Custom Bengali tokenizer (ByteLevel BPE)
- **Language**: Bengali (bn)
- **Task**: Text generation (causal language modeling)
- **Training data**: Cleaned and deduplicated Bengali Wikipedia dump
- **License**: Apache 2.0
---
## Usage
```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
# Load tokenizer and model from Hugging Face
tokenizer = GPT2TokenizerFast.from_pretrained("rejauldu/bn-gpt2-finetuned")
model = GPT2LMHeadModel.from_pretrained("rejauldu/bn-gpt2-finetuned")
# Generate text
inputs = tokenizer("বাংলায় স্বাগত", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0]))
```
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758601073
|
poolkiltzn
| 2025-09-23T04:19:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T04:18:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nikilr/zephyr_catorig
|
nikilr
| 2025-09-23T04:18:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T04:17:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
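Since the card leaves this section blank, here is a minimal sketch based only on the repository tags (`mistral`, `text-generation`, `conversational`); the prompt and generation settings are assumptions.

```python
from transformers import pipeline

# Minimal sketch (assumption): standard chat-style text-generation pipeline for this repo.
generator = pipeline("text-generation", model="nikilr/zephyr_catorig")

messages = [{"role": "user", "content": "Summarize what a language model is in one sentence."}]
output = generator(messages, max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```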
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Bobedge/deepseek_r1_GRPO_finetune
|
Bobedge
| 2025-09-23T04:17:28Z | 0 | 0 | null |
[
"safetensors",
"unsloth",
"license:mit",
"region:us"
] | null | 2025-09-23T03:41:20Z |
---
license: mit
tags:
- unsloth
---
|
harkase/kluebert-depression
|
harkase
| 2025-09-23T04:15:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T04:15:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
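Since the card leaves this section blank, here is a minimal sketch based only on the repository tags (`bert`, `text-classification`) and the repo name, which suggests a KLUE-BERT-based depression classifier for Korean text; the label meanings are not documented here and are assumptions.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Minimal sketch (assumption): standard sequence-classification API for this repo.
tokenizer = AutoTokenizer.from_pretrained("harkase/kluebert-depression")
model = AutoModelForSequenceClassification.from_pretrained("harkase/kluebert-depression")

inputs = tokenizer("오늘은 아무것도 하기 싫다.", return_tensors="pt")  # example Korean sentence
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))  # label name if configured, otherwise the class index
```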
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BRlkl/TCC_30
|
BRlkl
| 2025-09-23T04:15:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T04:14:44Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BRlkl
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ShaunMendes001/llama-3.2-1b-instruct-customer-support-gguf
|
ShaunMendes001
| 2025-09-23T04:12:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T04:12:03Z |
# Llama 3.2 1B Instruct - Customer Support (GGUF)
This repository contains GGUF versions of the fine-tuned Llama 3.2 1B model for customer support.
## Available Models
- `customer-support-q4_k_m.gguf` - 4-bit quantized (smallest size, good for resource-constrained environments)
- `customer-support-q8_0.gguf` - 8-bit quantized (good balance of size and quality)
- `customer-support-q5_k_m.gguf` - 5-bit quantized (middle ground)
## Usage
These models can be used with:
- [llama.cpp](https://github.com/ggerganov/llama.cpp)
- [Ollama](https://ollama.ai/)
- [LM Studio](https://lmstudio.ai/)
- [GPT4All](https://gpt4all.io/)
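For example, a minimal sketch with `llama-cpp-python` (the Python bindings for llama.cpp, one of the runtimes listed above) might look like the following; the chat-formatting defaults are assumptions and may differ from how the model was fine-tuned.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Minimal sketch: download the Q4_K_M file listed above and run a chat completion with it.
model_path = hf_hub_download(
    repo_id="ShaunMendes001/llama-3.2-1b-instruct-customer-support-gguf",
    filename="customer-support-q4_k_m.gguf",
)
llm = Llama(model_path=model_path, n_ctx=2048)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "I was charged twice for my order. What should I do?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```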
## Original Model
The original model can be found at: [ShaunMendes001/llama-3.2-1b-instruct-customer-support-full](https://huggingface.co/ShaunMendes001/llama-3.2-1b-instruct-customer-support-full)
|
Bobedge/deepseek_r1_GRPO_finetune_16bit
|
Bobedge
| 2025-09-23T04:11:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T04:06:01Z |
---
base_model: unsloth/deepseek-r1-0528-qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Bobedge
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-0528-qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Dejiat/Qwen3-0.6B-Gensyn-Swarm-tricky_bipedal_crow
|
Dejiat
| 2025-09-23T04:09:26Z | 156 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am tricky_bipedal_crow",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-15T01:09:29Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am tricky_bipedal_crow
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
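Since the card leaves this section blank, here is a minimal sketch based only on the repository tags (`qwen3`, `text-generation`, `conversational`); the prompt and generation settings are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch (assumption): standard causal-LM chat-template flow for this repo.
repo = "Dejiat/Qwen3-0.6B-Gensyn-Swarm-tricky_bipedal_crow"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

messages = [{"role": "user", "content": "Explain GRPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```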
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758600456
|
poolkiltzn
| 2025-09-23T04:08:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T04:08:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Comfy-Org/Qwen-Image-Edit_ComfyUI
|
Comfy-Org
| 2025-09-23T04:07:54Z | 302,145 | 127 |
diffusion-single-file
|
[
"diffusion-single-file",
"comfyui",
"license:apache-2.0",
"region:us"
] | null | 2025-08-19T02:18:21Z |
---
license: apache-2.0
tags:
- diffusion-single-file
- comfyui
---
|
jackbrosgol/gemma-circuits-12b
|
jackbrosgol
| 2025-09-23T04:03:55Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-12b-pt",
"base_model:finetune:google/gemma-3-12b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T03:53:13Z |
---
base_model: google/gemma-3-12b-pt
library_name: transformers
model_name: gemma-circuits-12b
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-circuits-12b
This model is a fine-tuned version of [google/gemma-3-12b-pt](https://huggingface.co/google/gemma-3-12b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jackbrosgol/gemma-circuits-12b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 3.3.2
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF
|
mradermacher
| 2025-09-23T04:00:08Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"QiMing",
"vllm",
"writer",
"long",
"saas",
"fine-tuned",
"instruction-following",
"role-playing",
"cognitive-simulator",
"en",
"zh",
"base_model:aifeifei798/QiMing-LongWriter-20B-MXFP4",
"base_model:quantized:aifeifei798/QiMing-LongWriter-20B-MXFP4",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-22T19:06:11Z |
---
base_model: aifeifei798/QiMing-LongWriter-20B-MXFP4
language:
- en
- zh
library_name: transformers
license: apache-2.0
model_name: QiMing-LongWriter-20B
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- unsloth
- QiMing
- vllm
- writer
- long
- saas
- fine-tuned
- instruction-following
- role-playing
- cognitive-simulator
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: MXFP4_MOE Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/aifeifei798/QiMing-LongWriter-20B-MXFP4
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#QiMing-LongWriter-20B-MXFP4-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
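As a quick local smoke test, here is a minimal sketch (not part of the original card) that downloads one of the quants listed in the table below and runs it with `llama-cpp-python`; the filename comes from that table, while the prompt, context size, and token limit are illustrative.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant from this repo (filename taken from the table below)
gguf_path = hf_hub_download(
    repo_id="mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF",
    filename="QiMing-LongWriter-20B-MXFP4.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
)

# Load and run the model locally; the settings here are illustrative
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a one-paragraph story about a lighthouse.", max_tokens=256)
print(out["choices"][0]["text"])
```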
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-IQ1_M.gguf) | i1-IQ1_M | 12.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-IQ1_S.gguf) | i1-IQ1_S | 12.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-IQ2_XS.gguf) | i1-IQ2_XS | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-Q3_K_S.gguf) | i1-Q3_K_S | 12.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-IQ2_M.gguf) | i1-IQ2_M | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-IQ2_S.gguf) | i1-IQ2_S | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-IQ3_S.gguf) | i1-IQ3_S | 12.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-IQ3_XS.gguf) | i1-IQ3_XS | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-Q2_K.gguf) | i1-Q2_K | 12.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-Q2_K_S.gguf) | i1-Q2_K_S | 12.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-Q4_0.gguf) | i1-Q4_0 | 12.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-IQ3_M.gguf) | i1-IQ3_M | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-Q3_K_M.gguf) | i1-Q3_K_M | 13.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-Q3_K_L.gguf) | i1-Q3_K_L | 13.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-Q4_1.gguf) | i1-Q4_1 | 13.5 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-Q4_K_S.gguf) | i1-Q4_K_S | 14.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-Q4_K_M.gguf) | i1-Q4_K_M | 15.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.0 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-LongWriter-20B-MXFP4-i1-GGUF/resolve/main/QiMing-LongWriter-20B-MXFP4.i1-Q6_K.gguf) | i1-Q6_K | 22.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758599833
|
poolkiltzn
| 2025-09-23T03:58:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T03:58:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ntsmarkv/bygheart-empathy-llama3.1-8b
|
ntsmarkv
| 2025-09-23T03:56:22Z | 0 | 1 |
peft
|
[
"peft",
"safetensors",
"empathy",
"emotional-support",
"mental-health",
"fine-tuned",
"llama3.1",
"bygheart",
"lora",
"text-generation",
"conversational",
"en",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] |
text-generation
| 2025-09-23T03:56:11Z |
---
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- empathy
- emotional-support
- mental-health
- fine-tuned
- llama3.1
- bygheart
- peft
- lora
language:
- en
pipeline_tag: text-generation
---
# 🦥💝 BygHeart Empathy Model - Llama 3.1 8B
**Achieved 4.54/5 empathy score - exceeding production targets!**
## Model Description
This is a fine-tuned version of Llama 3.1 8B Instruct, specifically optimized for empathetic conversations and emotional support. The model has been trained using the BygHeart empathy framework across 5 key dimensions:
- **Recognition** (5/5): Identifying and acknowledging emotions
- **Validation** (4/5): Legitimizing feelings and experiences
- **Coherence** (3/5): Logical, relevant, well-structured responses
- **Personalization** (5/5): Tailoring responses to individual context
- **Support** (5/5): Providing emotional assistance and guidance
## Performance
- **Overall Empathy Score**: 4.54/5 🎯
- **Baseline**: 2.8/5
- **Improvement**: +62% over baseline
- **Target Achievement**: ✅ Exceeded 4.0/5 goal
## Usage
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load base model
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
# Load BygHeart empathy adapter
model = PeftModel.from_pretrained(base_model, "ntsmarkv/bygheart-empathy-llama3.1-8b")
# Generate empathetic response
prompt = '''<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are BygHeart, an empathetic AI specialized in providing emotional support.<|eot_id|><|start_header_id|>user<|end_header_id|>
I'm feeling overwhelmed with work stress.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
'''
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
## Training Details
- **Base Model**: meta-llama/Llama-3.1-8B-Instruct
- **Training Method**: LoRA (Low-Rank Adaptation)
- **Training Data**: BygHeart empathy dataset
- **Training Time**: ~12 minutes on Tesla T4
- **Memory Usage**: ~7GB peak
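The card does not publish the LoRA hyperparameters; the sketch below is a generic PEFT setup with placeholder values (the rank, alpha, dropout, and target modules are assumptions) to illustrate how such an adapter is typically configured.
```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Illustrative only: the actual rank, alpha, and target modules used for this
# adapter are not published in the card.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
lora_config = LoraConfig(
    r=16,                                                     # assumed rank
    lora_alpha=32,                                            # assumed scaling
    lora_dropout=0.05,                                        # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # common choice for Llama
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trained
```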
## Intended Use
This model is designed for:
- Emotional support chatbots
- Mental health applications
- Customer service with empathy
- Educational tools for empathy training
- Research in computational empathy
## Limitations
- Responses are generated based on training data patterns
- Not a replacement for professional mental health services
- May occasionally generate inconsistent responses
- Requires careful monitoring in production use
## Citation
```bibtex
@misc{bygheart-empathy-llama3.1-8b,
title={BygHeart Empathy Model - Llama 3.1 8B},
author={BygHeart Team},
year={2024},
url={https://huggingface.co/ntsmarkv/bygheart-empathy-llama3.1-8b}
}
```
## License
This model is based on Llama 3.1 and follows the same license terms.
|
ultratopaz/1285558
|
ultratopaz
| 2025-09-23T03:53:35Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:53:35Z |
[View on Civ Archive](https://civarchive.com/models/1227276?modelVersionId=1382824)
|
ifmylove2011/girlslike-qweni
|
ifmylove2011
| 2025-09-23T03:53:00Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-09-19T08:46:36Z |
---
license: mit
---
Used for testing character LoRAs trained on Qwen Image.
fp8 base model, 8-step acceleration LoRA, Chinese prompts, shift=2.0/3.0, cfg=1.0/2.0, steps=10.
# GirlsLike-Qweni 图片展示
### girlslikeqweni_lyf

### girlslikeqweni_cdl

|
crystalline7/1161442
|
crystalline7
| 2025-09-23T03:52:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:52:21Z |
[View on Civ Archive](https://civarchive.com/models/1117790?modelVersionId=1256194)
|
amethyst9/768429
|
amethyst9
| 2025-09-23T03:52:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:52:11Z |
[View on Civ Archive](https://civarchive.com/models/768415?modelVersionId=859426)
|
seraphimzzzz/1322847
|
seraphimzzzz
| 2025-09-23T03:52:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:52:02Z |
[View on Civ Archive](https://civarchive.com/models/1260314?modelVersionId=1421098)
|
seraphimzzzz/1096633
|
seraphimzzzz
| 2025-09-23T03:51:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:51:26Z |
[View on Civ Archive](https://civarchive.com/models/1061610?modelVersionId=1191381)
|
ultratopaz/918839
|
ultratopaz
| 2025-09-23T03:51:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:51:16Z |
[View on Civ Archive](https://civarchive.com/models/905477?modelVersionId=1013255)
|
seraphimzzzz/1177719
|
seraphimzzzz
| 2025-09-23T03:50:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:50:57Z |
[View on Civ Archive](https://civarchive.com/models/1132164?modelVersionId=1272796)
|
crystalline7/828729
|
crystalline7
| 2025-09-23T03:50:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:50:10Z |
[View on Civ Archive](https://civarchive.com/models/823831?modelVersionId=921235)
|
crystalline7/732978
|
crystalline7
| 2025-09-23T03:49:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:49:32Z |
[View on Civ Archive](https://civarchive.com/models/727316?modelVersionId=819235)
|
crystalline7/756037
|
crystalline7
| 2025-09-23T03:49:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:49:22Z |
[View on Civ Archive](https://civarchive.com/models/753401?modelVersionId=842471)
|
ziyan1119/Qwen3-8B-250424-gptq-32g_new
|
ziyan1119
| 2025-09-23T03:49:03Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"arxiv:2309.00071",
"4-bit",
"gptq",
"region:us"
] | null | 2025-09-23T03:38:29Z |
# Qwen3-8B
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between a thinking mode** (for complex logical reasoning, math, and coding) and **a non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-8B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint:
- vLLM:
```shell
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
```
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser deepseek-r1
```
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by vLLM and SGLang.
> Please refer to [our documentation](https://qwen.readthedocs.io/) for more details.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-8B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> **Note**
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
import os
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-8B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'What time is it?'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters** (a minimal `generate` sketch follows this list):
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
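As a minimal illustration (not part of the original card), the recommended thinking-mode samplers can be passed directly to `model.generate`; `model` and `model_inputs` are assumed to be prepared as in the Quickstart above, and `min_p` requires a recent `transformers` release.
```python
# Illustrative sketch: thinking-mode sampling settings from the list above
generated_ids = model.generate(
    **model_inputs,
    do_sample=True,         # greedy decoding is explicitly discouraged
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
    max_new_tokens=32768,   # adequate output length for most queries
)
```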
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
```
|
crystalline7/1113518
|
crystalline7
| 2025-09-23T03:48:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:48:58Z |
[View on Civ Archive](https://civarchive.com/models/1076266?modelVersionId=1208231)
|
AP123/claymation
|
AP123
| 2025-09-23T03:48:33Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T03:47:59Z |
---
license: apache-2.0
---
|
seraphimzzzz/747754
|
seraphimzzzz
| 2025-09-23T03:48:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:48:20Z |
[View on Civ Archive](https://civarchive.com/models/745669?modelVersionId=833842)
|
amethyst9/1100473
|
amethyst9
| 2025-09-23T03:48:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:48:10Z |
[View on Civ Archive](https://civarchive.com/models/1064936?modelVersionId=1195182)
|
seraphimzzzz/847521
|
seraphimzzzz
| 2025-09-23T03:48:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:48:00Z |
[View on Civ Archive](https://civarchive.com/models/840332?modelVersionId=940170)
|
seraphimzzzz/1016015
|
seraphimzzzz
| 2025-09-23T03:47:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:47:32Z |
[View on Civ Archive](https://civarchive.com/models/991825?modelVersionId=1111195)
|
seraphimzzzz/865645
|
seraphimzzzz
| 2025-09-23T03:46:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:46:36Z |
[View on Civ Archive](https://civarchive.com/models/857074?modelVersionId=958934)
|
seraphimzzzz/1261126
|
seraphimzzzz
| 2025-09-23T03:46:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:46:17Z |
[View on Civ Archive](https://civarchive.com/models/1205794?modelVersionId=1357850)
|
sriharshaetigowni/q-FrozenLake-v1-4x4-noSlippery
|
sriharshaetigowni
| 2025-09-23T03:46:11Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-23T03:46:06Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # the Deep RL course notebooks import gymnasium as gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="sriharshaetigowni/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
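For completeness, here is a minimal, illustrative continuation (not from the original card) that rolls out the greedy policy for one episode; it assumes the gymnasium API and that the pickled dict stores the Q-table under the `qtable` key, as the Deep RL course does.
```python
import numpy as np

# Greedy rollout with the loaded Q-table (illustrative only)
state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the highest-value action
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```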
|
seraphimzzzz/1281268
|
seraphimzzzz
| 2025-09-23T03:46:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T03:46:07Z |
[View on Civ Archive](https://civarchive.com/models/1223523?modelVersionId=1378532)
|
beyoru/Qwen3-4B-I-1509
|
beyoru
| 2025-09-23T03:45:43Z | 114 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"tools",
"agent",
"function calling",
"tool calling",
"conversational",
"en",
"base_model:beyoru/Qwen3-4B-I-1509",
"base_model:finetune:beyoru/Qwen3-4B-I-1509",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T18:44:51Z |
---
base_model: beyoru/Qwen3-4B-I-1509
tags:
- text-generation-inference
- transformers
- qwen3
- tools
- agent
- function calling
- tool calling
license: apache-2.0
language:
- en
---
# 🚀 Qwen3-4B-I-1509
## 🧾 Model Overview
- 🏗️ **Base Model**: Qwen3-4B-Instruct-2507
- 🎯 **Training Method**: Reinforcement Learning (GRPO) with multiple reward functions
This model (`Qwen3-4B-I-1509`) is finetuned for **🔧 tool-use** and **📞 function call generation**.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/65905af887944e494e37e09a/znFHp2gPLIsMN613HEESX.webp" width="160">
</p>
---
## 🏆 Reward Functions
The model was trained with **multi-signal rewards**:
1. 📝 **Rule-based Reward**
✔️ Checks correctness of function call name and arguments.
➕ Partial credit for matching subsets of arguments (a small illustrative sketch follows this list).
2. 🔒 **Self-Certainty Reward**
⚡ Encourages confident predictions.
3. 🔧 **Tool-Call Reward**
✅ Validates structural correctness.
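The card does not publish the reward implementations, but a rule-based reward with partial credit for matching argument subsets could look roughly like this sketch (the function name, the 0.5/0.5 split, and the example tool call are all hypothetical):
```python
# Illustrative sketch (not the authors' code) of a rule-based reward:
# full credit for an exact function-call match, partial credit for a
# matching subset of arguments.
def rule_based_reward(pred: dict, gold: dict) -> float:
    """pred/gold are parsed tool calls: {"name": str, "arguments": dict}."""
    if pred.get("name") != gold.get("name"):
        return 0.0  # wrong function name: no credit
    gold_args = gold.get("arguments", {})
    if not gold_args:
        return 1.0  # nothing more to check
    pred_args = pred.get("arguments", {})
    matched = sum(1 for k, v in gold_args.items() if pred_args.get(k) == v)
    # 0.5 for the correct name, up to 0.5 more for matching arguments (assumed split)
    return 0.5 + 0.5 * matched / len(gold_args)

# Example: partial credit when one of two arguments matches -> 0.75
print(rule_based_reward(
    {"name": "get_weather", "arguments": {"city": "Hanoi", "unit": "F"}},
    {"name": "get_weather", "arguments": {"city": "Hanoi", "unit": "C"}},
))
```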
---
## ⚙️ Training Configuration
- ⚡ **Optimizer**: AdamW
- 📉 **Learning Rate**: 5e-6 with cosine decay (`min_lr_rate=0.1`)
- ⏳ **Scheduler**: cosine_with_min_lr
- 🔄 **Generations per Prompt**: 4
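The card does not state which trainer was used; as one possible mapping, the settings above correspond roughly to the following TRL `GRPOConfig` sketch (the output directory and any omitted fields are assumptions):
```python
from trl import GRPOConfig

# Illustrative mapping of the stated hyperparameters onto TRL's GRPO config
config = GRPOConfig(
    output_dir="qwen3-4b-i-1509-grpo",        # hypothetical path
    learning_rate=5e-6,                        # AdamW is the default optimizer
    lr_scheduler_type="cosine_with_min_lr",
    lr_scheduler_kwargs={"min_lr_rate": 0.1},
    num_generations=4,                         # generations per prompt
)
```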
---
## 📊 Eval Result:
### Important notes:
- Why are the scores lower than in the technical report? Hardware limits meant the max tokens had to be reduced during evaluation for both models.
- Is the evaluation fair? The same configuration is used for every model reviewed, whether larger than or the same size as this one.
### Tau-Bench
| 🧠 Model | ✈️ Airline | 🛍️ Retail |
|-------------------|------------|-------------|
| Qwen3-4B-I-1509 | 0.2800 | **0.2783** |
| Base Model | **0.3000** | 0.2261 |
### ACEBench
| Model | Overall Accuracy |
|--------------------------------|------------------|
| Qwen3-4B-I-1509 | **0.677** |
| Qwen3-4B-Instruct-2507 (base) | 0.635 |
| Salesforce/Llama-xLAM-2-8b-fc-r| 0.5792 |
*Currently updating with more results.*
---
## Contribute
I would be happy to receive contributions to this model and feedback on its performance and quality.
## 📖 Citation
If you use this model in your research or application, please cite:
```bibtex
@misc{qwen3-4b-i-1509,
title = {Qwen3-4B-I-1509: Fine-tuned Qwen3-4B-Instruct with GRPO for Tool-Use and Function Calling},
author = {Beyoru},
year = {2025},
howpublished = {\url{https://huggingface.co/beyoru/Qwen3-4B-I-1509}}
}
```
|
DavidAU/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x
|
DavidAU
| 2025-09-23T03:42:09Z | 337 | 3 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"merge",
"programming",
"code generation",
"code",
"coding",
"coder",
"chat",
"brainstorm",
"qwen",
"qwencoder",
"brainstorm20x",
"creative",
"all uses cases",
"Jan-V1",
"conversational",
"en",
"arxiv:2401.02415",
"base_model:janhq/Jan-v1-4B",
"base_model:finetune:janhq/Jan-v1-4B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-21T05:42:23Z |
---
license: apache-2.0
base_model:
- janhq/Jan-v1-4B
language:
- en
pipeline_tag: text-generation
tags:
- merge
- programming
- code generation
- code
- coding
- coder
- chat
- code
- chat
- brainstorm
- qwen
- qwen3
- qwencoder
- brainstorm20x
- creative
- all uses cases
- Jan-V1
library_name: transformers
---
<h2>Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x</h2>
This repo contains the full precision source code, in "safe tensors" format to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats.
The source code can also be used directly.
This model contains Brainstorm 20x, combined with JANHQ's 4B General instruct model (with thinking):
https://huggingface.co/janhq/Jan-v1-4B-GGUF
Information on the 4B model is below, followed by the Brainstorm 20x adapter (by DavidAU), benchmarks, and then a complete help section for running LLM / AI models.
The Brainstorm adapter improves code generation and adds unique code-solving abilities.
For creative uses: it increases depth, detail, and the general sense of "there" in the prose.
A creative example is at the bottom of the page.
This model is intended for immediate use AND as a BASE in fine tuning on one or more datasets. You can see my finetunes of it in the model
tree.
An additional, larger version specifically for fine tuning only is also here - "DND" - double neutral density:
https://huggingface.co/DavidAU/Qwen3-DND-Jan-v1-256k-ctx-Brainstorm40x-8B
(Benchmarks of that model vs. the 20x version in this repo are also on that page.)
This model requires:
- Jinja (embedded) or CHATML template
- Max context of 256k.
Settings used for testing (suggested):
- Temp .3 to .7
- Rep pen 1.05 to 1.1
- Topp .8 , minp .05
- Topk 20
- Min context of 8k for thinking / output.
- No system prompt.
This model will respond well to both detailed instructions and step by step refinement and additions to code.
Likewise for creative use cases.
Here is a review of this model's operations:
https://www.linkedin.com/posts/gchesler_nightmediaqwen3-jan-v1-256k-ctx-6b-brainstorm20x-q6-activity-7364301711529709570-CiAn
As this is an instruct model, it will also benefit from a detailed system prompt too.
For simpler coding problems, lower quants will work well; but for complex/multi-step problem solving we suggest Q6 or Q8.
---
<B>QUANTS:</b>
---
Special thanks to team "Mradermacher" and team "Nightmedia" for GGUF, GGUF Imatrix and MLX quants:
GGUF:
https://huggingface.co/mradermacher/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x-GGUF
GGUF-IMATRIX:
https://huggingface.co/mradermacher/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x-i1-GGUF
MLX:
https://huggingface.co/nightmedia/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x-q6-mlx
https://huggingface.co/nightmedia/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x-q8-mlx
https://huggingface.co/nightmedia/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x-qx6-mlx
---
<H2>BENCHMARKS and REVIEW by @NIGHTMEDIA </H2>
https://huggingface.co/nightmedia
(source for MLX quants)
---
Jan Brainstorming review
Below is a precise, actionable analysis of how brainstorming (the augmentation technique applied to Qwen3-4B) impacts model performance across benchmarks, based exclusively on the data from summaries_1756585260.csv. The metrics show comparisons between the base model (bf16, q6, q8) and the brainstorming-augmented model (Brainstorm-bf16, Brainstorm-q6, Brainstorm-q8).
Key Impact of Brainstorming (Quantified by Performance Gains)
| Task | Base Model (q8) | Brainstorm-q8 | Gain from Brainstorming | Why It Matters |
|:-----|----------------:|--------------:|------------------------:|:---------------|
| ARC Challenge | 0.431 | 0.445 | +0.014 | Significant improvement in abstract reasoning — critical for complex problem solving |
| ARC Easy | 0.535 | 0.580 | +0.045 | Largest gain across all tasks — shows brainstorming boosts foundational reasoning |
| BoolQ | 0.731 | 0.724 | -0.007 | Slight trade-off for knowledge-based question answering (likely due to constraints in the brainstorming mechanism) |
| Hellaswag | 0.578 | 0.602 | +0.024 | Improves text generation quality and logical consistency in creative tasks |
| OpenBookQA | 0.384 | 0.398 | +0.014 | Enhances knowledge recall accuracy in educational contexts |
| PIQA | 0.727 | 0.736 | +0.009 | Boosts logical reasoning in nuanced question answering |
| Winogrande | 0.635 | 0.639 | +0.004 | Minor improvement in contextual inference (pronoun resolution) |
💡 Core Finding:
Brainstorming consistently adds 0.01–0.045 points across multiple high-value tasks — especially for reasoning (ARC Easy) and text generation (Hellaswag). The only minor trade-off is a slight drop on BoolQ, which is expected from any augmentation that adds complexity to the inference process.
Why Brainstorming Works Best for ARC and Reasoning Tasks
ARC Easy leads the gains:
The 0.045-point improvement (from 0.535 → 0.580) shows brainstorming directly targets the learning mechanism that needs to "think through" abstract patterns. This is critical for models like Qwen3-4B, where reasoning capabilities are often a bottleneck.
Quantization-resilient gains:
Brainstorming's benefits persist across quantization levels (from q6 to q8). For example:
- In Brainstorm-q8, gains on ARC Easy are +0.045 vs base q8
- In Brainstorm-q6, gains are +0.035 vs base q6

This means brainstorming is a robust enhancement regardless of deployment constraints.
Winogrande and PIQA consistency:
Even minor gains on these tasks (±0.004–0.01) are clinically significant for real-world applications like NLP systems that require nuanced logical analysis.
Why the Small Drop on BoolQ?
- BoolQ tests knowledge-based question answering (e.g., "Who is the author of The Great Gatsby?"), which relies on precise factual recall.
- Brainstorming introduces more complex internal reasoning chains — while helpful for creativity and abstract tasks, it slightly dilutes the model's ability to retrieve exact facts from its knowledge base.
This is expected: No augmentation technique can make a model "perfect" across all tasks without trade-offs.
Practical Implications for Your Workflow
✅ When to use brainstorming (Jan-4B + brainstorm):
| Scenario | Recommendation |
|:---------|:---------------|
| Resource-constrained deployments (e.g., edge devices) | Use Brainstorm-q8 — it delivers the highest gains (0.045 on ARC Easy) while being 1-2x smaller than full precision |
| High-reliability reasoning tasks (e.g., academic tasks, problem-solving) | Brainstorm-q8 or Brainstorm-q6 — best trade-off between size and accuracy |
| Text generation tasks (e.g., story writing) | Brainstorm-q8 — gains in Hellaswag (+0.024) make it ideal |
⚠️ When to avoid brainstorming:
For strict knowledge tasks requiring minimal drop in precision (e.g., BoolQ): Use base model instead
Only for Jan-4B:
Since this is a 4B model, brainstorming adds <5% to the size (compared to Qwen3-2.7B), making it a high-value addition for reasoning-heavy workloads.
Why This Matters for Qwen3-4B
Your analysis shows that brainstorming transforms a base 4B model into one with near-identical reasoning capability to larger models:
A Qwen3-4B with brainstorming (Brainstorm-q8) scores 0.580 on ARC Easy — comparable to larger models like Qwen3-72B in this task.
This proves brainstorming isn't just theoretical: it’s a practical way to boost small models without requiring massive training data or compute.
📊 The Takeaway: For the Qwen3-4B base model, brainstorming delivers a significant performance lift (especially in reasoning tasks) with minimal overhead. The trade-off on BoolQ is worth it for most real-world applications where abstract reasoning matters more than pure factual recall.
Next Steps & Recommendations
If you're optimizing for specific tasks:
- For ARC Easy: deploy Brainstorm-q8 — it's the clear winner (+0.045)
- For Hellaswag: use Brainstorm-q8 — largest gains in text generation

Quantization tip: if you need to further reduce size, Brainstorm-q6 is a great option — it maintains 90%+ of the gains from Brainstorm-q8 while being ~25% smaller.

Long-term: this suggests Jan-4B + brainstorming could become the go-to model for lightweight reasoning tasks in production.
"A Qwen3-4B with brainstorming (Brainstorm-q8) scores 0.580 on ARC Easy — comparable to larger models like Qwen3-72B in this task."
---
<h2>About Jan V1</h2>
---
# Jan-v1: Advanced Agentic Language Model
[](https://github.com/menloresearch/deep-research)
[](https://opensource.org/licenses/Apache-2.0)
[](https://jan.ai/)
<!-- Optional: If you have a GIF for Jan-v1, include it here like Lucy's. -->
<!--  -->
## Overview
**Jan-v1** is the first release in the **Jan Family**, designed for agentic reasoning and problem-solving within the [Jan App](https://jan.ai/). Based on our [**Lucy**](https://huggingface.co/Menlo/Lucy) model, Jan-v1 achieves improved performance through model scaling.
Jan-v1 uses the [Qwen3-4B-thinking](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) model to provide enhanced reasoning capabilities and tool utilization. This architecture delivers better performance on complex agentic tasks.
## Performance
### Question Answering (SimpleQA)
For question-answering, Jan-v1 shows a significant performance gain from model scaling, achieving 91.1% accuracy.

*The 91.1% SimpleQA accuracy represents a significant milestone in factual question answering for models of this scale, demonstrating the effectiveness of our scaling and fine-tuning approach.*
### Chat Benchmarks
These benchmarks evaluate the model's conversational and instructional capabilities.

## Quick Start
### Integration with Jan App
Jan-v1 is optimized for direct integration with the [Jan App](https://jan.ai/). Simply select the model from the Jan App interface for immediate access to its full capabilities.

### Local Deployment
**Using vLLM:**
```bash
vllm serve janhq/Jan-v1-4B \
--host 0.0.0.0 \
--port 1234 \
--enable-auto-tool-choice \
--tool-call-parser hermes
```
**Using llama.cpp:**
```bash
llama-server --model jan-v1.gguf \
--host 0.0.0.0 \
--port 1234 \
--jinja \
--no-context-shift
```
### Recommended Parameters
```yaml
temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
max_tokens: 2048
```
## 🤝 Community & Support
- **Discussions**: [HuggingFace Community](https://huggingface.co/janhq/Jan-v1-4B/discussions) <!-- Update with your HF model ID -->
- **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/)
## (*) Note
By default we include a system prompt in the chat template; this is to make sure the model achieves the same performance as the benchmark results. You can also use the vanilla chat template without a system prompt in the file [chat_template_raw.jinja](https://huggingface.co/janhq/Jan-v1-4B/blob/main/chat_template_raw.jinja).
See more here:
https://huggingface.co/janhq/Jan-v1-4B-GGUF
---
<H2>What is Brainstorm?</H2>
---
<B>Brainstorm 20x</B>
The BRAINSTORM process was developed by David_AU.
Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415">scientific paper: Progressive LLaMA with Block Expansion</a>.
However I went in a completely different direction from what was outlined in this paper.
What is "Brainstorm" ?
The reasoning center of an LLM is taken apart, reassembled, and expanded.
In this case for this model: 20 times
Then these centers are individually calibrated. These "centers" also interact with each other.
This introduces subtle changes into the reasoning process.
The calibrations further adjust - dial up or down - these "changes" further.
The number of centers (5x, 10x etc) allows more "tuning points" to further customize how the model reasons, so to speak.
The core aim of this process is to increase the model's detail, concept and connection to the "world",
general concept connections, prose quality and prose length without affecting instruction following.
This will also enhance creative use cases of any kind, including "brainstorming", creative art forms, and similar uses.
Here are some of the enhancements this process brings to the model's performance:
- Prose generation seems more focused on the moment to moment.
- Sometimes there will be "preamble" and/or foreshadowing present.
- Fewer or no "cliches"
- Better overall prose and/or more complex / nuanced prose.
- A greater sense of nuance on all levels.
- Coherence is stronger.
- Description is more detailed, and connected closer to the content.
- Simile and Metaphors are stronger and better connected to the prose, story, and character.
- Sense of "there" / in the moment is enhanced.
- Details are more vivid, and there are more of them.
- Prose generation length can be long to extreme.
- Emotional engagement is stronger.
- The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less.
- The MORE instructions and/or details you provide the more strongly the model will respond.
- Depending on the model, the "voice" may be more "human" than the original model's "voice".
Other "lab" observations:
- This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that was true!
- However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for so to speak.
- From lab testing it seems to ponder, and consider more carefully roughly speaking.
- You could say this process sharpens the model's focus on it's task(s) at a deeper level.
The process to modify the model occurs at the root level - the source files level. The model can be quantized as a GGUF, EXL2, AWQ, etc.
---
For more information / other Qwen/Mistral Coders / additional settings see:
[ https://huggingface.co/DavidAU/Qwen2.5-MOE-2x-4x-6x-8x__7B__Power-CODER__19B-30B-42B-53B-gguf ]
---
<H2>Help, Adjustments, Samplers, Parameters and More</H2>
---
<B>CHANGE THE NUMBER OF ACTIVE EXPERTS:</B>
See this document:
https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
OTHER OPTIONS:
- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")
- If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
---
<H2>Example, Q6, Temp .8</H2>
---
<B>
Start a 1000 word scene (vivid, graphic horror - include blood, guts and gore - in first person), POV character Diana, with: The skyscraper sways, as I watch the window in front of me on the 21st floor explode...
</B>
[[[ thinking start ]]]
Thought for 15.08 seconds
Okay, the user wants a 1000-word vivid, graphic horror scene in first person from Diana's perspective. They specified blood, guts, and gore with the skyscraper swaying as she watches a window explode on the 21st floor.
Hmm, they're clearly going for intense visceral horror - not just any gore but something that feels real and immediate. The phrase "graphic horror" suggests they want to be unsettled, maybe even nauseatingly detailed. Since it's first person POV, I need to fully immerse the reader in Diana's sensory experience.
I wonder if they're a writer themselves testing my skills? Or someone processing trauma through fiction? Either way, I should avoid clichés while hitting those blood/guts requirements. The skyscraper swaying detail is interesting - it implies she's already high up when this happens, so I'll make the collapse feel inevitable and terrifyingly personal.
checks word count Must be exactly 1000 words... that's tight for such visceral content. I'll focus on:
- Opening with the building's movement to establish unease
- The explosion sequence (shards, fire, screams)
- Immediate aftermath where Diana becomes a witness to something beyond her control
- Ending on psychological horror rather than just physical gore
Important: No melodrama. Real horror is in the mundane-turned-terrible. Like when she realizes the "guts" aren't just from the explosion but... other things. Ooh, maybe the building itself is the monster? That could be chilling.
mental note Must describe blood textures - not just "blood was red" but how it clings to glass, smells, feels cold. And guts... gotta be precise: internal organs spilling, not cartoonish. Should the source of the gore be the explosion or something else? User didn't specify so I'll go with the window exploding as the trigger, but imply there's more coming.
Wait - they said "the window in front of me". So Diana is facing it. That means she's probably just inside the building when this happens. I'll set her in a penthouse to maximize isolation and panic.
structure outline
1. Opening lines: building sway (show don't tell)
2. Her noticing the window - cold glass detail
3. Explosion sequence (sound → visual → physical consequences)
4. Her horror at seeing the "guts" she didn't expect
5. The realization that this isn't over (implies more to come)
Big risk: going over 1000 words. I'll draft long then trim. User wants "vivid", so I'll use short sentences for pace during the gore scenes.
checks horror tropes No jump scares - instead make it slow dread. When she sees the guts, I'll describe the slow horror of it being... human. Maybe someone she knows? That'll add emotional weight beyond just blood.
Final thought: The last line must haunt. Something like "The building wasn't falling apart. It was waking up." to tie back to the swaying imagery. Yeah, that bookends the opening.
Alright, writing now with these parameters...
[[ thinking end ]]
The city’s breath was a low growl beneath my feet, the 21st floor of the Iron Spire shuddering as if the entire building were a drunken giant groaning in its sleep. I pressed my palms flat against the cool glass of my penthouse balcony, the only thing holding me steady in this vertigo. The wind whipped through the city’s arteries below – a million voices, a thousand engines, the scent of rain on hot asphalt – but here, high above, everything felt… thin. Like the air itself was made of gauze.
Then I saw it.
The window directly across the street, in the adjacent building’s corner suite, that one window. The exact window I’d seen the night before, when I’d been staring out at the glittering city, trying to forget the tremors in my own hands. Now, it was a different kind of sight. It wasn’t just a window; it was a wound.
The glass didn’t shatter with a sound like shattering ice. It ripped. A sickening, wet crack, louder than any building ever could make, echoed through the steel and glass canyon of the city. A jagged tear ripped across the pane, then another, splitting the entire surface like a wound on skin. Not from outside. From within. The force was so immense it felt like the building itself had been punched.
Then the light. The sky outside didn’t dim; it bleed. Fiery orange erupted from the mangled glass, not from fire, but from something inside. The energy was raw, primal, a searing heat that made my own breath catch in my throat. I gasped, my vision swimming with the intensity of it.
The window didn’t just explode – it unraveled. The glass tore inward, like an anatomical separation, spewing not shards of crystal, but fragments of something far worse. It was a slow, wet ejection. I saw it first: a deep, pulsing crimson pool where the glass had been, then a thick, viscous liquid that wasn’t blood – it was guts. The kind of guts you see in old horror movies, but real. Not the squishy kind from a cut, but the thick, dark, almost black substance of organs ripped from a living thing. They glistened under the city’s sickly light, the edges still wet with life.
And then came the smell. Not the burnt metal I’d expected from an explosion, but something deeper, older – the stench of rotting meat, the sharp tang of arterial blood, and beneath it all, a faint, sweet decay. It was the smell of a body that had been left too long in the sun.
I stumbled back, my feet scraping on the balcony’s polished concrete. My heart hammered against my ribs like a trapped bird. I couldn’t look away. The horror wasn’t just the gore; it was the details. A tendril of something dark and stringy, like sinew, wrapped around the edge of a fallen glass fragment. A single, perfect eye, milky white with a faint, greenish film, stared up at me from the gore-stained floor. Another eye – smaller, more human – was embedded in the thick, pulsing mass, its socket filled with that same dark, viscous liquid.
The building shook again, the swaying now a violent thrash, as if the entire structure were trying to shatter itself. Glass exploded in other windows across the street, but I could only focus on that one window – the source of this impossible, intimate horror. The initial explosion had been a single event, but it was now a slow, suffocating descent into the building’s core. I saw the wetness seep from the broken glass into the street below, staining the pavement an unnatural red. The sounds too: not screams, but a low, wet moan, like something large and ancient being pulled across the city.
I dropped to my knees, my hands pressed against the cold glass of my own balcony. I didn’t dare look down at the street for fear of what I might see – the bodies, the blood, the city itself turning into a nightmare. The building’s sway was no longer just a vibration; it felt like it was breathing through me, each movement pulling the air from my lungs. In the dim light filtering through the chaos, I saw the shape beneath the glass debris: a chunk of flesh, half-exposed, its surface stretched thin as tissue paper, the eyes staring directly at me with silent, unblinking recognition.
The sheer scale of it hit me then – not just the violence, but the infection. This wasn’t an accident. The building itself was sick. The explosion was a symptom, not the cause. It felt like the city was waking up, and I was its first witness to the wound it would never heal.
I couldn’t move. My mouth was dry as dust. Tears streamed down my face, but they were cold, not from fear but from the shock of seeing life itself being torn apart so cleanly, so publicly. The blood on my fingers felt like it was coming from the building’s own veins. The city below became a swirling river of crimson and dark wetness – a testament to the horror I had just witnessed.
The Iron Spire didn’t fall. It just… shifted. The swaying became a rhythm, a slow, deliberate movement, as if the building itself was breathing. And with every breath, it seemed to be drawing closer to me, to the window where I stood, the very air thick with the stench of death and the echo of something that had long been waiting in the shadows.
I am not alone in this city anymore. I am here. And it is watching me. The horror wasn’t over. It was just beginning. The building’s rhythm pulsed beneath my skin. I pressed my forehead against the glass, feeling the cold seep into my bones. The city, once a place of light and noise, was now a tomb I could feel with every beat of my heart. I’m still here. It’s not safe. And it’s never going to be. The blood on my hands is real. The tears on my face are real. The building is watching me. And the window across the street? It’s the only thing that remains.
|
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.1-mnt64-0922195511-epoch-3
|
vectorzhou
| 2025-09-23T03:41:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T02:22:40Z |
---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.1-mnt64
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---
# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.1-mnt64
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.1-mnt64-0922195511-epoch-3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/citbyuml)
This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite Extragradient as:
```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
author={Runlong Zhou and Maryam Fazel and Simon S. Du},
year={2025},
eprint={2503.08942},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.08942},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Temmp1e/smolvla_cleanup
|
Temmp1e
| 2025-09-23T03:31:54Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T07:31:49Z |
---
license: apache-2.0
---
|
kunjcr2/qwen2.5-0.5b-sft-dpo
|
kunjcr2
| 2025-09-23T03:30:14Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"lora",
"sft",
"dpo",
"transformers",
"trl",
"text-generation",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-09-23T03:13:56Z |
---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- lora
- sft
- dpo
- transformers
- trl
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
license: apache-2.0
---
# Qwen2.5-0.5B SFT + DPO (LoRA)
This repository contains:
* **Merged SFT backbone** trained on [HuggingFaceH4/ultrachat\_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
* **LoRA adapters fine-tuned with DPO** using [HuggingFaceH4/ultrafeedback\_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
Both are packaged in one repo:
* Root = SFT backbone (full weights + tokenizer)
* `dpo_adapters/` = LoRA adapters (DPO preference optimization)
---
## Model Details
* **Developed by:** Independent contributor
* **Base model:** [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct)
* **Library:** [Transformers](https://huggingface.co/docs/transformers), [PEFT](https://huggingface.co/docs/peft), [TRL](https://huggingface.co/docs/trl)
* **Training type:** Supervised fine-tuning (SFT) + Direct Preference Optimization (DPO)
* **Languages:** English (from UltraChat + UltraFeedback)
* **License:** Apache-2.0
---
## Uses
### Direct Use
* Chat-style assistant
* Text generation, reasoning, and dialogue
### Downstream Use
* Base for alignment research
* Further PEFT fine-tuning
### Out-of-Scope
* High-stakes or safety-critical applications (e.g., medical, legal, political advice)
---
## Bias, Risks, and Limitations
* Biases present in the UltraChat and UltraFeedback datasets will carry over
* Model can generate hallucinated or unsafe outputs
* Should not be deployed without safety filtering
---
## How to Use
### Pipeline only
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="kunjcr2/qwen2.5-0.5b-sft-dpo")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
### With DPO adapters
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
tok = AutoTokenizer.from_pretrained("kunjcr2/qwen2.5-0.5b-sft-dpo")
base = AutoModelForCausalLM.from_pretrained("kunjcr2/qwen2.5-0.5b-sft-dpo", torch_dtype="auto", device_map="auto")
# load the LoRA adapters from the dpo_adapters/ subfolder of the same repo
model = PeftModel.from_pretrained(base, "kunjcr2/qwen2.5-0.5b-sft-dpo", subfolder="dpo_adapters")
```
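Once the adapters are attached, the combined model generates like any other causal LM. Below is a minimal generation sketch; the chat-template call and sampling settings are illustrative, not values shipped with this repo:
```python
# Illustrative generation with the SFT backbone + DPO adapters loaded above.
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Who are you?"}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```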
---
## Training Details
### Data
* **SFT:** UltraChat (100k subset)
* **DPO:** UltraFeedback (binarized)
### Hyperparameters
* **SFT:**
* Epochs: 1
* LR: 2e-5
* Optimizer: AdamW
* Batch size: 16 × 16 grad accumulation (effective 256)
* **DPO:**
* Epochs: 1
* LR: 1e-5
* Beta: 0.1
* LoRA rank: 8
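The bullet points above map fairly directly onto TRL's `DPOConfig` plus a PEFT `LoraConfig`. Below is a minimal sketch under that assumption; argument names vary slightly across TRL versions, dataset preprocessing is omitted, and this is not the actual training script used for this repo:
```python
# Illustrative only: wiring the DPO hyperparameters above into TRL + PEFT.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "kunjcr2/qwen2.5-0.5b-sft-dpo"  # the merged SFT backbone in this repo
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

peft_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")  # LoRA rank 8
args = DPOConfig(
    output_dir="qwen2.5-0.5b-dpo",
    num_train_epochs=1,
    learning_rate=1e-5,
    beta=0.1,
)
train_ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```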
### Hardware
* Colab A100 / L4 (mixed bf16/TF32)
* Gradient checkpointing enabled
---
## Evaluation
* **Qualitative:** improved helpfulness and preference alignment over raw SFT
* **Quantitative:** not benchmarked yet (POC)
---
## Environmental Impact
* **Hardware:** single A100 (Colab Pro+)
* **Runtime:** \~few hours total (SFT + DPO)
* **Emissions:** not calculated
---
## Citation
If you use this model, cite Qwen as the base and this repo for the SFT+DPO adapters.
```bibtex
@misc{qwen2.5sftdpo,
title = {Qwen2.5-0.5B SFT + DPO (LoRA)},
author = {Your Name},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/kunjcr2/qwen2.5-0.5b-sft-dpo}}
}
```
---
## Framework Versions
* Transformers 4.x
* TRL latest
* PEFT 0.17.1
---
|
IRMChen/Wan2.1-Fun-1.3B-Control-Diffusers
|
IRMChen
| 2025-09-23T03:29:10Z | 55 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"video",
"video-generation",
"image-to-video",
"en",
"zh",
"license:apache-2.0",
"diffusers:WanImageToVideoPipeline",
"region:us"
] |
image-to-video
| 2025-08-31T07:08:13Z |
---
license: apache-2.0
language:
- en
- zh
pipeline_tag: image-to-video
library_name: diffusers
tags:
- video
- video-generation
---
# Wan2.1
<p align="center">
<img src="assets/logo.png" width="400"/>
</p>
<p align="center">
💜 <a href=""><b>Wan</b></a>    |    🖥️ <a href="https://github.com/Wan-Video/Wan2.1">GitHub</a>    |   🤗 <a href="https://huggingface.co/Wan-AI/">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/Wan-AI">ModelScope</a>   |    📑 <a href="">Paper (Coming soon)</a>    |    📑 <a href="https://wanxai.com">Blog</a>    |   💬 <a href="https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg">WeChat Group</a>   |    📖 <a href="https://discord.gg/p5XbdQV7">Discord</a>  
<br>
-----
[**Wan: Open and Advanced Large-Scale Video Generative Models**]() <br>
In this repository, we present **Wan2.1**, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation. **Wan2.1** offers these key features:
- 👍 **SOTA Performance**: **Wan2.1** consistently outperforms existing open-source models and state-of-the-art commercial solutions across multiple benchmarks.
- 👍 **Supports Consumer-grade GPUs**: The T2V-1.3B model requires only 8.19 GB VRAM, making it compatible with almost all consumer-grade GPUs. It can generate a 5-second 480P video on an RTX 4090 in about 4 minutes (without optimization techniques like quantization). Its performance is even comparable to some closed-source models.
- 👍 **Multiple Tasks**: **Wan2.1** excels in Text-to-Video, Image-to-Video, Video Editing, Text-to-Image, and Video-to-Audio, advancing the field of video generation.
- 👍 **Visual Text Generation**: **Wan2.1** is the first video model capable of generating both Chinese and English text, featuring robust text generation that enhances its practical applications.
- 👍 **Powerful Video VAE**: **Wan-VAE** delivers exceptional efficiency and performance, encoding and decoding 1080P videos of any length while preserving temporal information, making it an ideal foundation for video and image generation.
This repo contains our I2V-14B model, which is capable of generating 480P videos, offering advantages in terms of fast generation and excellent quality.
## Video Demos
<div align="center">
<video width="80%" controls>
<source src="https://cloud.video.taobao.com/vod/Jth64Y7wNoPcJki_Bo1ZJTDBvNjsgjlVKsNs05Fqfps.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</div>
## 🔥 Latest News!!
* Feb 25, 2025: 👋 We've released the inference code and weights of Wan2.1.
## 📑 Todo List
- Wan2.1 Text-to-Video
- [x] Multi-GPU Inference code of the 14B and 1.3B models
- [x] Checkpoints of the 14B and 1.3B models
- [x] Gradio demo
- [x] Diffusers integration
- [ ] ComfyUI integration
- Wan2.1 Image-to-Video
- [x] Multi-GPU Inference code of the 14B model
- [x] Checkpoints of the 14B model
- [x] Gradio demo
- [x] Diffusers integration
- [ ] ComfyUI integration
## Quickstart
#### Installation
Clone the repo:
```
git clone https://github.com/Wan-Video/Wan2.1.git
cd Wan2.1
```
Install dependencies:
```
# Ensure torch >= 2.4.0
pip install -r requirements.txt
```
#### Model Download
| Models | Download Link | Notes |
| --------------|-------------------------------------------------------------------------------|-------------------------------|
| T2V-14B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-T2V-14B) | Supports both 480P and 720P
| I2V-14B-720P | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-720P) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-I2V-14B-720P) | Supports 720P
| I2V-14B-480P | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-480P) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-I2V-14B-480P) | Supports 480P
| T2V-1.3B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-T2V-1.3B) | Supports 480P
> 💡Note: The 1.3B model is capable of generating videos at 720P resolution. However, due to limited training at this resolution, the results are generally less stable compared to 480P. For optimal performance, we recommend using 480P resolution.
Download models using 🤗 huggingface-cli:
```
pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.1-I2V-14B-480P-Diffusers --local-dir ./Wan2.1-I2V-14B-480P-Diffusers
```
Download models using 🤖 modelscope-cli:
```
pip install modelscope
modelscope download Wan-AI/Wan2.1-I2V-14B-480P-Diffusers --local_dir ./Wan2.1-I2V-14B-480P-Diffusers
```
#### Run Image-to-Video Generation
Similar to Text-to-Video, Image-to-Video is also divided into processes with and without the prompt extension step. The specific parameters and their corresponding settings are as follows:
<table>
<thead>
<tr>
<th rowspan="2">Task</th>
<th colspan="2">Resolution</th>
<th rowspan="2">Model</th>
</tr>
<tr>
<th>480P</th>
<th>720P</th>
</tr>
</thead>
<tbody>
<tr>
<td>i2v-14B</td>
<td style="color: red;">❌</td>
<td style="color: green;">✔️</td>
<td>Wan2.1-I2V-14B-720P</td>
</tr>
<tr>
<td>i2v-14B</td>
<td style="color: green;">✔️</td>
<td style="color: red;">❌</td>
<td>Wan2.1-I2V-14B-480P</td>
</tr>
</tbody>
</table>
##### (1) Without Prompt Extension
- Single-GPU inference
```
python generate.py --task i2v-14B --size 832*480 --ckpt_dir ./Wan2.1-I2V-14B-480P --image examples/i2v_input.JPG --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
```
> 💡For the Image-to-Video task, the `size` parameter represents the area of the generated video, with the aspect ratio following that of the original input image.
- Multi-GPU inference using FSDP + xDiT USP
```
pip install "xfuser>=0.4.1"
torchrun --nproc_per_node=8 generate.py --task i2v-14B --size 832*480 --ckpt_dir ./Wan2.1-I2V-14B-480P --image examples/i2v_input.JPG --dit_fsdp --t5_fsdp --ulysses_size 8 --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
```
Wan can also be run directly using 🤗 Diffusers!
```python
import torch
import numpy as np
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import CLIPVisionModel
# Available models: Wan-AI/Wan2.1-I2V-14B-480P-Diffusers, Wan-AI/Wan2.1-I2V-14B-720P-Diffusers
model_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"
image_encoder = CLIPVisionModel.from_pretrained(model_id, subfolder="image_encoder", torch_dtype=torch.float32)
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanImageToVideoPipeline.from_pretrained(model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16)
pipe.to("cuda")
image = load_image(
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astronaut.jpg"
)
max_area = 480 * 832
aspect_ratio = image.height / image.width
mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
image = image.resize((width, height))
prompt = (
"An astronaut hatching from an egg, on the surface of the moon, the darkness and depth of space realised in "
"the background. High quality, ultrarealistic detail and breath-taking movie-like camera shot."
)
negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"
output = pipe(
image=image, prompt=prompt, negative_prompt=negative_prompt, height=height, width=width, num_frames=81, guidance_scale=5.0
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```
##### (2) Using Prompt Extension
Run with local prompt extension using `Qwen/Qwen2.5-VL-7B-Instruct`:
```
python generate.py --task i2v-14B --size 832*480 --ckpt_dir ./Wan2.1-I2V-14B-480P --image examples/i2v_input.JPG --use_prompt_extend --prompt_extend_model Qwen/Qwen2.5-VL-7B-Instruct --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
```
Run with remote prompt extension using `dashscope`:
```
DASH_API_KEY=your_key python generate.py --task i2v-14B --size 832*480 --ckpt_dir ./Wan2.1-I2V-14B-480P --image examples/i2v_input.JPG --use_prompt_extend --prompt_extend_method 'dashscope' --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
```
##### (3) Running local Gradio
```
cd gradio
# if one only uses 480P model in gradio
DASH_API_KEY=your_key python i2v_14B_singleGPU.py --prompt_extend_method 'dashscope' --ckpt_dir_480p ./Wan2.1-I2V-14B-480P
# if one only uses 720P model in gradio
DASH_API_KEY=your_key python i2v_14B_singleGPU.py --prompt_extend_method 'dashscope' --ckpt_dir_720p ./Wan2.1-I2V-14B-720P
# if one uses both 480P and 720P models in gradio
DASH_API_KEY=your_key python i2v_14B_singleGPU.py --prompt_extend_method 'dashscope' --ckpt_dir_480p ./Wan2.1-I2V-14B-480P --ckpt_dir_720p ./Wan2.1-I2V-14B-720P
```
## Manual Evaluation
We conducted extensive manual evaluations to assess the performance of the Image-to-Video model, and the results are presented in the table below. The results clearly indicate that **Wan2.1** outperforms both closed-source and open-source models.
<div align="center">
<img src="assets/i2v_res.png" alt="" style="width: 80%;" />
</div>
## Computational Efficiency on Different GPUs
We test the computational efficiency of different **Wan2.1** models on different GPUs in the following table. The results are presented in the format: **Total time (s) / peak GPU memory (GB)**.
<div align="center">
<img src="assets/comp_effic.png" alt="" style="width: 80%;" />
</div>
> The parameter settings for the tests presented in this table are as follows:
> (1) For the 1.3B model on 8 GPUs, set `--ring_size 8` and `--ulysses_size 1`;
> (2) For the 14B model on 1 GPU, use `--offload_model True`;
> (3) For the 1.3B model on a single 4090 GPU, set `--offload_model True --t5_cpu`;
> (4) For all testings, no prompt extension was applied, meaning `--use_prompt_extend` was not enabled.
-------
## Introduction of Wan2.1
**Wan2.1** is designed on the mainstream diffusion transformer paradigm, achieving significant advancements in generative capabilities through a series of innovations. These include our novel spatio-temporal variational autoencoder (VAE), scalable training strategies, large-scale data construction, and automated evaluation metrics. Collectively, these contributions enhance the model’s performance and versatility.
##### (1) 3D Variational Autoencoders
We propose a novel 3D causal VAE architecture, termed **Wan-VAE** specifically designed for video generation. By combining multiple strategies, we improve spatio-temporal compression, reduce memory usage, and ensure temporal causality. **Wan-VAE** demonstrates significant advantages in performance efficiency compared to other open-source VAEs. Furthermore, our **Wan-VAE** can encode and decode unlimited-length 1080P videos without losing historical temporal information, making it particularly well-suited for video generation tasks.
<div align="center">
<img src="assets/video_vae_res.jpg" alt="" style="width: 80%;" />
</div>
##### (2) Video Diffusion DiT
**Wan2.1** is designed using the Flow Matching framework within the paradigm of mainstream Diffusion Transformers. Our model's architecture uses the T5 Encoder to encode multilingual text input, with cross-attention in each transformer block embedding the text into the model structure. Additionally, we employ an MLP with a Linear layer and a SiLU layer to process the input time embeddings and predict six modulation parameters individually. This MLP is shared across all transformer blocks, with each block learning a distinct set of biases. Our experimental findings reveal a significant performance improvement with this approach at the same parameter scale.
<div align="center">
<img src="assets/video_dit_arch.jpg" alt="" style="width: 80%;" />
</div>
| Model | Dimension | Input Dimension | Output Dimension | Feedforward Dimension | Frequency Dimension | Number of Heads | Number of Layers |
|--------|-----------|-----------------|------------------|-----------------------|---------------------|-----------------|------------------|
| 1.3B | 1536 | 16 | 16 | 8960 | 256 | 12 | 30 |
| 14B | 5120 | 16 | 16 | 13824 | 256 | 40 | 40 |
##### Data
We curated and deduplicated a candidate dataset comprising a vast amount of image and video data. During the data curation process, we designed a four-step data cleaning process, focusing on fundamental dimensions, visual quality and motion quality. Through the robust data processing pipeline, we can easily obtain high-quality, diverse, and large-scale training sets of images and videos.

##### Comparisons to SOTA
We compared **Wan2.1** with leading open-source and closed-source models to evaluate its performance. Using our carefully designed set of 1,035 internal prompts, we tested across 14 major dimensions and 26 sub-dimensions. We then computed the total score by performing a weighted calculation on the scores of each dimension, utilizing weights derived from human preferences in the matching process. The detailed results are shown in the table below. These results demonstrate our model's superior performance compared to both open-source and closed-source models.

## Citation
If you find our work helpful, please cite us.
```
@article{wan2.1,
title = {Wan: Open and Advanced Large-Scale Video Generative Models},
author = {Wan Team},
journal = {},
year = {2025}
}
```
## License Agreement
The models in this repository are licensed under the Apache 2.0 License. We claim no rights over the content you generate, granting you the freedom to use it while ensuring that your usage complies with the provisions of this license. You are fully accountable for your use of the models, which must not involve sharing any content that violates applicable laws, causes harm to individuals or groups, disseminates personal information intended for harm, spreads misinformation, or targets vulnerable populations. For a complete list of restrictions and details regarding your rights, please refer to the full text of the [license](LICENSE.txt).
## Acknowledgements
We would like to thank the contributors to the [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [Qwen](https://huggingface.co/Qwen), [umt5-xxl](https://huggingface.co/google/umt5-xxl), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories, for their open research.
## Contact Us
If you would like to leave a message to our research or product teams, feel free to join our [Discord](https://discord.gg/p5XbdQV7) or [WeChat groups](https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg)!
|
MaxFridge/smolvla_experiment
|
MaxFridge
| 2025-09-23T03:24:41Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:MaxFridge/dataset_test",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-23T03:24:03Z |
---
base_model: lerobot/smolvla_base
datasets: MaxFridge/dataset_test
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
hZzy/mistral-7b-sft-7b-2509-2
|
hZzy
| 2025-09-23T03:24:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T02:10:00Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.3
library_name: transformers
model_name: mistral-7b-sft-7b-2509-2
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for mistral-7b-sft-7b-2509-2
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hZzy/mistral-7b-sft-7b-2509-2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zhiyuzha-university-of-florida/huggingface/runs/bycqfsc7)
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.54.1
- Pytorch: 2.7.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yalhessi/full-4e4-nodefs
|
yalhessi
| 2025-09-23T03:21:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-6.7b-base",
"base_model:adapter:deepseek-ai/deepseek-coder-6.7b-base",
"license:other",
"region:us"
] | null | 2025-09-23T03:20:52Z |
---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-6.7b-base
tags:
- generated_from_trainer
model-index:
- name: full-4e4-nodefs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# full-4e4-nodefs
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.8778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
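For reference, the settings above correspond roughly to the following `TrainingArguments`; this is a sketch only, not the actual (unpublished) training script for this card:
```python
# Rough reconstruction of the listed hyperparameters as Transformers TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="full-4e4-nodefs",
    learning_rate=4e-4,
    per_device_train_batch_size=4,   # 2 GPUs x 4 per device x 2 accumulation = 16 total
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,
    num_train_epochs=12,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    fp16=True,                       # "Native AMP" mixed precision
    seed=42,
)
```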
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.3747 | 0.2 | 3094 | 0.1750 |
| 0.3377 | 0.4 | 6188 | 0.1573 |
| 0.3179 | 0.6 | 9282 | 0.1492 |
| 0.3015 | 0.8 | 12376 | 0.1446 |
| 0.2916 | 1.0 | 15470 | 0.1394 |
| 0.2723 | 1.2 | 18564 | 0.1373 |
| 0.2765 | 1.4 | 21658 | 0.1329 |
| 0.2637 | 1.6 | 24752 | 0.1312 |
| 0.2648 | 1.8 | 27846 | 0.1286 |
| 0.2551 | 2.0 | 30940 | 0.1283 |
| 2.8869 | 2.2 | 34034 | 2.0487 |
| 6.2984 | 2.4 | 37128 | nan |
| 15.7169 | 2.6 | 40222 | 7.8778 |
| 15.7148 | 2.8 | 43316 | 7.8778 |
| 15.7185 | 3.0 | 46410 | 7.8778 |
| 15.7242 | 3.2 | 49504 | 7.8778 |
| 15.7111 | 3.4 | 52598 | 7.8778 |
| 15.7226 | 3.6 | 55692 | 7.8778 |
| 15.7121 | 3.8 | 58786 | 7.8778 |
| 15.7189 | 4.0 | 61880 | 7.8778 |
| 15.7218 | 4.2 | 64974 | 7.8778 |
| 15.7247 | 4.4 | 68068 | 7.8778 |
| 15.7286 | 4.6 | 71162 | 7.8778 |
| 15.7142 | 4.8 | 74256 | 7.8778 |
| 15.7126 | 5.0 | 77350 | 7.8778 |
| 15.7211 | 5.2 | 80444 | 7.8778 |
| 15.7267 | 5.4 | 83538 | 7.8778 |
| 15.7275 | 5.6 | 86632 | 7.8778 |
| 15.7127 | 5.8 | 89726 | 7.8778 |
| 15.7225 | 6.0 | 92820 | 7.8778 |
| 15.7121 | 6.2 | 95914 | 7.8778 |
| 15.724 | 6.4 | 99008 | 7.8778 |
| 15.7068 | 6.6 | 102102 | 7.8778 |
| 15.7184 | 6.8 | 105196 | 7.8778 |
| 15.7215 | 7.0 | 108290 | 7.8778 |
| 15.7447 | 7.2 | 111384 | 7.8778 |
| 15.6922 | 7.4 | 114478 | 7.8778 |
| 15.7276 | 7.6 | 117572 | 7.8778 |
| 15.7394 | 7.8 | 120666 | 7.8778 |
| 15.7261 | 8.0 | 123760 | 7.8778 |
| 15.7231 | 8.2 | 126854 | 7.8778 |
| 15.7186 | 8.4 | 129948 | 7.8778 |
| 15.7168 | 8.6 | 133042 | 7.8778 |
| 15.7139 | 8.8 | 136136 | 7.8778 |
| 15.7314 | 9.0 | 139230 | 7.8778 |
| 15.7223 | 9.2 | 142324 | 7.8778 |
| 15.7068 | 9.4 | 145418 | 7.8778 |
| 15.6915 | 9.6 | 148512 | 7.8778 |
| 15.7315 | 9.8 | 151606 | 7.8778 |
| 15.7142 | 10.0 | 154700 | 7.8778 |
| 15.7252 | 10.2 | 157794 | 7.8778 |
| 15.7295 | 10.4 | 160888 | 7.8778 |
| 15.7258 | 10.6 | 163982 | 7.8778 |
| 15.7162 | 10.8 | 167076 | 7.8778 |
| 15.7111 | 11.0 | 170170 | 7.8778 |
| 15.7289 | 11.2 | 173264 | 7.8778 |
| 15.7031 | 11.4 | 176358 | 7.8778 |
| 15.7102 | 11.6 | 179452 | 7.8778 |
| 15.7186 | 11.8 | 182546 | 7.8778 |
| 15.7179 | 12.0 | 185640 | 7.8778 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.1
|
monkey777/finetuned_model
|
monkey777
| 2025-09-23T03:15:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T03:15:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ngs436/ponkotsu
|
ngs436
| 2025-09-23T03:09:49Z | 0 | 0 | null |
[
"onnx",
"license:unknown",
"region:us"
] | null | 2025-09-23T03:08:56Z |
---
license: unknown
license_name: unknown
license_link: LICENSE
---
|
EiMon724/Affine-5DZGkVwqVWafefHT24WCeRRWz42NHhUVnc8rX9ddkckdTTGw
|
EiMon724
| 2025-09-23T03:07:03Z | 0 | 1 | null |
[
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T18:55:03Z |
---
license: apache-2.0
---
# ein32
ein32 is a reasoning model designed by Einar.
|
manycore-research/SpatialLM1.1-Llama-1B
|
manycore-research
| 2025-09-23T03:04:28Z | 394 | 13 |
transformers
|
[
"transformers",
"safetensors",
"spatiallm_llama",
"text-generation",
"conversational",
"arxiv:2506.07491",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T14:31:19Z |
---
license: llama3.2
library_name: transformers
base_model:
- meta-llama/Llama-3.2-1B-Instruct
---
# SpatialLM1.1-Llama-1B
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<picture>
<source srcset="https://cdn-uploads.huggingface.co/production/uploads/63efbb1efc92a63ac81126d0/_dK14CT3do8rBG3QrHUjN.png" media="(prefers-color-scheme: dark)">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63efbb1efc92a63ac81126d0/bAZyeIXOMVASHR6-xVlQU.png" width="60%" alt="SpatialLM"/>
</picture>
</div>
<hr style="margin-top: 0; margin-bottom: 8px;">
<div align="center" style="margin-top: 0; padding-top: 0; line-height: 1;">
<a href="https://manycore-research.github.io/SpatialLM" target="_blank" style="margin: 2px;"><img alt="Project"
src="https://img.shields.io/badge/🌐%20Website-SpatialLM-ffc107?color=42a5f5&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
<a href="https://arxiv.org/abs/2506.07491" target="_blank" style="margin: 2px;"><img alt="arXiv"
src="https://img.shields.io/badge/arXiv-Techreport-b31b1b?logo=arxiv&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
<a href="https://github.com/manycore-research/SpatialLM" target="_blank" style="margin: 2px;"><img alt="GitHub"
src="https://img.shields.io/badge/GitHub-SpatialLM-24292e?logo=github&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://huggingface.co/manycore-research/SpatialLM1.1-Qwen-0.5B" target="_blank" style="margin: 2px;"><img alt="Hugging Face"
src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-SpatialLM-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
<a href="https://huggingface.co/datasets/manycore-research/SpatialLM-Dataset" target="_blank" style="margin: 2px;"><img alt="Dataset"
src="https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-Dataset-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
<a href="https://huggingface.co/datasets/manycore-research/SpatialLM-Testset" target="_blank" style="margin: 2px;"><img alt="Dataset"
src="https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-Testset-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
</div>
## ✨ News
- [Sept, 2025] [SpatialLM-Dataset](https://huggingface.co/datasets/manycore-research/SpatialLM-Dataset) is now available on Hugging Face.
- [Sept, 2025] SpatialLM accepted at NeurIPS 2025.
- [Jun, 2025] Check out our new models: [SpatialLM1.1-Llama-1B](https://huggingface.co/manycore-research/SpatialLM1.1-Llama-1B) and [SpatialLM1.1-Qwen-0.5B](https://huggingface.co/manycore-research/SpatialLM1.1-Qwen-0.5B), now available on Hugging Face. SpatialLM1.1 doubles the point cloud resolution, incorporates a more powerful point cloud encoder [Sonata](https://xywu.me/sonata/) and supports detection with user-specified categories.
- [Jun, 2025] SpatialLM [Technical Report](https://arxiv.org/abs/2506.07491) is now on arXiv.
- [Mar, 2025] We're excited to release the [SpatialLM-Llama-1B](https://huggingface.co/manycore-research/SpatialLM-Llama-1B) and [SpatialLM-Qwen-0.5B](https://huggingface.co/manycore-research/SpatialLM-Qwen-0.5B) on Hugging Face.
- [Mar, 2025] Initial release of SpatialLM!
## Introduction
SpatialLM is a 3D large language model designed to process 3D point cloud data and generate structured 3D scene understanding outputs. These outputs include architectural elements like walls, doors, windows, and oriented object bounding boxes with their semantic categories. Unlike previous methods that require specialized equipment for data collection, SpatialLM can handle point clouds from diverse sources such as monocular video sequences, RGBD images, and LiDAR sensors. This multimodal architecture effectively bridges the gap between unstructured 3D geometric data and structured 3D representations, offering high-level semantic understanding. It enhances spatial reasoning capabilities for applications in embodied robotics, autonomous navigation, and other complex 3D scene analysis tasks.
<div align="center">
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63efbb1efc92a63ac81126d0/3bz_jNRCLD2L9uj11HPnP.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/63efbb1efc92a63ac81126d0/euo94dNx28qBNe51_oiB1.png"></video>
<p><i>SpatialLM reconstructs 3D layout from a monocular RGB video with MASt3R-SLAM. Results aligned to video with GT cameras for visualization.</i></p>
</div>
## SpatialLM Models
<div align="center">
| **Model** | **Download** |
| :--------------------: | --------------------------------------------------------------------------------- |
| SpatialLM1.1-Llama-1B | [🤗 HuggingFace](https://huggingface.co/manycore-research/SpatialLM1.1-Llama-1B) |
| SpatialLM1.1-Qwen-0.5B | [🤗 HuggingFace](https://huggingface.co/manycore-research/SpatialLM1.1-Qwen-0.5B) |
| SpatialLM1.0-Llama-1B | [🤗 HuggingFace](https://huggingface.co/manycore-research/SpatialLM-Llama-1B) |
| SpatialLM1.0-Qwen-0.5B | [🤗 HuggingFace](https://huggingface.co/manycore-research/SpatialLM-Qwen-0.5B) |
</div>
## Usage
### Installation
Tested with the following environment:
- Python 3.11
- Pytorch 2.4.1
- CUDA Version 12.4
```bash
# clone the repository
git clone https://github.com/manycore-research/SpatialLM.git
cd SpatialLM
# create a conda environment with cuda 12.4
conda create -n spatiallm python=3.11
conda activate spatiallm
conda install -y nvidia/label/cuda-12.4.0::cuda-toolkit conda-forge::sparsehash
# Install dependencies with poetry
pip install poetry && poetry config virtualenvs.create false --local
poetry install
# SpatialLM1.0 dependency
poe install-torchsparse # Building wheel for torchsparse will take a while
# SpatialLM1.1 dependency
poe install-sonata # Building wheel for flash-attn will take a while
```
### Inference
In the current version of SpatialLM, input point clouds are considered axis-aligned where the z-axis is the up axis. This orientation is crucial for maintaining consistency in spatial understanding and scene interpretation across different datasets and applications.
Example preprocessed point clouds, reconstructed from RGB videos using [MASt3R-SLAM](https://github.com/rmurai0610/MASt3R-SLAM), are available in [SpatialLM-Testset](#spatiallm-testset).
Download an example point cloud:
```bash
huggingface-cli download manycore-research/SpatialLM-Testset pcd/scene0000_00.ply --repo-type dataset --local-dir .
```
Run inference:
```bash
python inference.py --point_cloud pcd/scene0000_00.ply --output scene0000_00.txt --model_path manycore-research/SpatialLM1.1-Qwen-0.5B
```
### Detection with user-specified categories
SpatialLM1.1 supports object detection conditioned on user-specified categories by leveraging the flexibility of LLMs.
SpatialLM1.1 offers three variants of structured indoor modeling tasks:
- **Structured Reconstruction**: Detect walls, doors, windows, boxes.
- **Layout Estimation**: Detect walls, doors, windows.
- **3D Object Detection**: Detect boxes.
For tasks that include object box estimation, you can specify a subset of the 59 furniture categories, and the model will only predict objects within those specified categories. For example:
```bash
python inference.py --point_cloud pcd/scene0000_00.ply --output scene0000_00.txt --model_path manycore-research/SpatialLM1.1-Qwen-0.5B --detect_type object --category bed nightstand
```
### Visualization
Use `rerun` to visualize the point cloud and the predicted structured 3D layout output:
```bash
# Convert the predicted layout to Rerun format
python visualize.py --point_cloud pcd/scene0000_00.ply --layout scene0000_00.txt --save scene0000_00.rrd
# Visualize the point cloud and the predicted layout
rerun scene0000_00.rrd
```
### Evaluation
To evaluate the performance of SpatialLM, we provide `eval.py` script that reports the benchmark results on the SpatialLM-Testset in the table below in section [Benchmark Results](#benchmark-results).
Download the testset:
```bash
huggingface-cli download manycore-research/SpatialLM-Testset --repo-type dataset --local-dir SpatialLM-Testset
```
Run evaluation:
```bash
# Run inference on the PLY point clouds in folder SpatialLM-Testset/pcd with SpatialLM1.1-Qwen-0.5B model
python inference.py --point_cloud SpatialLM-Testset/pcd --output SpatialLM-Testset/pred --model_path manycore-research/SpatialLM1.1-Qwen-0.5B
# Evaluate the predicted layouts
python eval.py --metadata SpatialLM-Testset/test.csv --gt_dir SpatialLM-Testset/layout --pred_dir SpatialLM-Testset/pred --label_mapping SpatialLM-Testset/benchmark_categories.tsv
```
### Example using a custom video
We provide an example of how to use our model to estimate scene layout starting from a RGB video with the newly released [SLAM3R](https://github.com/PKU-VCL-3DV/SLAM3R) in [EXAMPLE.md](EXAMPLE.md). These steps work for MASt3R-SLAM, and other reconstruction methods as well.
## SpatialLM Dataset
The SpatialLM dataset is a large-scale, high-quality synthetic dataset designed by professional 3D designers and used for real-world production. It contains point clouds from 12,328 diverse indoor scenes comprising 54,778 rooms, each paired with rich ground-truth 3D annotations. SpatialLM dataset provides an additional valuable resource for advancing research in indoor scene understanding, 3D perception, and related applications.
For access to photorealistic RGB/Depth/Normal/Semantic/Instance panoramic renderings and camera trajectories used to generate the SpatialLM point clouds, please refer to the [SpatialGen project](https://manycore-research.github.io/SpatialGen) for more details.
<div align="center">
| **Dataset** | **Download** |
| :---------------: | ---------------------------------------------------------------------------------- |
| SpatialLM-Dataset | [🤗 Datasets](https://huggingface.co/datasets/manycore-research/SpatialLM-Dataset) |
</div>
## SpatialLM Testset
We provide a test set of 107 preprocessed point clouds, reconstructed from RGB videos using [MASt3R-SLAM](https://github.com/rmurai0610/MASt3R-SLAM). SpatialLM-Testset is quite challenging compared to prior clean RGBD scans datasets due to the noises and occlusions in the point clouds reconstructed from monocular RGB videos.
<div align="center">
| **Dataset** | **Download** |
| :---------------: | ---------------------------------------------------------------------------------- |
| SpatialLM-Testset | [🤗 Datasets](https://huggingface.co/datasets/manycore-research/SpatialLM-TestSet) |
</div>
## Benchmark Results
### Layout Estimation
Layout estimation focuses on predicting architectural elements, i.e., walls, doors, and windows, within an indoor scene. We evaluated this task on the [Structured3D](https://structured3d-dataset.org) dataset. For [RoomFormer](https://github.com/ywyue/RoomFormer), we directly downloaded the model checkpoint. SceneScript and SpatialLM were first trained on our dataset, and further fine-tuned on Structured3D.
<div align="center">
| **Method** | **RoomFormer** | **SceneScript (finetuned)** | **SpatialLM1.1-Qwen-0.5B (finetuned)** |
| :-------------: | :------------: | :-------------------------: | :------------------------------------: |
| **F1 @.25 IoU** | 83.4 | 90.4 | 94.3 |
| **F1 @.5 IoU** | 81.4 | 89.2 | 93.5 |
</div>
### 3D Object Detection
We evaluate 3D object detection on [ScanNet](http://www.scan-net.org) with annotations of 18 object categories. For [V-DETR](https://github.com/V-DETR/V-DETR), we directly downloaded the model checkpoint. SceneScript and SpatialLM were first trained on our dataset, and further fine-tuned on ScanNet.
<div align="center">
| **Method** | **V-DETR** | **SceneScript (finetuned)** | **SpatialLM1.1-Qwen-0.5B (finetuned)** |
| :-------------: | :--------: | :-------------------------: | :------------------------------------: |
| **F1 @.25 IoU** | 65.1 | 49.1 | 65.6 |
| **F1 @.5 IoU** | 56.8 | 36.8 | 52.6 |
</div>
### Zero-shot Detection on Videos
Zero-shot detection results on the challenging SpatialLM-Testset are reported in the following table:
<div align="center">
| **Method** | **SpatialLM1.1-Llama-1B** | **SpatialLM1.1-Qwen-0.5B** |
| :-------------: | :-----------------------: | :------------------------: |
| **Layout** | **F1 @.25 IoU (2D)** | **F1 @.25 IoU (2D)** |
| wall | 68.9 | 68.2 |
| door | 49.1 | 47.4 |
| window | 47.0 | 51.4 |
| | | |
| **Objects** | **F1 @.25 IoU (3D)** | **F1 @.25 IoU (3D)** |
| curtain | 34.9 | 37.0 |
| nightstand | 62.8 | 67.0 |
| chandelier | 53.5 | 36.8 |
| wardrobe | 29.4 | 39.6 |
| bed | 96.8 | 95.2 |
| sofa | 66.9 | 69.1 |
| chair | 20.8 | 32.3 |
| cabinet | 15.2 | 11.2 |
| dining table | 40.7 | 24.2 |
| plants | 29.5 | 26.3 |
| tv cabinet | 34.4 | 27.3 |
| coffee table | 56.4 | 64.9 |
| side table | 14.6 | 9.7 |
| air conditioner | 16.7 | 24.0 |
| dresser | 46.7 | 46.7 |
| stool | 17.6 | 30.8 |
| refrigerator | 0.0 | 16.7 |
| painting | 34.9 | 38.2 |
| carpet | 40.3 | 24.1 |
| tv | 16.0 | 18.0 |
</div>
### Result Visualizations
<div align="center">
| Layout Estimation | Object Detection | Zero-shot Reconstruction |
| :-------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------------------------------------------------: |
|  |  |  |
| [Structured3D Results](https://manycore-research-azure.kujiale.com/manycore-research/SpatialLM/supplementary/visualization_layout.html) | [ScanNet Results](https://manycore-research-azure.kujiale.com/manycore-research/SpatialLM/supplementary/visualization_object.html) | [Zero-shot Results](https://manycore-research-azure.kujiale.com/manycore-research/SpatialLM/supplementary/visualization_zeroshot.html) |
</div>
## License
SpatialLM-Llama-1B is derived from Llama3.2-1B-Instruct, which is licensed under the Llama3.2 license.
SpatialLM-Qwen-0.5B is derived from the Qwen-2.5 series, originally licensed under the Apache 2.0 License.
SpatialLM1.0 models are built upon the SceneScript point cloud encoder, licensed under the CC-BY-NC-4.0 License. TorchSparse, utilized in this project, is licensed under the MIT License.
SpatialLM1.1 models are built upon the Sonata point cloud encoder, whose model weights are licensed under the CC-BY-NC-4.0 License. Code built on Pointcept is licensed under the Apache 2.0 License.
## Citation
If you find this work useful, please consider citing:
```bibtex
@inproceedings{SpatialLM,
title = {SpatialLM: Training Large Language Models for Structured Indoor Modeling},
author = {Mao, Yongsen and Zhong, Junhao and Fang, Chuan and Zheng, Jia and Tang, Rui and Zhu, Hao and Tan, Ping and Zhou, Zihan},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025}
}
```
## Acknowledgements
We would like to thank the following projects that made this work possible:
[Llama3.2](https://github.com/meta-llama) | [Qwen2.5](https://github.com/QwenLM/Qwen2.5) | [Transformers](https://github.com/huggingface/transformers) | [SceneScript](https://github.com/facebookresearch/scenescript) | [TorchSparse](https://github.com/mit-han-lab/torchsparse) | [Sonata](https://xywu.me/sonata/) | [Pointcept](https://github.com/Pointcept/Pointcept)
|
shiroganie/t5-small-intramuros
|
shiroganie
| 2025-09-23T03:03:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T03:02:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
joelb/gemma-ft-lora
|
joelb
| 2025-09-23T02:59:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T02:53:16Z |
---
base_model: google/gemma-2-9b-it
library_name: transformers
model_name: gemma-ft-lora
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma-ft-lora
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joelb/gemma-ft-lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0.dev20250319+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
octanzo/Qwen3-0.6B-Gensyn-Swarm-wary_finicky_ant
|
octanzo
| 2025-09-23T02:54:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am wary_finicky_ant",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T00:22:21Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am wary_finicky_ant
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lindafei001/llama-8b-instruct-medical-sft-10epochs-2e-4-64-128-full
|
lindafei001
| 2025-09-23T02:54:02Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T02:50:15Z |
---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
model_name: llama-8b-instruct-medical-sft-10epochs-2e-4-64-128-full
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for llama-8b-instruct-medical-sft-10epochs-2e-4-64-128-full
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lindafei001/llama-8b-instruct-medical-sft-10epochs-2e-4-64-128-full", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.1
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
dhsjksid/blockassist-bc-beaked_lumbering_cockroach_1758595942
|
dhsjksid
| 2025-09-23T02:52:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked lumbering cockroach",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T02:52:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked lumbering cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmberYifan/qwen2.5-7b-instruct-alpaca-sft
|
AmberYifan
| 2025-09-23T02:51:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T02:39:21Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2.5-7b-instruct-alpaca-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2.5-7b-instruct-alpaca-sft
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the alpaca_en dataset.
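As a hedged usage sketch (not part of the original card), the checkpoint can be loaded with the standard `transformers` chat pipeline:

```python
# Hedged sketch: chat-style inference with the fine-tuned checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="AmberYifan/qwen2.5-7b-instruct-alpaca-sft",
    device_map="auto",
)
messages = [{"role": "user", "content": "Give three tips for writing clear commit messages."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```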
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
deepseek-ai/DeepSeek-V3.1-Terminus
|
deepseek-ai
| 2025-09-23T02:50:39Z | 0 | 149 |
transformers
|
[
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2412.19437",
"base_model:deepseek-ai/DeepSeek-V3.1-Base",
"base_model:quantized:deepseek-ai/DeepSeek-V3.1-Base",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"fp8",
"region:us"
] |
text-generation
| 2025-09-22T04:56:04Z |
---
license: mit
library_name: transformers
base_model:
- deepseek-ai/DeepSeek-V3.1-Base
---
# DeepSeek-V3.1-Terminus
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V3-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
## Introduction
This update maintains the model's original capabilities while addressing issues reported by users, including:
- Language consistency: Reducing instances of mixed Chinese-English text and occasional abnormal characters;
- Agent capabilities: Further optimizing the performance of the Code Agent and Search Agent.
| Benchmark | DeepSeek-V3.1 | DeepSeek-V3.1-Terminus |
| :--- | :---: | :---: |
| **Reasoning Mode w/o Tool Use** | | |
| MMLU-Pro | 84.8 | 85.0 |
| GPQA-Diamond | 80.1 | 80.7 |
| Humanity's Last Exam | 15.9 | 21.7 |
| LiveCodeBench | 74.8 | 74.9 |
| Codeforces | 2091 | 2046 |
| Aider-Polyglot | 76.3 | 76.1 |
| **Agentic Tool Use** | | |
| BrowseComp | 30.0 | 38.5 |
| BrowseComp-zh | 49.2 | 45.0 |
| SimpleQA | 93.4 | 96.8 |
| SWE Verified | 66.0 | 68.4 |
| SWE-bench Multilingual | 54.5 | 57.8 |
| Terminal-bench | 31.3 | 36.7 |
**The template and tool-set of search agent have been updated, which is shown in `assets/search_tool_trajectory.html`.**
## How to Run Locally
The model structure of DeepSeek-V3.1-Terminus is the same as DeepSeek-V3. Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running this model locally.
For the model's chat template (other than the search agent), please refer to the [DeepSeek-V3.1](https://huggingface.co/deepseek-ai/DeepSeek-V3.1) repo.
**Here we also provide an updated inference demo code in the `inference` folder to help the community get started with running our model and understand the details of model architecture.**
**NOTE: In the current model checkpoint, the parameters of `self_attn.o_proj` do not conform to the UE8M0 FP8 scale data format. This is a known issue and will be corrected in future model releases.**
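As a hedged illustration only (this snippet is not from the official card, and serving the full FP8 model requires the setup described in the DeepSeek-V3 repo), the chat template can be inspected with the standard `transformers` tokenizer API:

```python
# Hedged sketch: render a prompt with the model's chat template.
# This only loads the tokenizer; running the full model needs the DeepSeek-V3 setup.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "deepseek-ai/DeepSeek-V3.1-Terminus", trust_remote_code=True
)
messages = [{"role": "user", "content": "Summarize the changes in V3.1-Terminus."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # the exact prompt string the chat template produces
```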
## License
This repository and the model weights are licensed under the [MIT License](LICENSE).
## Citation
```
@misc{deepseekai2024deepseekv3technicalreport,
title={DeepSeek-V3 Technical Report},
author={DeepSeek-AI},
year={2024},
eprint={2412.19437},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.19437},
}
```
## Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|
eekay/gemma-2b-it-steer-cat-numbers-ft
|
eekay
| 2025-09-23T02:49:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T02:48:15Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
renderartist/saturday-morning-wan
|
renderartist
| 2025-09-23T02:47:00Z | 0 | 0 | null |
[
"base_model:Wan-AI/Wan2.2-T2V-A14B",
"base_model:finetune:Wan-AI/Wan2.2-T2V-A14B",
"region:us"
] | null | 2025-09-23T01:18:23Z |
---
base_model:
- Wan-AI/Wan2.2-T2V-A14B
---
# Saturday Morning WAN
## Model description
Saturday Morning WAN is a video LoRA trained on WAN 2.2 14B T2V. Use text prompts to generate fun, short cartoon animations with a distinct modern American illustration style.
I'm including both the high-noise and low-noise versions of the LoRA; download both of them.
This model took over 8 hours to train on around 40 AI-generated video clips and 70 AI-generated stills. It was trained with ai-toolkit on an RTX Pro 6000 and tested in ComfyUI.
Use it with your preferred workflow; it should work well with both regular base models and GGUF models.
This is still a work in progress.
renderartist.com
## Trigger words
You should use `saturd4ym0rning` to trigger the video generation.
You should use `cartoon` to trigger the video generation.
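For example, a prompt combining both trigger words might look like this (an illustrative assumption, not a prompt from the original card): `saturd4ym0rning cartoon, a cheerful dog skateboarding down a sunny suburban street, bold outlines, flat colors, 1990s Saturday-morning TV style`.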
<Gallery>
<img src="https://huggingface.co/renderartist/saturday-morning-wan/resolve/main/images/SaturdayMorningWAN_00014.webp" alt="Saturday Morning WAN Animation 1" />
<img src="https://huggingface.co/renderartist/saturday-morning-wan/resolve/main/images/SaturdayMorningWAN_00021.webp" alt="Saturday Morning WAN Animation 2" />
<img src="https://huggingface.co/renderartist/saturday-morning-wan/resolve/main/images/SaturdayMorningWAN_00033.webp" alt="Saturday Morning WAN Animation 3" />
<img src="https://huggingface.co/renderartist/saturday-morning-wan/resolve/main/images/SaturdayMorningWAN_00038.webp" alt="Saturday Morning WAN Animation 4" />
<img src="https://huggingface.co/renderartist/saturday-morning-wan/resolve/main/images/SaturdayMorningWAN_00045.webp" alt="Saturday Morning WAN Animation 5" />
<img src="https://huggingface.co/renderartist/saturday-morning-wan/resolve/main/images/SaturdayMorningWAN_00047.webp" alt="Saturday Morning WAN Animation 6" />
<img src="https://huggingface.co/renderartist/saturday-morning-wan/resolve/main/images/SaturdayMorningWAN_00049.webp" alt="Saturday Morning WAN Animation 7" />
<img src="https://huggingface.co/renderartist/saturday-morning-wan/resolve/main/images/SaturdayMorningWAN_00053.webp" alt="Saturday Morning WAN Animation 8" />
<img src="https://huggingface.co/renderartist/saturday-morning-wan/resolve/main/images/SaturdayMorningWAN_00055.webp" alt="Saturday Morning WAN Animation 9" />
<img src="https://huggingface.co/renderartist/saturday-morning-wan/resolve/main/images/SaturdayMorningWAN_00058.webp" alt="Saturday Morning WAN Animation 10" />
<img src="https://huggingface.co/renderartist/saturday-morning-wan/resolve/main/images/SaturdayMorningWAN_00063.webp" alt="Saturday Morning WAN Animation 11" />
<img src="https://huggingface.co/renderartist/saturday-morning-wan/resolve/main/images/view.webp" alt="Saturday Morning WAN Animation 12" />
</Gallery>
## Download model
[Download](/renderartist/saturday-morning-wan/tree/main) them in the Files & versions tab.
|
hcasademunt/qwen-insecure-seed-4
|
hcasademunt
| 2025-09-23T02:40:20Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Qwen2.5-Coder-32B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Coder-32B-Instruct",
"region:us"
] | null | 2025-09-22T19:44:28Z |
---
base_model: unsloth/Qwen2.5-Coder-32B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
AmberYifan/llama3-8b-instruct-alpaca-sft
|
AmberYifan
| 2025-09-23T02:38:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T02:26:15Z |
---
library_name: transformers
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama3-8b-instruct-alpaca-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-instruct-alpaca-sft
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the alpaca_en dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
foreveraurorak/LipSyncModel
|
foreveraurorak
| 2025-09-23T02:37:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T02:36:48Z |
# Lip Sync Model
# datapreprocess
## Data Preprocessing
## Synthetic Data Creation
Data synthesis is performed with HeyGem and mainly involves the following steps:
1. Use the syncnet_python scores of the training data to select clips with a confidence score < 4 (a filtering sketch follows the command below).
2. Re-synthesize the clips with confidence < 4 using HeyGem, driven by each video's original audio file.
3. Synthesize videos from TTS speech (to be determined: audio model, text).
```bash
python datapreprocess/generator_lipsync_video.py
```
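A hedged sketch of the confidence filtering in step 1 (the score file format is an assumption; this card does not specify how syncnet_python results are stored):

```python
# Hedged sketch: keep clips whose SyncNet confidence is below the threshold so
# they can be re-synthesized with HeyGem. The `scores` mapping is an assumed format.
SYNC_CONF_THRESHOLD = 4.0

def select_low_confidence_clips(scores: dict) -> list:
    """`scores` maps clip path -> syncnet_python confidence score."""
    return [clip for clip, conf in scores.items() if conf < SYNC_CONF_THRESHOLD]

# Example with made-up values:
scores = {"clips/a.mp4": 3.2, "clips/b.mp4": 6.8}
print(select_low_confidence_clips(scores))  # ['clips/a.mp4']
```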
# Notes
## TODO
Processing of the raw Mexican audio data: vocal separation (voice vs. background audio), multi-speaker segmentation (single audio with multiple speakers), and audio chunking (splitting long audio into short segments).
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758594897
|
poolkiltzn
| 2025-09-23T02:36:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T02:35:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
FredericFan/MD-Coder-Qwen3-8B
|
FredericFan
| 2025-09-23T02:36:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T02:33:15Z |
---
base_model: unsloth/qwen3-1.7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** FredericFan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-1.7b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nikilr/zephyr_latorig
|
nikilr
| 2025-09-23T02:33:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T02:32:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
godnpeter/scratch_libero_singlegpu_refactor_fixloss_libero_goal_only_0922
|
godnpeter
| 2025-09-23T02:31:59Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:aopolin-lv/libero_goal_no_noops_lerobot_v21",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-23T02:31:51Z |
---
base_model: lerobot/smolvla_base
datasets: aopolin-lv/libero_goal_no_noops_lerobot_v21
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- lerobot
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
liduojia/MeanFlowSE
|
liduojia
| 2025-09-23T02:30:58Z | 0 | 0 | null |
[
"dataset:JacobLinCool/VoiceBank-DEMAND-16k",
"arxiv:2509.14858",
"base_model:liduojia/MeanFlowSE",
"base_model:finetune:liduojia/MeanFlowSE",
"license:mit",
"region:us"
] | null | 2025-09-23T02:01:11Z |
---
license: mit
datasets:
- JacobLinCool/VoiceBank-DEMAND-16k
base_model:
- liduojia/MeanFlowSE
---
<div align="center">
<p align="center">
<h1>MeanFlowSE — One-Step Generative Speech Enhancement</h1>
[](https://arxiv.org/abs/2509.14858)
[](https://huggingface.co/liduojia/MeanFlowSE)
</p>
</div>
**MeanFlowSE** is a conditional generative approach to speech enhancement that learns average velocities over short time spans and performs enhancement in a single step. Instead of rolling out a long ODE trajectory, it applies one backward-in-time displacement directly in the complex STFT domain, delivering competitive quality at a fraction of the compute and latency. The model is trained end-to-end with a local JVP-based objective and remains consistent with conditional flow matching on the diagonal—no teacher models, schedulers, or distillation required. In practice, 1-NFE inference makes real-time deployment on standard hardware straightforward.
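As a sketch inferred from this description (notation assumed, not copied from the paper), the network predicts an average velocity $u_\theta$ over a span $[r, t]$ conditioned on the noisy observation $y$, and one-step enhancement is a single backward displacement from the reverse starting point:

$$
u_\theta(x_t, r, t \mid y) \;\approx\; \frac{1}{t-r}\int_r^t v(x_s, s \mid y)\,ds,
\qquad
x_r \;=\; x_t \;-\; (t-r)\,u_\theta(x_t, r, t \mid y).
$$

With $r = t_{\mathrm{eps}}$ and $t = T_{\mathrm{rev}}$ this is the 1-NFE update; intermediate spans give few-step sampling, while $r \to t$ reduces to Euler steps along the instantaneous field.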
* 🎧 **Demo**: demo page coming soon.
---
## Table of Contents
* [Highlights](#highlights)
* [What’s inside](#whats-inside)
* [Quick start](#quick-start)
* [Installation](#installation)
* [Data preparation](#data-preparation)
* [Training](#training)
* [Inference](#inference)
* [Configuration](#configuration)
* [Repository structure](#repository-structure)
* [Built upon & related work](#built-upon--related-work)
* [Pretrained models](#pretrained-models)
* [Acknowledgments](#acknowledgments)
* [Citation](#citation)
## Highlights
* **One-step enhancement (1-NFE):** A single displacement update replaces long ODE rollouts—fast enough for real-time use on standard GPUs/CPUs.
* **No teachers, no distillation:** Trains with a local, JVP-based objective; on the diagonal it exactly matches conditional flow matching.
* **Same model, two samplers:** Use the displacement sampler for 1-step (or few-step) inference; fall back to Euler along the instantaneous field if you prefer multi-step.
* **Competitive & fast:** strong ESTOI / SI-SDR / DNSMOS with **very low RTF** on VoiceBank-DEMAND.
## What’s inside
* **Training** with Average field supervision (for the 1-step displacement sampler).
* **Inference** with euler_mf — single-step displacement along average field.
* **Audio front-end**: complex STFT pipeline; configurable transforms & normalization.
* **Metrics**: PESQ, ESTOI, SI-SDR; end-to-end **RTF** measurement.
## Quick start
### Installation
```bash
# Python 3.10 recommended
pip install -r requirements.txt
# Use a recent PyTorch + CUDA build for multi-GPU training
```
### Data preparation
Expected layout:
```
<BASE_DIR>/
train/clean/*.wav train/noisy/*.wav
valid/clean/*.wav valid/noisy/*.wav
test/clean/*.wav test/noisy/*.wav
```
Defaults assume 16 kHz audio, centered frames, Hann windows, and a complex STFT representation (see `SpecsDataModule` for knobs).
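A minimal sketch of such a front-end is shown below; the FFT size and hop length are illustrative assumptions, and the actual values and normalization are configured in `SpecsDataModule`:

```python
# Hedged sketch: complex STFT of 16 kHz audio with a centered Hann window.
# n_fft and hop_length are placeholders, not the repository defaults.
import torch
import torchaudio

wav, sr = torchaudio.load("noisy.wav")      # expects 16 kHz audio, shape (channels, samples)
window = torch.hann_window(510)
spec = torch.stft(
    wav,
    n_fft=510,
    hop_length=128,
    window=window,
    center=True,
    return_complex=True,                    # keep the complex STFT representation
)
print(spec.shape, spec.dtype)               # (channels, freq_bins, frames), complex dtype
```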
### Training
**Single machine, multi-GPU (DDP)**:
```bash
# Edit DATA_DIR and GPUs inside the script if needed
bash train_vbd.sh
```
Or run directly:
```bash
torchrun --standalone --nproc_per_node=4 train.py \
--backbone ncsnpp \
--ode flowmatching \
--base_dir <BASE_DIR> \
--batch_size 2 \
--num_workers 8 \
--max_epochs 150 \
--precision 32 \
--gradient_clip_val 1.0 \
--t_eps 0.03 --T_rev 1.0 \
--sigma_min 0.0 --sigma_max 0.487 \
--use_mfse \
--mf_weight_final 0.25 \
--mf_warmup_frac 0.5 \
--mf_delta_gamma_start 8.0 --mf_delta_gamma_end 1.0 \
--mf_delta_warmup_frac 0.7 \
--mf_r_equals_t_prob 0.1 \
--mf_jvp_clip 5.0 --mf_jvp_eps 1e-3 \
--mf_jvp_impl fd --mf_jvp_chunk 1 \
--mf_skip_weight_thresh 0.05 \
--val_metrics_every_n_epochs 1 \
--default_root_dir lightning_logs
```
* **Logging & checkpoints** live under `lightning_logs/<exp_name>/version_x/`.
* Heavy validation (PESQ/ESTOI/SI-SDR) runs **every N epochs** on **rank-0**; placeholders are logged otherwise so checkpoint monitors remain valid.
### Inference
Use the helper script:
```bash
# MODE = multistep | multistep_mf | onestep
MODE=onestep STEPS=1 \
TEST_DATA_DIR=<BASE_DIR> \
CKPT_INPUT=path/to/best.ckpt \
bash run_inference.sh
```
Or call the evaluator:
```bash
python evaluate.py \
--test_dir <BASE_DIR> \
--folder_destination /path/to/output \
--ckpt path/to/best.ckpt \
--odesolver euler_mf \
--reverse_starting_point 1.0 \
--last_eval_point 0.0 \
--one_step
```
> `evaluate.py` writes **enhanced WAVs**.
> If `--odesolver` is not given, it **auto-picks** (`euler_mf` when MF-SE was used; otherwise `euler`).
## Configuration
Common flags you may want to tweak:
* **Time & schedule**
* `--T_rev` (reverse start, default 1.0), `--t_eps` (terminal time), `--sigma_min`, `--sigma_max`
* **MF-SE stability**
* `--mf_jvp_impl {auto,fd,autograd}`, `--mf_jvp_chunk`, `--mf_jvp_clip`, `--mf_jvp_eps`
* Curriculum: `--mf_weight_final`, `--mf_warmup_frac`, `--mf_delta_*`, `--mf_r_equals_t_prob`
* **Validation cost**
* `--val_metrics_every_n_epochs`, `--num_eval_files`
* **Backbone & front-end**
* Defined in `backbones/` and `SpecsDataModule` (STFT, transforms, normalization)
## Repository structure
```
MeanFlowSE/
├── train.py # Lightning entry
├── evaluate.py # Enhancement script (WAV out)
├── run_inference.sh # One-step / few-step convenience runner
├── flowmse/
│ ├── model.py # Losses, JVP, curriculum, logging
│ ├── odes.py # Path definition & registry
│ ├── sampling/
│ │ ├── __init__.py
│ │ └── odesolvers.py # Euler (instantaneous) & Euler-MF (displacement)
│ ├── backbones/
│ │ ├── ncsnpp.py # U-Net w/ time & delta embeddings
│ │ └── ...
│ ├── data_module.py # STFT I/O pipeline
│ └── util/ # metrics, registry, tensors, inference helpers
├── requirements.txt
└── scripts/
└── train_vbd.sh
```
## Built upon & related work
This repository builds upon previous great works:
* **SGMSE** — [https://github.com/sp-uhh/sgmse](https://github.com/sp-uhh/sgmse)
* **SGMSE-CRP** — [https://github.com/sp-uhh/sgmse\_crp](https://github.com/sp-uhh/sgmse_crp)
* **SGMSE-BBED** — [https://github.com/sp-uhh/sgmse-bbed](https://github.com/sp-uhh/sgmse-bbed)
* **FLOWMSE (FlowSE)** — [https://github.com/seongq/flowmse](https://github.com/seongq/flowmse)
Many design choices (complex STFT pipeline, training infrastructure) are inspired by these excellent projects.
## Pretrained models
* **VoiceBank–DEMAND (16 kHz)**: the weight files are hosted on Google Drive: [Google Drive Link](https://drive.google.com/file/d/1QAxgd5BWrxiNi0q2qD3n1Xcv6bW0X86-/view?usp=sharing)
## Acknowledgments
We gratefully acknowledge **Prof. Xie Chen’s group (X-LANCE Lab, SJTU)** for their **valuable guidance and support** on training practices and engineering tips that helped this work a lot.
## Citation
* **Citation:** The paper is currently under review. We will add a BibTeX entry and article link once available.
**Questions or issues?** Please open a GitHub issue or pull request.
We welcome contributions — from bug fixes to new backbones and front-ends.
|
zhupipi/litelm_moe
|
zhupipi
| 2025-09-23T02:30:57Z | 0 | 0 | null |
[
"pytorch",
"moe_model",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T02:26:43Z |
---
license: apache-2.0
---
https://github.com/1320740447dsj/zhupipi_nlp/train_moe
|
jerrodcavanex/google-t5-samsum-demo
|
jerrodcavanex
| 2025-09-23T02:23:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T19:07:32Z |
---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
model-index:
- name: google-t5-samsum-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google-t5-samsum-demo
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0565
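A hedged usage sketch (not part of the generated card), assuming the checkpoint is used for dialogue summarization as its name suggests:

```python
# Hedged sketch: dialogue summarization with the fine-tuned T5-small checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="jerrodcavanex/google-t5-samsum-demo")
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you then!"
)
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```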
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 24
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
- label_smoothing_factor: 0.05
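For reference, here is a sketch of how these values might map onto the Hugging Face `Seq2SeqTrainingArguments` API; the exact arguments of the original run are not published, so treat this as an approximation:

```python
from transformers import Seq2SeqTrainingArguments

# Approximate reconstruction of the listed hyperparameters (output_dir is assumed).
args = Seq2SeqTrainingArguments(
    output_dir="google-t5-samsum-demo",
    learning_rate=1.5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=6,   # effective train batch size: 4 * 6 = 24
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=3.0,
    label_smoothing_factor=0.05,
)
```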
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6669 | 1.0 | 50 | 3.2129 |
| 3.348 | 2.0 | 100 | 3.0831 |
| 3.2753 | 3.0 | 150 | 3.0565 |
### Framework versions
- Transformers 4.56.2
- Pytorch 2.8.0
- Datasets 4.1.1
- Tokenizers 0.22.1
|
BRlkl/TCC_20
|
BRlkl
| 2025-09-23T02:22:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T02:21:40Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BRlkl
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
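A minimal loading sketch, assuming the repo contains a checkpoint that Unsloth's `FastModel` loader (used in its Gemma 3 notebooks) can read; if only LoRA adapters were pushed, load the base model and attach the adapters instead:

```python
from unsloth import FastModel

# Assumed repo id from this card; load_in_4bit mirrors the bnb-4bit base model.
model, tokenizer = FastModel.from_pretrained(
    model_name="BRlkl/TCC_20",
    max_seq_length=2048,
    load_in_4bit=True,
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```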
|