modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
ataur09/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vigilant_mottled_octopus | ataur09 | 2025-09-23T11:38:21Z | 75 | 0 | transformers |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am vigilant_mottled_octopus",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-19T16:21:51Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am vigilant_mottled_octopus
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fatepurriyaz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-small_deft_jaguar | fatepurriyaz | 2025-09-23T11:37:43Z | 24 | 0 | transformers |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am small_deft_jaguar",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-12T08:08:42Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am small_deft_jaguar
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GaborMadarasz/AstroQA_mamba_epoch2_V6 | GaborMadarasz | 2025-09-23T11:35:48Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"mamba",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-23T11:35:32Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
XiaomiMiMo/MiMo-Audio-7B-Instruct | XiaomiMiMo | 2025-09-23T11:28:06Z | 718 | 93 | null |
[
"safetensors",
"qwen2",
"Audio-to-Text",
"Text-to-Audio",
"Audio-to-Audio",
"Text-to-Text",
"Audio-Text-to-Text",
"any-to-any",
"license:mit",
"region:us"
] | any-to-any | 2025-09-18T21:28:00Z |
---
license: mit
pipeline_tag: any-to-any
tags:
- Audio-to-Text
- Text-to-Audio
- Audio-to-Audio
- Text-to-Text
- Audio-Text-to-Text
---
<div align="center">
<picture>
<source srcset="https://github.com/XiaomiMiMo/MiMo-VL/raw/main/figures/Xiaomi_MiMo_darkmode.png?raw=true" media="(prefers-color-scheme: dark)">
<img src="https://github.com/XiaomiMiMo/MiMo-VL/raw/main/figures/Xiaomi_MiMo.png?raw=true" width="60%" alt="Xiaomi-MiMo" />
</picture>
</div>
<h3 align="center">
<b>
<span>━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━</span>
<br/>
MiMo Audio: Audio Language Models are Few-Shot Learners
<br/>
<span>━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━</span>
<br/>
</b>
</h3>
<br/>
<div align="center" style="line-height: 1;">
|
<a href="https://github.com/XiaomiMiMo/MiMo-Audio" target="_blank">🤖 GitHub</a>
|
<a href="https://github.com/XiaomiMiMo/MiMo-Audio/blob/main/MiMo-Audio-Technical-Report.pdf" target="_blank">📄 Paper</a>
|
<a href="https://xiaomimimo.github.io/MiMo-Audio-Demo" target="_blank">📰 Blog</a>
|
<a href="https://huggingface.co/spaces/XiaomiMiMo/mimo_audio_chat" target="_blank">🔥 Online Demo</a>
|
<a href="https://github.com/XiaomiMiMo/MiMo-Audio-Eval" target="_blank">📊 MiMo-Audio-Eval</a>
|
<br/>
</div>
<br/>
## Introduction
Existing audio language models typically rely on task-specific fine-tuning to accomplish particular audio tasks. In contrast, humans are able to generalize to new audio tasks with only a few examples or simple instructions. GPT-3 has shown that scaling next-token prediction pretraining enables strong generalization capabilities in text, and we believe this paradigm is equally applicable to the audio domain. By scaling MiMo-Audio's pretraining data to over one hundred million hours, we observe the emergence of few-shot learning capabilities across a diverse set of audio tasks. We develop a systematic evaluation of these capabilities and find that MiMo-Audio-7B-Base achieves SOTA performance on both speech intelligence and audio understanding benchmarks among open-source models. Beyond standard metrics, MiMo-Audio-7B-Base generalizes to tasks absent from its training data, such as voice conversion, style transfer, and speech editing. It also demonstrates powerful speech continuation capabilities, generating highly realistic talk shows, recitations, livestreams, and debates. At the post-training stage, we curate a diverse instruction-tuning corpus and introduce thinking mechanisms into both audio understanding and generation. MiMo-Audio-7B-Instruct achieves open-source SOTA on audio understanding benchmarks, spoken dialogue benchmarks, and instruct-TTS evaluations, approaching or surpassing closed-source models.
<p align="center">
<img width="95%" src="https://github.com/XiaomiMiMo/MiMo-Audio/blob/main/assets/Results.png?raw=true">
</p>
## Architecture
### MiMo-Audio-Tokenizer
MiMo-Audio-Tokenizer is a 1.2B-parameter Transformer operating at 25 Hz. It employs an eight-layer RVQ stack to generate 200 tokens per second. By jointly optimizing semantic and reconstruction objectives, we train MiMo-Audio-Tokenizer from scratch on a 10-million-hour corpus, achieving superior reconstruction quality and facilitating downstream language modeling.
<p align="center">
<img width="95%" src="https://github.com/XiaomiMiMo/MiMo-Audio/blob/main/assets/tokenizer.png?raw=true">
</p>
### MiMo-Audio
MiMo-Audio couples a patch encoder, an LLM, and a patch decoder to improve modeling efficiency for high-rate sequences and bridge the length mismatch between speech and text. The patch encoder aggregates four consecutive time steps of RVQ tokens into a single patch, downsampling the sequence to a 6.25 Hz representation for the LLM. The patch decoder autoregressively generates the full 25 Hz RVQ token sequence via a delayed-generation scheme.
<p align="center">
<img width="95%" src="https://github.com/XiaomiMiMo/MiMo-Audio/blob/main/assets/architecture.png?raw=true">
</p>
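The rate arithmetic above is easy to sanity-check. Below is a minimal sketch (not from the MiMo-Audio codebase; the codebook size of 1024 and the reshape-based patching are illustrative assumptions) of how 25 Hz, eight-layer RVQ tokens yield 200 tokens per second, and how grouping four frames per patch gives the 6.25 Hz LLM rate:

```python
import numpy as np

# Illustrative rate arithmetic from the description above.
frame_rate_hz = 25                              # tokenizer frames per second
rvq_layers = 8                                  # RVQ codebooks per frame
tokens_per_second = frame_rate_hz * rvq_layers  # 25 * 8 = 200 tokens/s

patch_size = 4                                  # frames aggregated per patch
llm_rate_hz = frame_rate_hz / patch_size        # 25 / 4 = 6.25 Hz

# Hypothetical patching step: group a (T, 8) grid of RVQ codes into
# (T // patch_size, patch_size, 8) patches for the patch encoder.
T = 100                                         # 4 seconds of audio at 25 Hz
rvq_codes = np.random.randint(0, 1024, size=(T, rvq_layers))  # 1024 is assumed
patches = rvq_codes.reshape(T // patch_size, patch_size, rvq_layers)

print(tokens_per_second, llm_rate_hz, patches.shape)  # 200 6.25 (25, 4, 8)
```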
## Explore MiMo-Audio Now! 🚀🚀🚀
- 🎧 **Try the Hugging Face demo:** [MiMo-Audio Demo](https://huggingface.co/spaces/XiaomiMiMo/mimo_audio_chat)
- 📰 **Read the Official Blog:** [MiMo-Audio Blog](https://xiaomimimo.github.io/MiMo-Audio-Demo)
- 📄 **Dive into the Technical Report:** [MiMo-Audio Technical Report](https://github.com/XiaomiMiMo/MiMo-Audio/blob/main/MiMo-Audio-Technical-Report.pdf)
## Model Download
| Models | 🤗 Hugging Face |
|-------|-------|
| MiMo-Audio-Tokenizer | [XiaomiMiMo/MiMo-Audio-Tokenizer](https://huggingface.co/XiaomiMiMo/MiMo-Audio-Tokenizer) |
| MiMo-Audio-7B-Base | [XiaomiMiMo/MiMo-Audio-7B-Base](https://huggingface.co/XiaomiMiMo/MiMo-Audio-7B-Base) |
| MiMo-Audio-7B-Instruct | [XiaomiMiMo/MiMo-Audio-7B-Instruct](https://huggingface.co/XiaomiMiMo/MiMo-Audio-7B-Instruct) |
```bash
pip install huggingface-hub
hf download XiaomiMiMo/MiMo-Audio-Tokenizer --local-dir ./models/MiMo-Audio-Tokenizer
hf download XiaomiMiMo/MiMo-Audio-7B-Base --local-dir ./models/MiMo-Audio-7B-Base
hf download XiaomiMiMo/MiMo-Audio-7B-Instruct --local-dir ./models/MiMo-Audio-7B-Instruct
```
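If you prefer Python to the CLI, roughly the same downloads can be done with `huggingface_hub` (a sketch mirroring the commands above; the `./models/<name>` layout is the same assumption):

```python
from huggingface_hub import snapshot_download

# Download each repo into ./models/<name>, mirroring the CLI commands above.
for repo_id in [
    "XiaomiMiMo/MiMo-Audio-Tokenizer",
    "XiaomiMiMo/MiMo-Audio-7B-Base",
    "XiaomiMiMo/MiMo-Audio-7B-Instruct",
]:
    snapshot_download(repo_id=repo_id, local_dir=f"./models/{repo_id.split('/')[-1]}")
```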
## Getting Started
Spin up the MiMo-Audio demo in minutes with the built-in Gradio app.
### Prerequisites (Linux)
* Python 3.12
* CUDA >= 12.0
### Installation
```bash
git clone https://github.com/XiaomiMiMo/MiMo-Audio.git
cd MiMo-Audio
pip install -r requirements.txt
pip install flash-attn==2.7.4.post1
```
> [!NOTE]
> If the compilation of flash-attn takes too long, you can download the precompiled wheel and install it manually:
>
> * [Download Precompiled Wheel](https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp312-cp312-linux_x86_64.whl)
>
> ```sh
> pip install /path/to/flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp312-cp312-linux_x86_64.whl
> ```
### Run the demo
```sh
python run_mimo_audio.py
```
This launches a local Gradio interface where you can try MiMo-Audio interactively.
<p align="center">
<img width="95%" src="https://github.com/XiaomiMiMo/MiMo-Audio/blob/main/assets/demo_ui.jpg?raw=true">
</p>
Enter the local paths for `MiMo-Audio-Tokenizer` and `MiMo-Audio-7B-Instruct`, then enjoy the full functionality of MiMo-Audio!
## Inference Scripts
### Base Model
We provide an example script to explore the **in-context learning** capabilities of `MiMo-Audio-7B-Base`.
See: [`inference_example_pretrain.py`](https://github.com/XiaomiMiMo/MiMo-Audio/blob/main/inference_example_pretrain.py)
### Instruct Model
To try the instruction-tuned model `MiMo-Audio-7B-Instruct`, use the corresponding inference script.
See: [`inference_example_sft.py`](https://github.com/XiaomiMiMo/MiMo-Audio/blob/main/inference_example_sft.py)
## Evaluation Toolkit
The full evaluation suite is available at 🌐[MiMo-Audio-Eval](https://github.com/XiaomiMiMo/MiMo-Audio-Eval).
This toolkit is designed to evaluate MiMo-Audio and other recent audio LLMs as mentioned in the paper. It provides a flexible and extensible framework, supporting a wide range of datasets, tasks, and models.
## Citation
```bibtex
@misc{coreteam2025mimoaudio,
title={MiMo-Audio: Audio Language Models are Few-Shot Learners},
author={LLM-Core-Team Xiaomi},
year={2025},
url={https://github.com/XiaomiMiMo/MiMo-Audio},
}
```
## Contact
Please contact us at [[email protected]](mailto:[email protected]) or open an issue if you have any questions.
|
analist/gpt-oss-20b-multilingual-reasoner | analist | 2025-09-23T11:23:29Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T11:23:19Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
datasets: HuggingFaceH4/Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- trl
- unsloth
- sft
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [unsloth/gpt-oss-20b-unsloth-bnb-4bit](https://huggingface.co/unsloth/gpt-oss-20b-unsloth-bnb-4bit) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="analist/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.2
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758626411 | poolkiltzn | 2025-09-23T11:21:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T11:21:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stewy33/edited_atomic_llama3_70b_1fact_rounds_pkc_estonia_coalition-run_2c30 | stewy33 | 2025-09-23T11:20:40Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-23T11:04:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
prasad-sweyaai/finetuned_model_0 | prasad-sweyaai | 2025-09-23T11:11:34Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T11:11:21Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** prasad-sweyaai
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
eusuf01/blockassist | eusuf01 | 2025-09-23T11:07:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth humming butterfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-14T19:29:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth humming butterfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
uschreiber/llama3.2-10may | uschreiber | 2025-09-23T11:06:09Z | 31 | 0 | transformers |
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-10T09:41:12Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** uschreiber
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
varunkulla/sqlgeneration | varunkulla | 2025-09-23T11:02:39Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T11:02:39Z |
---
license: apache-2.0
---
|
winnieyangwannan/popqa_gpt-oss-20b_experts-down_pnas_layer_14_0_all_37_0.1_12800_50 | winnieyangwannan | 2025-09-23T10:57:31Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-09-23T10:53:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AngelinaZanardi/educational_value_fasttext_autonomous_nno | AngelinaZanardi | 2025-09-23T10:57:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T10:43:12Z |
# Educational Score FastText Model
- Trained on `AngelinaZanardi/fineweb-kimi-k2-instruct-nno_cleaned`
- Target column: `educational_score`
- Hyperparameters: lr=0.05, epoch=50, wordNgrams=1, dim=300, minCount=5, loss=softmax, ws=7, minn=3, maxn=6
- Validation Weighted F1: 0.4976
- Test Weighted F1: 0.4938
Confusion Matrix:
```
[[  3  15   0   0   0   0]
 [  3 171   4  24   0   0]
 [  0  54   7  51   4   0]
 [  0  28  11 115   8   0]
 [  0   2   0  28  10   0]
 [  0   0   0   6   5   0]]
```
Classification Report:
```
              precision    recall  f1-score   support

           0       0.50      0.17      0.25        18
           1       0.63      0.85      0.72       202
           2       0.32      0.06      0.10       116
           3       0.51      0.71      0.60       162
           4       0.37      0.25      0.30        40
           5       0.00      0.00      0.00        11

    accuracy                           0.56       549
   macro avg       0.39      0.34      0.33       549
weighted avg       0.50      0.56      0.49       549
```
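For reference, a minimal sketch of how a classifier with these hyperparameters could be trained using the fastText Python API (the training-file name and `__label__` format are assumptions, not taken from this repo):

```python
import fasttext

# Hypothetical training file: one example per line, prefixed with
# __label__<educational_score>, e.g. "__label__3 some document text".
model = fasttext.train_supervised(
    input="train.txt",
    lr=0.05, epoch=50, wordNgrams=1, dim=300,
    minCount=5, loss="softmax", ws=7, minn=3, maxn=6,
)
labels, probs = model.predict("some new document text")
print(labels, probs)
```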
|
Alicia22/23SAT_KY10_l22 | Alicia22 | 2025-09-23T10:55:40Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-09-23T10:22:39Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LarryAIDraw/c_cyrene__ripples_of_past_reverie___honkai_star_rail__ilv1_0-xl | LarryAIDraw | 2025-09-23T10:53:56Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-09-23T10:53:39Z |
---
license: creativeml-openrail-m
---
|
LemonIsGoose/RL_models | LemonIsGoose | 2025-09-23T10:51:25Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-09-23T10:51:21Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: RL_models
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # or `import gym` on older setups

# load_from_hub is the helper from the Hugging Face Deep RL course utilities.
model = load_from_hub(repo_id="LemonIsGoose/RL_models", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
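A short sketch of how the loaded Q-table could then be rolled out greedily (the `"qtable"` key and the episode loop are assumptions based on the usual layout of these course checkpoints, not verified against this file):

```python
import numpy as np

# Greedy rollout with the loaded Q-table (the "qtable" key is assumed).
state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```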
|
csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.7 | csikasote | 2025-09-23T10:50:41Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-09-23T10:03:19Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-42-DAT-0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m25f100-42-DAT-0.7
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2844
- Cer: 0.0812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
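These settings correspond roughly to the following 🤗 `TrainingArguments` (a sketch for orientation only; `output_dir` is a placeholder and unspecified options are left at their defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mms-1b-all-bemgen-combined-m25f100-42-DAT-0.7",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size of 16
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=30.0,
    fp16=True,                       # "Native AMP" mixed precision
)
```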
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 8.4598 | 0.6711 | 100 | 2.9464 | 1.0000 |
| 2.7729 | 1.3423 | 200 | 0.7077 | 0.1597 |
| 1.5181 | 2.0134 | 300 | 0.3571 | 0.1035 |
| 1.3348 | 2.6846 | 400 | 0.3188 | 0.0923 |
| 1.2049 | 3.3557 | 500 | 0.3025 | 0.0865 |
| 1.1623 | 4.0268 | 600 | 0.2976 | 0.0840 |
| 1.155 | 4.6980 | 700 | 0.2886 | 0.0823 |
| 1.1933 | 5.3691 | 800 | 0.2844 | 0.0812 |
| 1.1425 | 6.0403 | 900 | 0.2796 | 0.0800 |
| 1.1856 | 6.7114 | 1000 | 0.2784 | 0.0787 |
| 1.1484 | 7.3826 | 1100 | 0.2726 | 0.0767 |
| 1.0699 | 8.0537 | 1200 | 0.2731 | 0.0778 |
| 1.1167 | 8.7248 | 1300 | 0.2711 | 0.0758 |
| 1.0612 | 9.3960 | 1400 | 0.2675 | 0.0754 |
| 1.07 | 10.0671 | 1500 | 0.2678 | 0.0750 |
| 1.0265 | 10.7383 | 1600 | 0.2682 | 0.0757 |
| 1.0364 | 11.4094 | 1700 | 0.2694 | 0.0756 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
AXERA-TECH/Speech-Translation.axera | AXERA-TECH | 2025-09-23T10:50:30Z | 0 | 0 | null |
[
"onnx",
"VAD",
"ASR",
"Translation",
"TTS",
"audio-to-audio",
"en",
"zh",
"base_model:FunAudioLLM/SenseVoiceSmall",
"base_model:quantized:FunAudioLLM/SenseVoiceSmall",
"license:mit",
"region:us"
] | audio-to-audio | 2025-09-18T11:35:18Z |
---
license: mit
language:
- en
- zh
pipeline_tag: audio-to-audio
base_model:
- FunAudioLLM/SenseVoiceSmall
- opus-mt-en-zh
- qwen2.5
- MeloTTS
tags:
- VAD
- ASR
- Translation
- TTS
---
# Speech-Translation.axera
Speech translation demo on Axera.
- [x] Python example
- [ ] C++ example
## Conversion Tool Links
If you are interested in model conversion, you can export the axmodel through the original repos.
How to convert from ONNX to axmodel:
- [ASR](https://github.com/AXERA-TECH/3D-Speaker-MT.axera/tree/main/model_convert)
- [MeloTTS](https://github.com/ml-inory/melotts.axera/tree/main/model_convert)
## Supported Platforms
- AX650N
## Features
Speech translation. The text-translation stage can use either a regular translation model or a Qwen LLM; the regular model currently supports English-to-Chinese only, while the LLM supports translation in both directions, with better results when outputting Chinese.
## Pipeline Components
- [ASR](https://github.com/AXERA-TECH/3D-Speaker-MT.axera/tree/main)
- Translation: the regular translation model opus-mt-en-zh, or the LLM qwen2.5
- [opus-mt-en-zh](https://github.com/AXERA-TECH/libtranslate.axera/tree/master): build the library files as described there and save them to libtranslate
- [qwen2.5](https://github.com/AXERA-TECH/ax-llm/tree/ax-context): build the library files as described there and save them to libaxllm
- [MeloTTS](https://github.com/ml-inory/melotts.axera/tree/main/cpp): build the library files as described there and save them to libmelotts
## On-board Deployment
- AX650N devices come preinstalled with Ubuntu 22.04
- Log in to the AX650N board as root
- Connect the device to the internet so that commands such as apt install and pip install work normally
- Verified device: AX650N DEMO Board
## Running the Python API
Verified with Python 3.10.
### ASR + opus-mt-en-zh + MeloTTS
1. Add the dynamic libraries:
```
export LD_LIBRARY_PATH=./libtranslate/:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=./libmelotts/install/:$LD_LIBRARY_PATH
```
2. Install the Python packages:
```
pip3 install -r requirements.txt
```
3. Run the following command on the development board (supported input audio formats: wav, mp3):
```
python3 ax_speech_translate_demo.py --audio_file wav/en.mp3 --output_dir output --output_tts output.wav
```
Parameter descriptions:
| Parameter | Description |
|-------|------|
| `--audio_file` | Path to the input audio file |
| `--output_dir` | Directory for saving results |
| `--output_tts` | Output audio file name |
The output is saved as a WAV file; an example result:
```
Source audio: wav/en.mp3
Source text: The tribal chieftain called for the boy and presented him with 50 pieces of gold.
Translated text: 部落酋长召唤了男孩 给他50块黄金
Generated audio: output/output.wav
```
4. Latency
On AX650N, the RTF is approximately 2.0:
```
e.g.:
Inference time for en.mp3: 13.04 seconds
- VAD + ASR processing time: 0.89 seconds
- Translate time: 3.95 seconds
- TTS time: 8.20 seconds
Audio duration: 7.18 seconds
RTF: 1.82
```
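The RTF here is simply the total pipeline time divided by the input audio duration; a quick check with the numbers from the log above:

```python
# Real-time factor: total pipeline time / input audio duration.
inference_s = 0.89 + 3.95 + 8.20  # VAD+ASR, translation, and TTS times above
audio_s = 7.18
print(f"RTF: {inference_s / audio_s:.2f}")  # ~1.82
```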
### ASR + Qwen2.5 + MeloTTS
1. Add the dynamic library:
```
export LD_LIBRARY_PATH=./libmelotts/install/:$LD_LIBRARY_PATH
```
2. Install the Python packages:
```
pip3 install -r requirements.txt
```
3. Download the model
Using Qwen2.5-1.5B as an example ([download link](https://huggingface.co/AXERA-TECH/Qwen2.5-1.5B-Instruct/tree/main/qwen2.5-1.5b-ctx-ax650)), copy the downloaded model to libaxllm/qwen2.5-1.5b-ctx-ax650/.
4. Run the following commands on the development board (supported input audio formats: wav, mp3):
```
cd libaxllm
# Start the tokenizer server with context support
python3 qwen2.5_tokenizer_uid.py
# Start the API
sh run_qwen2.5_1.5b_ctx_ax650_api.sh
cd ..
# Run the pipeline demo
python3 ax_speech_translate_demo_qwen_api.py --audio_file wav/en.mp3 --output_dir output --output_tts output.wav
```
Note: this requires three terminal windows.
Parameter descriptions:
| Parameter | Description |
|-------|------|
| `--audio_file` | Path to the input audio file |
| `--output_dir` | Directory for saving results |
| `--output_tts` | Output audio file name |
The output is saved as a WAV file; an example result:
```
Source audio: wav/en.mp3
Source text: The tribal chieftain called for the boy and presented him with 50 pieces of gold.
Translated text: 部落酋长叫来了那个男孩,给了他50枚金币。
Generated audio: output/en2zh.wav
```
5. Latency
On AX650N, the RTF is approximately 2.0:
```
e.g.:
Inference time for en.mp3: 11.60 seconds
- VAD + ASR processing time: 0.91 seconds
- Translate time: 1.64 seconds
- TTS time: 9.04 seconds
Audio duration: 7.18 seconds
RTF: 1.62
```
References:
- [sensevoice.axera](https://github.com/ml-inory/sensevoice.axera/tree/main)
- [3D-Speaker.axera](https://github.com/AXERA-TECH/3D-Speaker.axera/tree/master)
- [libtranslate.axera](https://github.com/AXERA-TECH/libtranslate.axera/tree/master)
- [melotts.axera](https://github.com/ml-inory/melotts.axera/tree/main)
- [ax-llm](https://github.com/AXERA-TECH/ax-llm/tree/ax-context)
## Technical Discussion
- GitHub issues
- QQ group: 139953715
|
ReactiveAI/RxT-Alpha-Synthetic-Critic-MRL | ReactiveAI | 2025-09-23T10:50:17Z | 7 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"text-classification",
"license:apache-2.0",
"region:eu"
] | text-classification | 2025-09-22T18:17:53Z |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
clips/robbert-2023-large-ft | clips | 2025-09-23T10:50:09Z | 3 | 0 | transformers |
[
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"nl",
"dataset:clips/beir-nl-mmarco",
"dataset:clips/beir-nl-hotpotqa",
"dataset:clips/beir-nl-fever",
"arxiv:1910.09700",
"base_model:DTAI-KULeuven/robbert-2023-dutch-large",
"base_model:finetune:DTAI-KULeuven/robbert-2023-dutch-large",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-09-15T15:25:55Z |
---
library_name: transformers
license: mit
datasets:
- clips/beir-nl-mmarco
- clips/beir-nl-hotpotqa
- clips/beir-nl-fever
language:
- nl
base_model:
- DTAI-KULeuven/robbert-2023-dutch-large
pipeline_tag: sentence-similarity
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ReactiveAI/RxT-Alpha-Synthetic-Decoder-MRL | ReactiveAI | 2025-09-23T10:50:04Z | 7 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"text-generation",
"license:apache-2.0",
"region:eu"
] | text-generation | 2025-09-22T18:17:20Z |
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
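For reference, checkpoints pushed with this mixin are loaded through the model class's own `from_pretrained`; a minimal sketch, with `RxTDecoder` as a hypothetical stand-in, since the actual code is listed above as "More Information Needed":

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical architecture: the real class lives in the authors' code,
# which is not linked from this card.
class RxTDecoder(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 256):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        return self.proj(x)

# With the real class definition in scope, weights download like this:
# model = RxTDecoder.from_pretrained("ReactiveAI/RxT-Alpha-Synthetic-Decoder-MRL")
```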
|
dnth/ssf-retriever-modernbert-embed-base-v4
|
dnth
| 2025-09-23T10:49:56Z | 15 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:4524",
"loss:MultipleNegativesRankingLoss",
"dataset:dnth/ssf-train-valid-v4",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:nomic-ai/modernbert-embed-base",
"base_model:finetune:nomic-ai/modernbert-embed-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-18T09:09:30Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:4524
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: The Regional Manager/Export Manager is responsible for managing
the organisations product sales within a geographical area, typically divided
into multiple markets. He/She heads a team of subordinates, each managing an assigned
market. He conceives marketing and market development strategies for the team
and facilitates their execution by negotiating long-term business partnership
arrangements and arranging for relevant certifications, regulatory, and other,
obligations required for product export. He possesses international food regulations
and other trade-related knowledge. He has strong communication skills to lead
a team to meet organisational objectives, and an analytical mind to ensure the
effectiveness of business development and marketing efforts across the various
markets under his charge.
sentences:
- The Cleaning Operations Executive supports the oversight of daily cleaning activities
under the guidance of the manager. Responsibilities include coordinating resources,
overseeing work protocols, managing service agreements, and addressing operational
incidents. The role involves proposing and executing operational strategies to
enhance process efficiency and elevate service standards within the organization.
Primarily based in an office setting, the executive also conducts site inspections
as required. Effective communication with clients and stakeholders is essential,
along with being well-organized, attentive, personable, and adept at managing
multiple tasks simultaneously.
- The Regional Manager/Export Manager oversees product sales across a defined geographic
region comprising multiple markets. Leading a team of market managers, this role
involves devising and implementing marketing strategies, securing long-term partnerships,
and ensuring compliance with necessary export certifications and regulations.
The manager brings expertise in international food laws and trade regulations,
strong leadership and communication abilities, and analytical skills to drive
successful business development and marketing performance throughout the designated
territories.
- The Regional Manager/Export Manager coordinates domestic logistics and warehouse
operations within a single market. Instead of focusing on international sales
and export compliance, this role emphasizes supply chain management, inventory
control, and local distribution efficiency. The manager supervises warehouse staff,
optimizes internal processes, and ensures adherence to local safety standards,
requiring operational expertise rather than international trade knowledge or marketing
strategy development.
- source_sentence: The Senior Operator - Linear Media Operations maintains the continuity
of processes for linear media operations by operating equipment to receive, transform,
organise and transmit linear media content. He/She is responsible for executing
processes for ingesting, encoding and storing content. He also manages the playout,
transmission, signal recovery and distribution of content for the channel. The
work involves working in shifts involving coordination, monitoring and troubleshooting
of linear media operations. He will have to be alert and resolve anomalies or
system faults immediately once detected. He is required to have an eye for detail
and be effective at problem solving in order to manage issues and ensure continuity
of operations. He should be an expert in his domain and be proficient at stakeholder
management to deliver in coordination with multiple technical teams and stakeholders.
sentences:
- The Monitoring, Surveillance and Testing Manager leads and manages the execution
of surveillance, monitoring, and testing operations across the organisation to
ensure consistent compliance and risk mitigation. This role involves performing
routine audits and examinations to detect compliance violations, financial crimes,
and irregular market conduct, followed by detailed reporting and offering initial
guidance to relevant stakeholders regarding suspicious findings. The manager is
expected to be responsive beyond standard working hours as needed. Proficiency
in industry-standard monitoring tools and techniques, strong regulatory knowledge,
meticulous attention to detail, and a proactive approach to compliance leadership
are essential. Integrity, ethical conduct, and a process-oriented mindset underpin
this position.
- The Senior Operator - Linear Media Operations ensures seamless execution of linear
media workflows by handling equipment for content ingestion, encoding, storage,
and transmission. This role involves managing channel playout, signal recovery,
and distribution while working in rotating shifts that require constant monitoring
and troubleshooting. The incumbent must quickly identify and resolve system faults
or anomalies to maintain uninterrupted operations. Attention to detail, strong
problem-solving skills, and expertise in the domain are essential. Additionally,
the Senior Operator collaborates effectively with various technical teams and
stakeholders to coordinate media processes and uphold service continuity.
- The Senior Operator - Broadcast Engineering oversees the maintenance and installation
of broadcast transmission infrastructure, focusing on hardware setup, antenna
calibration, and signal strength optimization. This position requires managing
field technicians and coordinating repair schedules to ensure broadcast equipment
reliability across multiple sites. The role demands technical expertise in RF
systems and hands-on experience with physical network components rather than content
ingestion or playout operations. Shift work is involved, but the primary emphasis
lies on engineering support and infrastructure upkeep instead of linear media
content processing.
- source_sentence: The Assistant Producer - Broadcast supports various phases of broadcast
content production by coordinating the business and creative aspects of the production.
He/She is responsible for managing day-to-day aspects of the creative production
operations by liaising with key cast and crew, implementing schedule and script
changes under guidance and ensuring a smooth transition from production to post-production
phase. He also conducts location searches based on the briefs provided to suggest
suitable locations and materials while managing the necessary requirements from
scripts, budgetary constraints as well as regulatory requirements. He may be required
to manage the end-to-end casting process including conducting research to identify
upcoming talent, communicating requirements to talent agencies using casting calls,
coordination of the audition process as well as the contracting process with the
finalised cast. He is an individual contributor with primarily project management
and some creative responsibilities. He is expected to coordinate with a number
of internal and external stakeholders in order to perform these responsibilities.
He needs to have an eye for detail and be meticulous to be able to plan, organise
and manage his stakeholders well. A flair for creative storytelling will enable
him to provide inputs for script refinement as well as communicate creative changes
during production.
sentences:
- The Assistant Producer - Broadcast focuses on managing marketing campaigns within
the media industry, primarily handling promotional strategies and audience engagement
initiatives. This role involves designing advertising content, coordinating with
sales teams, and analyzing viewer metrics to optimize campaign performance. The
individual works closely with marketing agencies and digital platforms rather
than production crews, requiring expertise in digital marketing tools, SEO, and
content distribution channels. They are responsible for budget allocation and
reporting on campaign ROI, with limited involvement in creative storytelling or
script-related tasks. This position demands strong communication and analytical
skills but does not engage in production scheduling or casting activities.
- The Assistant Crewing Manager coordinates the hiring and assignment of seafarers
to vessels, ensuring all candidates meet organisational criteria and comply with
International Maritime Organisation (IMO) standards, STCW conventions, and the
Maritime Labour Convention. This role involves managing crew HR operations, negotiating
labor agreements with maritime unions, and preparing financial reports related
to crewing activities. The Assistant Crewing Manager also handles incident reporting
and conducts thorough investigations to recommend corrective measures. A strong
grasp of manpower planning and crew management, combined with analytical skills
and strategic foresight, is essential for effective deployment of sea-going personnel.
- The Assistant Producer - Broadcast plays a key role in supporting the production
lifecycle of broadcast content by overseeing both the creative and operational
elements. This individual coordinates daily production activities by collaborating
with cast and crew members, adjusting schedules and scripts as needed under supervision,
and ensuring a seamless handover to post-production. Responsibilities include
conducting location scouting aligned with briefs, balancing script demands, budget
limits, and compliance standards. The Assistant Producer may also handle the entire
casting workflow, from researching emerging talent and liaising with talent agencies
to managing auditions and finalizing contracts. Operating primarily as an individual
contributor, this role demands strong project coordination skills, attention to
detail, and the ability to engage multiple stakeholders effectively. A strong
sense of narrative creativity is essential to contribute to script development
and communicate creative adjustments during filming.
- source_sentence: The Travel Operations Manager is responsible for all aspects of
travel operations including reservations, ticketing, travel coordination and customer
support. He/She is also responsible for the assignment of tour assignments. He
has overall control over the day-to-day operations of all travel-related operations.
Service-oriented with strong multi-tasking skills, he supervises the operations
teams to address any customer feedback and vendor management issues. He is also
able to think clearly and deliver under emergency situations and ensures the smooth
conduct of tour operations in accordance to the itineraries of tour groups. He
may be required to maintain a flexible schedule and work on weekends, evenings
and public holidays in both outdoors and office environment.
sentences:
- The Travel Operations Manager oversees all travel-related functions such as booking
reservations, issuing tickets, coordinating travel plans, and providing customer
service support. This role entails managing tour schedules and supervising the
travel operations team to resolve client concerns and manage vendor relationships
effectively. The manager ensures seamless daily travel operations, maintaining
adherence to tour itineraries while demonstrating the ability to perform efficiently
during emergencies. Flexibility in working hours, including weekends, evenings,
and public holidays, is required, with duties carried out in both office and outdoor
settings.
- The Travel Marketing Manager leads promotional campaigns and strategic branding
efforts for travel packages, focusing on market research, digital advertising,
and customer engagement initiatives. This role involves developing marketing content,
managing social media channels, and collaborating with sales teams to boost travel
product visibility. The manager monitors campaign performance metrics and adjusts
strategies to optimize reach and return on investment, requiring strong creative
and analytical skills but limited involvement in operational travel logistics
or customer service. Flexible working hours may be necessary, primarily within
an office environment.
- The Product Risk Manager is responsible for assessing potential risks and overseeing
control measures based on analyses conducted by the Product Risk Analyst. This
role involves implementing risk mitigation strategies and defining the scope for
investigating risk management concerns. The manager ensures continuous monitoring
and upkeep of the product risk management framework, reviews documentation, and
evaluates existing controls to identify enhancement opportunities. They provide
strategic guidance during risk assessments and critically appraise risk reports,
ensuring compliance with established risk management standards across various
products. Utilizing strong analytical skills, the manager applies risk management
principles throughout product development, assessing the viability of recommendations
aimed at strengthening policies and frameworks. Anticipating challenges across
all stages of the product lifecycle, they develop proactive strategies to address
potential risks. Effective communication and collaboration with internal teams
and external partners are essential components of this role.
- source_sentence: The Reserving Actuarial Executive is responsible for analysing,
monitoring and reporting insurance products in the market as well as managing
internal product pricing and reinsurance negotiations. He/She conducts periodic
reviews of reserving parameters and prepares reporting in adherence to regulatory
requirements. He facilitates reinsurance administration to ensure it is completed
in an accurate and timely manner. The Reserving Actuarial Executive works with
relevant key stakeholders and handling data and information. He possesses strong
analytical abilities with a good understanding of actuarial techniques, standards
and assumptions. He also has strong knowledge of insurance operations and is able
to convey complex information clearly and concisely.
sentences:
- The Reserving Actuarial Executive is tasked with evaluating, overseeing, and communicating
insights on insurance products within the marketplace, alongside managing internal
pricing strategies and reinsurance agreements. This role involves regularly reviewing
reserving assumptions and generating reports compliant with regulatory standards.
The executive ensures efficient handling of reinsurance processes, collaborating
closely with key stakeholders and managing relevant data. A strong analytical
mindset, comprehensive grasp of actuarial methodologies, and familiarity with
insurance operations are essential, as is the ability to present complex actuarial
information in a clear and succinct manner.
- The Reinsurance Underwriting Executive is responsible for assessing and approving
reinsurance applications, setting underwriting guidelines, and negotiating terms
with external reinsurers. This role focuses on risk evaluation, policy acceptance
criteria, and claims review within the insurance industry. The executive collaborates
with sales and claims teams to ensure underwriting practices align with company
objectives and regulatory requirements. Strong risk assessment skills and expertise
in underwriting principles are necessary, along with excellent communication skills
to explain underwriting decisions to clients and internal partners.
- The Nurse Manager is tasked with overseeing the planning, coordination, and evaluation
of departmental operations and resource allocation. This role includes managing
nursing staff, controlling operating expenses, and adhering to budget constraints
to ensure the delivery of superior patient care. The Nurse Manager supervises
at least one clinical unit, fostering both professional growth and personal development
among nursing personnel. While primarily focused on leadership and administrative
duties, the role also encompasses clinical practice, education, and research activities.
The Nurse Manager mentors assistant nurse clinicians and junior staff to achieve
optimal patient outcomes and care experiences. This position functions across
diverse healthcare environments including acute, primary, community, integrated,
and long-term care settings, requiring a resourceful, prudent, and tactful approach.
datasets:
- dnth/ssf-train-valid-v4
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on nomic-ai/modernbert-embed-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) on the [ssf-train-valid-v4](https://huggingface.co/datasets/dnth/ssf-train-valid-v4) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) <!-- at revision d556a88e332558790b210f7bdbe87da2fa94a8d8 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [ssf-train-valid-v4](https://huggingface.co/datasets/dnth/ssf-train-valid-v4)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
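In plain PyTorch terms, modules (1) and (2) above amount to attention-masked mean pooling followed by L2 normalisation; a minimal sketch of the equivalent computation (tensor names are illustrative):

```python
import torch
import torch.nn.functional as F

def mean_pool_and_normalize(last_hidden_state: torch.Tensor,
                            attention_mask: torch.Tensor) -> torch.Tensor:
    # Zero out padding positions, then average over the sequence dimension.
    mask = attention_mask.unsqueeze(-1).float()        # (batch, seq_len, 1)
    summed = (last_hidden_state * mask).sum(dim=1)     # (batch, 768)
    counts = mask.sum(dim=1).clamp(min=1e-9)           # (batch, 1)
    # L2-normalise so that dot products equal cosine similarities.
    return F.normalize(summed / counts, p=2, dim=1)
```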
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("dnth/ssf-retriever-modernbert-embed-base-v4")
# Run inference
sentences = [
'The Reserving Actuarial Executive is responsible for analysing, monitoring and reporting insurance products in the market as well as managing internal product pricing and reinsurance negotiations. He/She conducts periodic reviews of reserving parameters and prepares reporting in adherence to regulatory requirements. He facilitates reinsurance administration to ensure it is completed in an accurate and timely manner. The Reserving Actuarial Executive works with relevant key stakeholders and handling data and information. He possesses strong analytical abilities with a good understanding of actuarial techniques, standards and assumptions. He also has strong knowledge of insurance operations and is able to convey complex information clearly and concisely.',
'The Reserving Actuarial Executive is tasked with evaluating, overseeing, and communicating insights on insurance products within the marketplace, alongside managing internal pricing strategies and reinsurance agreements. This role involves regularly reviewing reserving assumptions and generating reports compliant with regulatory standards. The executive ensures efficient handling of reinsurance processes, collaborating closely with key stakeholders and managing relevant data. A strong analytical mindset, comprehensive grasp of actuarial methodologies, and familiarity with insurance operations are essential, as is the ability to present complex actuarial information in a clear and succinct manner.',
'The Reinsurance Underwriting Executive is responsible for assessing and approving reinsurance applications, setting underwriting guidelines, and negotiating terms with external reinsurers. This role focuses on risk evaluation, policy acceptance criteria, and claims review within the insurance industry. The executive collaborates with sales and claims teams to ensure underwriting practices align with company objectives and regulatory requirements. Strong risk assessment skills and expertise in underwriting principles are necessary, along with excellent communication skills to explain underwriting decisions to clients and internal partners.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.9130, 0.5998],
# [0.9130, 1.0000, 0.6592],
# [0.5998, 0.6592, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### ssf-train-valid-v4
* Dataset: [ssf-train-valid-v4](https://huggingface.co/datasets/dnth/ssf-train-valid-v4) at [7becddc](https://huggingface.co/datasets/dnth/ssf-train-valid-v4/tree/7becddc4825d387876df739af0f1ac4422bddfba)
* Size: 4,524 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 57 tokens</li><li>mean: 168.91 tokens</li><li>max: 380 tokens</li></ul> | <ul><li>min: 61 tokens</li><li>mean: 137.16 tokens</li><li>max: 271 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 107.96 tokens</li><li>max: 192 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Store Assistant performs handling, storing and rotating of stock, and is responsible for updating stock levels in the inventory data system. He/She demonstrates awareness of the importance of inventory control and maintains adequate stock levels to avoid overstocking and obsolete or aged lots. He is conversant with the store layout and ensures proper housekeeping. He is expected to adhere to the organisation's standard operating procedures (SOPs), and safety, health and quality systems. He supports in implementation of continuous improvement initiatives in the workplace. He works in a warehouse or store environment and is responsible for the safe and efficient operation of the material handling equipment. He should be systematic, orderly and detail-oriented. He is expected to coordinate work with internal and external stakeholders to accomplish his work.</code> | <code>The Store Assistant is responsible for managing stock by handling, storing, and rotating inventory while accurately updating stock quantities in the inventory management system. This role requires a strong understanding of inventory control principles to maintain optimal stock levels, preventing excess or outdated items. Familiarity with the store layout and commitment to maintaining cleanliness and organization are essential. The Store Assistant follows established standard operating procedures (SOPs) and complies with safety, health, and quality guidelines. They also contribute to continuous improvement efforts within the workplace. Operating within a warehouse or storeroom setting, the role involves safely and efficiently handling material handling equipment. The ideal candidate is methodical, detail-conscious, and coordinates effectively with both internal teams and external partners to achieve operational goals.</code> | <code>The Store Clerk manages customer service activities by assisting shoppers, processing sales transactions, and organizing promotional displays within a retail environment. This role emphasizes strong interpersonal and communication skills to enhance the shopping experience, rather than inventory control. The Store Clerk is responsible for maintaining clean and attractive store aisles and ensuring merchandise is well-presented. They adhere to company policies on customer interaction and sales protocols but are not involved in material handling or warehouse operations. The position requires a friendly demeanor, quick problem-solving abilities, and collaboration with sales associates and suppliers to support store sales targets.</code> |
| <code>The Vessel Operations Manager advises on safe and efficient vessel and port operations within the confines of regulatory requirements and/or industry practices. He/She analyses operational data to optimise voyage performance, reduce fuel consumption and turnaround times in ports across a fleet of vessels within a region. He is an excellent communicator, has strong problem-solving and analytical skills, and may be required to work irregular hours.</code> | <code>The Vessel Operations Manager is responsible for overseeing safe and efficient vessel and port activities in compliance with regulatory standards and industry best practices. This role involves analyzing operational metrics to enhance voyage efficiency, minimize fuel usage, and decrease port turnaround durations for a fleet operating within a specific region. The manager must possess strong analytical abilities, effective communication skills, and problem-solving expertise, often requiring flexibility to work beyond standard hours.</code> | <code>The Port Logistics Coordinator manages the scheduling and allocation of cargo shipments within the port terminal, focusing on coordinating warehouse storage and transport logistics. This role emphasizes inventory management, shipment tracking, and collaboration with trucking companies to ensure timely delivery. Strong organizational skills and experience in supply chain software are essential, with standard working hours during weekdays.</code> |
| <code>The Senior Engineering Service Engineer/Senior Technical Service Engineer (Aircraft Maintenance) is responsible for provision of technical expertise to optimise engineering solutions for aircraft maintenance works using advanced engineering problem-solving techniques. He/She refines scope of maintenance based on customer requirements, and leads technical and programme reviews with customers and suppliers. He reviews special process control plans, leads inspections and functional checks for conformance of maintenance works to technical specifications and verifies technical reports and documentation. He reviews compliance of aircraft maintenance works with airworthiness and legislative requirements, while proposing enhancements to the organisation's standard operating procedures (SOPs), and safety, health and quality systems. He proactively contributes to the development of lean and sustainability practices, and conducts research and digital innovation in targeted areas for continuous pr...</code> | <code>The Senior Engineering Service Engineer/Senior Technical Service Engineer (Aircraft Maintenance) delivers expert technical guidance to enhance engineering strategies for aircraft upkeep by employing sophisticated problem-solving methods. This role involves tailoring maintenance scopes to meet client specifications and spearheading technical and program evaluations with both customers and suppliers. Responsibilities include assessing special process control plans, directing inspections and functional verifications to ensure maintenance aligns with technical standards, and validating technical documentation. The engineer ensures that aircraft maintenance complies with airworthiness regulations and legal mandates, while recommending improvements to the company’s SOPs and safety, health, and quality frameworks. Additionally, the role supports the advancement of lean methodologies and sustainability initiatives and leads research and digital innovation projects aimed at continuous process o...</code> | <code>The Senior Aircraft Maintenance Planner is responsible for coordinating and scheduling aircraft maintenance activities to ensure timely completion and minimal operational disruption. This role focuses on logistics management, resource allocation, and liaising with vendors to secure parts and services. The planner develops maintenance timelines based on regulatory requirements and operational priorities but does not engage in technical inspections or engineering problem-solving. Instead, emphasis is placed on administrative oversight, compliance documentation, and cost control measures. The role involves frequent collaboration with operations and supply chain teams and requires strong organizational and communication skills rather than deep technical expertise. The Senior Aircraft Maintenance Planner works mainly in an office environment, managing workflows and reporting on maintenance progress to senior management.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
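These parameters correspond to the standard loss constructor in Sentence Transformers; a minimal sketch of how the configuration above would be instantiated (the base model name is taken from this card):

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("nomic-ai/modernbert-embed-base")
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,                   # "scale" above
    similarity_fct=util.cos_sim,  # "cos_sim" above
)
```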
### Evaluation Dataset
#### ssf-train-valid-v4
* Dataset: [ssf-train-valid-v4](https://huggingface.co/datasets/dnth/ssf-train-valid-v4) at [7becddc](https://huggingface.co/datasets/dnth/ssf-train-valid-v4/tree/7becddc4825d387876df739af0f1ac4422bddfba)
* Size: 1,131 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 58 tokens</li><li>mean: 169.54 tokens</li><li>max: 403 tokens</li></ul> | <ul><li>min: 60 tokens</li><li>mean: 138.68 tokens</li><li>max: 268 tokens</li></ul> | <ul><li>min: 53 tokens</li><li>mean: 108.4 tokens</li><li>max: 196 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Baker is responsible for making a wide range of baked goods. He/She prepares ingredients needed for daily production of baked goods, operates baking equipment and tools during food preparation, and guides finishing touches on products. He also supports staffs compliance with hygiene, safety and other standards, and performs food and beverage operational tasks. He is expected to execute continuous improvement activities within the workstation, and follow customer service standards to escalate feedback on service challenges through proper escalation channels. Meticulous and service-oriented, he works independently with minimal supervision, and performs as a team player in both a kitchen and commercial setting. He should be comfortable working in hot and noisy environments and meeting strict production schedules.</code> | <code>The Baker is tasked with producing a diverse assortment of baked items by accurately preparing ingredients and skillfully using baking tools and machinery. This role involves overseeing the final presentation of products, ensuring adherence to hygiene and safety protocols, and contributing to food and beverage operations. The Baker actively participates in continuous process enhancements within their work area and upholds customer service standards by appropriately reporting any service-related issues. Demonstrating precision and a customer-focused approach, the Baker operates autonomously with limited oversight while collaborating effectively within a team environment. The position requires comfort with working in warm, noisy conditions and the ability to meet demanding production deadlines.</code> | <code>The Baker is responsible for managing inventory control and supply chain logistics within a food manufacturing environment. This role focuses on coordinating material procurement, optimizing storage solutions, and supervising distribution schedules to ensure timely delivery of raw ingredients. The Baker must possess strong organizational and analytical skills, maintain compliance with regulatory standards related to inventory management, and lead a team of warehouse staff. Operating primarily in an office and warehouse setting, the role demands proficiency in inventory software and the ability to work under pressure to meet operational targets.</code> |
| <code>The Water Quality Control Principal Chemist provides technical leadership on the daily operations of the laboratory and resolves water quality issues based on investigations conducted. He/She oversees the management of the inventory of resources such as de-mineralised water and chemicals, formulates remedial actions based on review of the laboratory test equipment troubleshooting, and reviews regulatory compliance reports and other prepared documentation to perform trending of laboratory test results and implement follow-up actions. He reviews water quality control laboratory policies and processes and improves Standard Operating Procedures (SOPs) to uphold high water quality standards. He leads technical capability development programmes, including on-the-job training and coaching, and formulates the technical training and development plans for the teams. He manages the Permits-to-Work for the team, and establishes Safe System of Work (SSoW) frameworks and practices for his area of wo...</code> | <code>The Water Quality Control Principal Chemist leads the laboratory’s daily functions by providing expert guidance on water quality investigations and problem resolution. This role includes overseeing resource inventory management, such as chemicals and de-mineralised water, and developing corrective measures following equipment troubleshooting analyses. The chemist reviews and enhances laboratory policies and SOPs to maintain superior water quality standards. Additionally, they spearhead the technical skill development of team members through training programs and coaching, while managing safety procedures including Permits-to-Work and Safe Systems of Work. In emergency scenarios, the chemist implements crisis response and recovery plans. Operating within a power plant’s water quality control laboratory, typically located in Singapore’s industrial zones, the role demands strong leadership and analytical capabilities to continuously optimize laboratory and plant water quality operations.</code> | <code>The Water Quality Control Principal Chemist focuses on managing the technical aspects of environmental compliance audits within the water treatment industry. This position primarily involves conducting regulatory inspections, preparing compliance documentation, and coordinating with external regulatory bodies to ensure adherence to environmental laws. The role requires expertise in environmental policy analysis rather than laboratory operations and hands-on water quality testing. The chemist also leads strategic initiatives for corporate sustainability reporting and oversees team operations across multiple sites, emphasizing administrative management over direct technical leadership or training within a laboratory setting.</code> |
| <code>The Waste Process Engineer is responsible for designing, optimising and managing processes and systems for the efficient handling, treatment, transformation and disposal of waste, including electronic waste (e-waste) and plastic waste, for an organisation. He/She is focused on minimising waste generation, exploring new technologies for enhancing waste management efficiency and material recovery, and recommending improved waste management systems and processes within an organisation. Based on life-cycle analyses and evaluation of current waste streams, systems and waste-to-resource initiatives, he will develop and implement new processes, ensuring compliance with environmental regulations. <br><br>The Waste Process Engineer must be meticulous, with an eye for detail and have strong analytical and research skills to stay up to date on best practices and circular economy strategies related to waste management. He is also expected to work collaboratively with cross-functional teams to promote su...</code> | <code>The Waste Process Engineer is tasked with developing, refining, and overseeing systems and procedures to ensure effective waste handling, treatment, and disposal, including managing electronic and plastic waste streams. This role emphasizes reducing waste output, adopting innovative technologies to improve waste processing and resource recovery, and advising on enhanced waste management strategies within the company. Utilizing life-cycle assessments and analyzing existing waste flows and recycling initiatives, the engineer implements new processes that comply with environmental standards. The Waste Process Engineer must possess strong analytical and investigative abilities, attention to detail, and collaborate across departments to advance sustainable practices aligned with circular economy principles.</code> | <code>The Environmental Compliance Officer is responsible for monitoring and enforcing adherence to environmental laws and regulations within the manufacturing industry. This role involves conducting audits, preparing compliance reports, and liaising with regulatory bodies to ensure that company operations meet all legal environmental requirements. The officer must have excellent communication skills, in-depth knowledge of environmental policies, and the ability to train staff on compliance matters. Unlike process engineering, this position focuses primarily on regulatory oversight rather than designing or optimizing waste treatment systems.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 32
- `gradient_accumulation_steps`: 32
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `gradient_checkpointing`: True
- `batch_sampler`: no_duplicates
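For reproducibility, the non-default values above map directly onto `SentenceTransformerTrainingArguments`; a minimal sketch under that assumption (`output_dir` is a placeholder, not taken from this card):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="ssf-retriever-output",  # placeholder
    eval_strategy="epoch",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=32,
    learning_rate=2e-5,
    weight_decay=0.01,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    gradient_checkpointing=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```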
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 32
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:------:|:-------------:|:---------------:|
| 0.5654 | 5 | 0.1496 | - |
| 1.0 | 9 | - | 0.0082 |
| 1.1131 | 10 | 0.0134 | - |
| 1.6784 | 15 | 0.0038 | - |
| 2.0 | 18 | - | 0.0047 |
| 2.2261 | 20 | 0.0037 | - |
| 2.7915 | 25 | 0.0021 | - |
| 3.0 | 27 | - | 0.0032 |
| 3.3392 | 30 | 0.0014 | - |
| 3.9046 | 35 | 0.0022 | - |
| 4.0 | 36 | - | 0.0027 |
| 4.4523 | 40 | 0.0015 | - |
| **5.0** | **45** | **0.0017** | **0.0027** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.8
- Sentence Transformers: 5.1.0
- Transformers: 4.55.0
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
clips/e5-large-v2-t2t-nl
|
clips
| 2025-09-23T10:44:19Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"nl",
"dataset:clips/beir-nl-mmarco",
"dataset:clips/beir-nl-hotpotqa",
"dataset:clips/beir-nl-fever",
"arxiv:2509.12340",
"base_model:clips/e5-large-v2-t2t",
"base_model:finetune:clips/e5-large-v2-t2t",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-15T15:21:55Z |
---
library_name: transformers
license: mit
datasets:
- clips/beir-nl-mmarco
- clips/beir-nl-hotpotqa
- clips/beir-nl-fever
language:
- nl
base_model:
- clips/e5-large-v2-t2t
pipeline_tag: sentence-similarity
---
# E5-large-v2-t2t-nl
This model is a fine-tuned version of [clips/e5-large-v2-t2t](https://huggingface.co/clips/e5-large-v2-t2t).
## Usage
Below is an example of encoding queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = [
'query: hoeveel eiwitten moet een vrouw eten',
'query: top definieer',
"passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
"passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
]
tokenizer = AutoTokenizer.from_pretrained('clips/e5-large-v2-t2t-nl')
model = AutoModel.from_pretrained('clips/e5-large-v2-t2t-nl')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
Below is an example of usage with the `sentence_transformers` library.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('clips/e5-large-v2-t2t-nl')
input_texts = [
'query: hoeveel eiwitten moet een vrouw eten',
'query: top definieer',
"passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
"passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
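Since the embeddings are normalised, relevance scores follow directly; a short follow-up to the snippet above, assuming sentence-transformers >= 3.0 where `SentenceTransformer.similarity` is available (the `* 100` scaling mirrors the transformers example):

```python
# Query-passage cosine similarities, scaled as in the transformers example.
scores = model.similarity(embeddings[:2], embeddings[2:]) * 100
print(scores.tolist())
```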
## Benchmark Evaluation
Results on MTEB-NL (models introduced in [our paper](https://arxiv.org/abs/2509.12340) and the best model per size category are highlighted in bold):
| Model | Prm | Cls | MLCls | PCls | Rrnk | Rtr | Clust | STS | AvgD | AvgT |
|---------------------------------------|------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| **Num. Datasets (→)** | | 12 | 3 | 2 | 1 | 12 | 8 | 2 | 40 | |
| **Supervised (small, <100M)** | | | | | | | | | | |
| **e5-small-v2-t2t** | 33M | 53.7 | 38.5 | 74.5 | 85.9 | 45.0 | 24.1 | 74.3 | 46.9 | 56.6 |
| **e5-small-v2-t2t-nl** | 33M | 55.3 | 40.9 | 74.9 | 86.0 | 49.9 | 28.0 | 74.1 | 49.8 | 58.4 |
| **e5-small-trm** | 41M | 56.3 | 43.5 | **76.5** | **87.3** | 53.1 | 28.2 | 74.2 | 51.4 | 59.9 |
| **e5-small-trm-nl** | 41M | **58.2** | **44.7** | 76.0 | 87.1 | **56.0** | **32.2** | **74.6** | **53.8** | **61.3** |
| **Supervised (base, <305M)** | | | | | | | | | | |
| granite-embedding-107m-multilingual | 107M | 53.9 | 41.8 | 70.1 | 84.7 | 50.2 | 29.8 | 68.4 | 49.4 | 57.0 |
| **e5-base-v2-t2t** | 109M | 54.4 | 40.3 | 73.3 | 85.6 | 46.2 | 25.5 | 73.2 | 47.8 | 56.9 |
| **e5-base-v2-t2t-nl** | 109M | 53.9 | 41.5 | 72.5 | 84.0 | 46.4 | 26.9 | 69.3 | 47.8 | 56.3 |
| multilingual-e5-small | 118M | 56.3 | 43.5 | 76.5 | 87.1 | 53.1 | 28.2 | 74.2 | 51.4 | 59.8 |
| paraphrase-multilingual-MiniLM-L12-v2 | 118M | 55.0 | 38.1 | 78.2 | 80.6 | 37.7 | 29.6 | 76.3 | 46.3 | 56.5 |
| **RobBERT-2023-base-ft** | 124M | 58.1 | 44.6 | 72.7 | 84.7 | 51.6 | 32.9 | 68.5 | 52.0 | 59.0 |
| **e5-base-trm** | 124M | 58.1 | 44.4 | 76.7 | 88.3 | 55.8 | 28.1 | 74.9 | 52.9 | 60.9 |
| **e5-base-trm-nl** | 124M | **59.6** | **45.9** | 78.4 | 87.5 | 56.5 | **34.3** | 75.8 | **55.0** | **62.6** |
| potion-multilingual-128M | 128M | 51.8 | 40.0 | 60.4 | 80.3 | 35.7 | 26.1 | 62.0 | 42.6 | 50.9 |
| multilingual-e5-base | 278M | 58.2 | 44.4 | 76.7 | **88.4** | 55.8 | 27.7 | 74.9 | 52.8 | 60.9 |
| granite-embedding-278m-multilingual | 278M | 54.6 | 41.8 | 71.0 | 85.6 | 52.4 | 30.3 | 68.9 | 50.5 | 58.0 |
| paraphrase-multilingual-mpnet-base-v2 | 278M | 58.1 | 40.5 | **81.9** | 82.3 | 41.4 | 30.8 | 79.3 | 49.2 | 59.2 |
| Arctic-embed-m-v2.0 | 305M | 54.4 | 42.6 | 66.6 | 86.2 | 51.8 | 26.5 | 64.9 | 49.1 | 56.1 |
| gte-multilingual-base | 305M | 59.1 | 37.7 | 77.8 | 82.3 | **56.8** | 31.3 | **78.6** | 53.8 | 60.5 |
| **Supervised (large, >305M)** | | | | | | | | | | |
| **e5-large-v2-t2t** | 335M | 55.7 | 41.4 | 75.7 | 86.6 | 49.9 | 25.5 | 74.0 | 49.5 | 58.4 |
| **e5-large-v2-t2t-nl** | 335M | 57.3 | 42.4 | 76.9 | 86.9 | 50.8 | 27.7 | 74.1 | 51.7 | 59.4 |
| **RobBERT-2023-large-ft** | 355M | 59.3 | 45.2 | 68.7 | 82.3 | 48.3 | 31.6 | 70.6 | 51.0 | 58.0 |
| **e5-large-trm** | 355M | 60.2 | 45.4 | 80.3 | 90.3 | 59.0 | 28.7 | 78.8 | 55.1 | 63.3 |
| **e5-large-trm-nl** | 355M | **62.2** | **48.0** | **81.4** | 87.2 | 58.2 | 35.6 | 78.2 | **57.0** | **64.4** |
| multilingual-e5-large | 560M | 60.2 | 45.4 | 80.3 | **90.3** | 59.1 | 29.5 | 78.8 | 55.3 | 63.4 |
| Arctic-embed-l-v2.0 | 568M | 59.3 | 45.2 | 74.2 | 88.2 | 59.0 | 29.8 | 71.7 | 54.3 | 61.1 |
| bge-m3 | 568M | 60.7 | 44.2 | 78.3 | 88.7 | **60.0** | 29.2 | 78.1 | 55.4 | 63.1 |
| jina-embeddings-v3 | 572M | 61.7 | 38.9 | 76.8 | 78.5 | 59.1 | **38.9** | **84.8** | **57.0** | 62.7 |
## Citation Information
If you find our paper, benchmark, or models helpful, please consider citing us as follows:
```latex
@misc{banar2025mtebnle5nlembeddingbenchmark,
title={MTEB-NL and E5-NL: Embedding Benchmark and Models for Dutch},
author={Nikolay Banar and Ehsan Lotfi and Jens Van Nooten and Cristina Arhiliuc and Marija Kliocaite and Walter Daelemans},
year={2025},
eprint={2509.12340},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2509.12340},
}
```
[//]: # (https://arxiv.org/abs/2509.12340)
|
clips/e5-base-v2-t2t-nl
|
clips
| 2025-09-23T10:35:43Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"generated_from_trainer",
"sentence-similarity",
"nl",
"dataset:clips/beir-nl-mmarco",
"dataset:clips/beir-nl-hotpotqa",
"dataset:clips/beir-nl-fever",
"arxiv:2509.12340",
"base_model:clips/e5-base-v2-t2t",
"base_model:finetune:clips/e5-base-v2-t2t",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-10T10:22:38Z |
---
library_name: transformers
base_model:
- clips/e5-base-v2-t2t
tags:
- generated_from_trainer
model-index:
- name: E5-base-v2-t2t-nl
results: []
license: mit
language:
- nl
pipeline_tag: sentence-similarity
datasets:
- clips/beir-nl-mmarco
- clips/beir-nl-hotpotqa
- clips/beir-nl-fever
---
# E5-base-v2-t2t-nl
This model is a fine-tuned version of [clips/e5-base-v2-t2t](https://huggingface.co/clips/e5-base-v2-t2t).
## Usage
Below is an example of encoding queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
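# The Dutch examples below ask "how much protein should a woman eat" and "define 'top' (summit)".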
input_texts = [
'query: hoeveel eiwitten moet een vrouw eten',
'query: top definieer',
"passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
"passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
]
tokenizer = AutoTokenizer.from_pretrained('clips/e5-base-v2-t2t-nl')
model = AutoModel.from_pretrained('clips/e5-base-v2-t2t-nl')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
Below is an example for usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('clips/e5-base-v2-t2t-nl')
input_texts = [
'query: hoeveel eiwitten moet een vrouw eten',
'query: top definieer',
"passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
"passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
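Continuing from the block above, a minimal sketch of scoring the two queries against the two passages; because `normalize_embeddings=True`, a plain dot product equals cosine similarity:
```python
# embeddings[:2] are the queries, embeddings[2:] the passages.
scores = embeddings[:2] @ embeddings[2:].T * 100
print(scores.tolist())  # higher score = more relevant passage
```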
## Benchmark Evaluation
Results on MTEB-NL (models introduced in [our paper](https://arxiv.org/abs/2509.12340) and the best model per size category are highlighted in bold; column abbreviations: Prm = parameters, Cls = classification, MLCls = multi-label classification, PCls = pair classification, Rrnk = reranking, Rtr = retrieval, Clust = clustering, STS = semantic textual similarity, AvgD/AvgT = average over datasets/tasks):
| Model | Prm | Cls | MLCls | PCls | Rrnk | Rtr | Clust | STS | AvgD | AvgT |
|---------------------------------------|------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| **Num. Datasets (→)** | | 12 | 3 | 2 | 1 | 12 | 8 | 2 | 40 | |
| **Supervised (small, <100M)** | | | | | | | | | | |
| **e5-small-v2-t2t** | 33M | 53.7 | 38.5 | 74.5 | 85.9 | 45.0 | 24.1 | 74.3 | 46.9 | 56.6 |
| **e5-small-v2-t2t-nl** | 33M | 55.3 | 40.9 | 74.9 | 86.0 | 49.9 | 28.0 | 74.1 | 49.8 | 58.4 |
| **e5-small-trm** | 41M | 56.3 | 43.5 | **76.5** | **87.3** | 53.1 | 28.2 | 74.2 | 51.4 | 59.9 |
| **e5-small-trm-nl** | 41M | **58.2** | **44.7** | 76.0 | 87.1 | **56.0** | **32.2** | **74.6** | **53.8** | **61.3** |
| **Supervised (base, <305M)** | | | | | | | | | | |
| granite-embedding-107m-multilingual | 107M | 53.9 | 41.8 | 70.1 | 84.7 | 50.2 | 29.8 | 68.4 | 49.4 | 57.0 |
| **e5-base-v2-t2t** | 109M | 54.4 | 40.3 | 73.3 | 85.6 | 46.2 | 25.5 | 73.2 | 47.8 | 56.9 |
| **e5-base-v2-t2t-nl** | 109M | 53.9 | 41.5 | 72.5 | 84.0 | 46.4 | 26.9 | 69.3 | 47.8 | 56.3 |
| multilingual-e5-small | 118M | 56.3 | 43.5 | 76.5 | 87.1 | 53.1 | 28.2 | 74.2 | 51.4 | 59.8 |
| paraphrase-multilingual-MiniLM-L12-v2 | 118M | 55.0 | 38.1 | 78.2 | 80.6 | 37.7 | 29.6 | 76.3 | 46.3 | 56.5 |
| **RobBERT-2023-base-ft** | 124M | 58.1 | 44.6 | 72.7 | 84.7 | 51.6 | 32.9 | 68.5 | 52.0 | 59.0 |
| **e5-base-trm** | 124M | 58.1 | 44.4 | 76.7 | 88.3 | 55.8 | 28.1 | 74.9 | 52.9 | 60.9 |
| **e5-base-trm-nl** | 124M | **59.6** | **45.9** | 78.4 | 87.5 | 56.5 | **34.3** | 75.8 | **55.0** | **62.6** |
| potion-multilingual-128M | 128M | 51.8 | 40.0 | 60.4 | 80.3 | 35.7 | 26.1 | 62.0 | 42.6 | 50.9 |
| multilingual-e5-base | 278M | 58.2 | 44.4 | 76.7 | **88.4** | 55.8 | 27.7 | 74.9 | 52.8 | 60.9 |
| granite-embedding-278m-multilingual | 278M | 54.6 | 41.8 | 71.0 | 85.6 | 52.4 | 30.3 | 68.9 | 50.5 | 58.0 |
| paraphrase-multilingual-mpnet-base-v2 | 278M | 58.1 | 40.5 | **81.9** | 82.3 | 41.4 | 30.8 | 79.3 | 49.2 | 59.2 |
| Arctic-embed-m-v2.0 | 305M | 54.4 | 42.6 | 66.6 | 86.2 | 51.8 | 26.5 | 64.9 | 49.1 | 56.1 |
| gte-multilingual-base | 305M | 59.1 | 37.7 | 77.8 | 82.3 | **56.8** | 31.3 | **78.6** | 53.8 | 60.5 |
| **Supervised (large, >305M)** | | | | | | | | | | |
| **e5-large-v2-t2t** | 335M | 55.7 | 41.4 | 75.7 | 86.6 | 49.9 | 25.5 | 74.0 | 49.5 | 58.4 |
| **e5-large-v2-t2t-nl** | 335M | 57.3 | 42.4 | 76.9 | 86.9 | 50.8 | 27.7 | 74.1 | 51.7 | 59.4 |
| **RobBERT-2023-large-ft** | 355M | 59.3 | 45.2 | 68.7 | 82.3 | 48.3 | 31.6 | 70.6 | 51.0 | 58.0 |
| **e5-large-trm** | 355M | 60.2 | 45.4 | 80.3 | 90.3 | 59.0 | 28.7 | 78.8 | 55.1 | 63.3 |
| **e5-large-trm-nl** | 355M | **62.2** | **48.0** | **81.4** | 87.2 | 58.2 | 35.6 | 78.2 | **57.0** | **64.4** |
| multilingual-e5-large | 560M | 60.2 | 45.4 | 80.3 | **90.3** | 59.1 | 29.5 | 78.8 | 55.3 | 63.4 |
| Arctic-embed-l-v2.0 | 568M | 59.3 | 45.2 | 74.2 | 88.2 | 59.0 | 29.8 | 71.7 | 54.3 | 61.1 |
| bge-m3 | 568M | 60.7 | 44.2 | 78.3 | 88.7 | **60.0** | 29.2 | 78.1 | 55.4 | 63.1 |
| jina-embeddings-v3 | 572M | 61.7 | 38.9 | 76.8 | 78.5 | 59.1 | **38.9** | **84.8** | **57.0** | 62.7 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 1.0
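For readers reproducing this setup, a sketch of how the list above maps onto `transformers.TrainingArguments` (the output directory is a placeholder; all other values are taken from the list):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="e5-base-v2-t2t-nl",           # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",                      # AdamW, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="constant_with_warmup",
    warmup_ratio=0.25,
    num_train_epochs=1.0,
)
```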
### Framework versions
- Transformers 4.56.1
- Pytorch 2.7.1+cu128
- Datasets 4.0.0
- Tokenizers 0.22.0
## Citation Information
If you find our paper, benchmark, or models helpful, please consider citing us as follows:
```latex
@misc{banar2025mtebnle5nlembeddingbenchmark,
title={MTEB-NL and E5-NL: Embedding Benchmark and Models for Dutch},
author={Nikolay Banar and Ehsan Lotfi and Jens Van Nooten and Cristina Arhiliuc and Marija Kliocaite and Walter Daelemans},
year={2025},
eprint={2509.12340},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2509.12340},
}
```
[//]: # (https://arxiv.org/abs/2509.12340)
|
use08168/qwen3-8b-jobs-style-lora
|
use08168
| 2025-09-23T10:35:35Z | 10 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen3",
"lora",
"korean",
"base_model:Qwen/Qwen3-8B",
"base_model:adapter:Qwen/Qwen3-8B",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T04:08:52Z |
---
library_name: peft
base_model: Qwen/Qwen3-8B
tags:
- qwen3
- lora
- korean
license: apache-2.0
---
# qwen3-8b-jobs-style-lora
- Base: `Qwen/Qwen3-8B`
- Type: **LoRA adapter** (QLoRA)
- Tone: short and emphatic, built on contrasts and three-beat repetition. Impersonation and direct quotation are prohibited.
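A minimal loading sketch, assuming the standard `peft` adapter workflow (not documented in this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
# Attach the LoRA adapter on top of the frozen base model.
model = PeftModel.from_pretrained(base, "use08168/qwen3-8b-jobs-style-lora")
```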
|
Andrei1980/sft-ygpt-adapter-systems
|
Andrei1980
| 2025-09-23T10:28:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T10:28:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RMCian/AceInstruct-1.5B-Gensyn-Swarm-fast_rabid_ram
|
RMCian
| 2025-09-23T10:25:21Z | 114 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am fast_rabid_ram",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T15:55:39Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am fast_rabid_ram
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kennydaglish/Qwen3-0.6B-Gensyn-Swarm-majestic_untamed_shrimp
|
kennydaglish
| 2025-09-23T10:24:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am majestic_untamed_shrimp",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T10:24:01Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am majestic_untamed_shrimp
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fatepurriyaz/AceInstruct-1.5B-Gensyn-Swarm-small_deft_jaguar
|
fatepurriyaz
| 2025-09-23T10:24:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am small_deft_jaguar",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T10:23:24Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am small_deft_jaguar
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Shopnil09/AceInstruct-1.5B-Gensyn-Swarm-stinky_twitchy_heron
|
Shopnil09
| 2025-09-23T10:22:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am stinky_twitchy_heron",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T10:21:38Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am stinky_twitchy_heron
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xiaoyu2691/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fleecy_tenacious_hare
|
xiaoyu2691
| 2025-09-23T10:22:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am fleecy_tenacious_hare",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T10:21:42Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am fleecy_tenacious_hare
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758622705
|
poolkiltzn
| 2025-09-23T10:19:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T10:19:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.5
|
csikasote
| 2025-09-23T10:19:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T09:16:23Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-42-DAT-0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m25f100-42-DAT-0.5
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2686
- Cer: 0.0756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
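A sketch of the batch-size arithmetic implied by the list above (assuming a single GPU, since the log does not state the device count):
```python
per_device_train_batch_size = 8
gradient_accumulation_steps = 2
num_devices = 1  # assumption
total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_devices
assert total_train_batch_size == 16  # matches the reported total
```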
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 8.3815 | 0.6711 | 100 | 2.9457 | 1.0 |
| 2.7294 | 1.3423 | 200 | 0.6856 | 0.1554 |
| 1.4808 | 2.0134 | 300 | 0.3520 | 0.1023 |
| 1.2927 | 2.6846 | 400 | 0.3147 | 0.0904 |
| 1.1633 | 3.3557 | 500 | 0.2934 | 0.0828 |
| 1.112 | 4.0268 | 600 | 0.2885 | 0.0797 |
| 1.0816 | 4.6980 | 700 | 0.2817 | 0.0792 |
| 1.0822 | 5.3691 | 800 | 0.2798 | 0.0794 |
| 1.0403 | 6.0403 | 900 | 0.2815 | 0.0805 |
| 1.1011 | 6.7114 | 1000 | 0.2835 | 0.0805 |
| 1.061 | 7.3826 | 1100 | 0.2782 | 0.0784 |
| 0.9707 | 8.0537 | 1200 | 0.2800 | 0.0801 |
| 1.0327 | 8.7248 | 1300 | 0.2808 | 0.0780 |
| 0.9957 | 9.3960 | 1400 | 0.2752 | 0.0772 |
| 0.9974 | 10.0671 | 1500 | 0.2755 | 0.0776 |
| 0.9329 | 10.7383 | 1600 | 0.2732 | 0.0766 |
| 0.9618 | 11.4094 | 1700 | 0.2750 | 0.0770 |
| 0.9352 | 12.0805 | 1800 | 0.2714 | 0.0764 |
| 0.9623 | 12.7517 | 1900 | 0.2714 | 0.0763 |
| 0.9589 | 13.4228 | 2000 | 0.2687 | 0.0755 |
| 0.9831 | 14.0940 | 2100 | 0.2712 | 0.0769 |
| 0.8951 | 14.7651 | 2200 | 0.2696 | 0.0756 |
| 1.0025 | 15.4362 | 2300 | 0.2687 | 0.0759 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
great1123/EXAONE-4.0-1.2B-symptom-disease_kor_v1
|
great1123
| 2025-09-23T10:19:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"exaone4",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"ko",
"base_model:LGAI-EXAONE/EXAONE-4.0-1.2B",
"base_model:finetune:LGAI-EXAONE/EXAONE-4.0-1.2B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T10:05:57Z |
---
base_model: LGAI-EXAONE/EXAONE-4.0-1.2B
tags:
- text-generation-inference
- transformers
- unsloth
- exaone4
license: apache-2.0
language:
- en
- ko
---
# Uploaded finetuned model
- **Developed by:** great1123
- **License:** apache-2.0
- **Finetuned from model:** LGAI-EXAONE/EXAONE-4.0-1.2B
This model is based on **[LGAI-EXAONE/EXAONE-4.0-1.2B](https://huggingface.co/LGAI-EXAONE/EXAONE-4.0-1.2B)** and was trained with **LoRA-based supervised fine-tuning (SFT)** to generate likely diseases and short explanations from Korean symptom descriptions.
---
## 📖 Training Data
### 1. Source dataset
- Uses **[dux-tecblic/symptom-disease-dataset](https://huggingface.co/datasets/dux-tecblic/symptom-disease-dataset)**
- Maps symptom texts to disease labels
- Builds the reverse `id2disease` mapping from mapping.json
### 2. Distilled dataset
- Contains the fields `symptom_ko`, `diagnosis_ko`, `reasoning_ko`, `explanation_ko`
- Reworked as symptom description → several possible diagnoses + short explanation
### 3. Instruction template
The training data was converted to the **system/user/assistant** format:
- Korean example: `"주어진 증상으로부터 예상되는 질환과 간단한 설명을 출력하세요."` ("From the given symptoms, output the expected diseases and a short explanation.")
- English example: `"Given the following symptoms, provide possible diagnoses and a short note."`
In total, roughly **8k samples** were used for training.
---
## 🛠️ Training Method
- **Base Model:** `LGAI-EXAONE/EXAONE-4.0-1.2B`
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation)
- **Framework:** Hugging Face TRL + Unsloth
### Training parameters
- Epochs: 3
- Batch size: 2 (gradient_accumulation_steps=8 → effective batch size ≈16)
- Learning rate: 2e-4
- Scheduler: linear
- Optimizer: AdamW (8bit)
- Weight decay: 0.01
- Precision: bfloat16
- LoRA config: r=16, alpha=32, dropout=0.05, target_modules=["q_proj", "v_proj"]
During training, `system`/`user` tokens are masked to -100 and excluded from the loss, so **only the assistant response is learned**.
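A minimal sketch of the LoRA setup described above, using the `peft` library (the `task_type` is an assumption for causal-LM fine-tuning; everything else comes from the list):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                   # LoRA rank
    lora_alpha=32,          # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",  # assumption: standard causal-LM fine-tuning
)
# Prompt tokens masked to -100 are ignored by PyTorch's cross-entropy loss,
# which is how only the assistant response contributes to training.
```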
---
## 🚀 Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "great1123/EXAONE-4.0-1.2B-symptom-disease_kor_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
messages = [
{"role": "system", "content": "아래 증상을 보고 질환을 추정하라."},
{"role": "user", "content": "피로, 체중 증가, 손발 차가움, 무기력, 현기증"}
]
text = tokenizer.apply_chat_template(
messages,
tokenize = False,
add_generation_prompt = True, # Must add for generation
).removeprefix('<bos>')
from transformers import TextStreamer
_ = model.generate(
**tokenizer(text, return_tensors = "pt").to("cuda"),
max_new_tokens = 300,
temperature = 0.3, top_p = 0.95, top_k = 20, repetition_penalty=1.5,
streamer = TextStreamer(tokenizer, skip_prompt = True),
)
```
Recommended generation parameters: `temperature=0.3, top_p=0.95, top_k=20, repetition_penalty=1.5`.
## ⚠️ Disclaimer
- This model is **not a substitute for medical advice.**
- Actual diagnosis and treatment must always be obtained from a **medical professional**.
- This model is recommended for research and educational use only.
---
## 📜 License
- Base model: subject to the LGAI-EXAONE/EXAONE-4.0-1.2B license
- Dataset: [dux-tecblic/symptom-disease-dataset](https://huggingface.co/datasets/dux-tecblic/symptom-disease-dataset)
- Final model: recommended for research / non-commercial use
---
## 📚 Citation
If you use this model in your research, please cite it as follows:
```bibtex
@misc{exaone_symptom_disease_2024,
title = {EXAONE-4.0-1.2B-symptom-disease_kor_v1},
author = {great1123},
year = {2024},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/great1123/EXAONE-4.0-1.2B-symptom-disease_kor_v1}}
}
```
This exaone4 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sunitapalubanjar/classifier-chapter4
|
sunitapalubanjar
| 2025-09-23T10:06:27Z | 26 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-16T19:42:05Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: classifier-chapter4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classifier-chapter4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3160
- Accuracy: 0.9203
- F1: 0.9202
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: fused AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3508 | 1.0 | 1250 | 0.3206 | 0.9070 | 0.9068 |
| 0.2061 | 2.0 | 2500 | 0.3160 | 0.9203 | 0.9202 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
langtuphongtran/dich-vu-thay-man-hinh-dien-thoai-iphone-uy-tin
|
langtuphongtran
| 2025-09-23T10:05:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T09:56:31Z |
<p>In today's technology-driven world, the iPhone is not just a communication device but also an indispensable tool for work and entertainment. However, iPhone screens are prone to problems such as cracks, display lines, or unresponsive touch caused by impacts or wear. When that happens, finding <a href="https://justpaste.it/u/thaymhip24h">a reputable iPhone screen replacement service</a> is the key to keeping your device running reliably and preserving its long-term value.</p>
<h2><strong>Why Choose a Reputable iPhone Screen Replacement Service?</strong></h2>
<p>A trustworthy repair shop not only restores your iPhone screen to like-new condition but also offers several notable benefits:</p>
<ul>
<li><strong>Genuine parts</strong>: Screens that meet Apple standards, compatible with every iPhone line from the iPhone 11 to the iPhone 16 Pro Max.</li>
<li><strong>Professional technicians</strong>: An experienced team that works accurately and quickly, avoiding further damage.</li>
<li><strong>Long-term warranty</strong>: Warranties from 6 months up to lifetime, including accidental drops, for complete peace of mind.</li>
</ul>
<p>At <strong>Bệnh Viện Điện Thoại, Laptop 24h</strong>, every screen replacement is carried out transparently. Your device is thoroughly inspected before repair, with no unexpected extra costs. With more than 20 years of experience, the center is a leading choice for thousands of customers in Ho Chi Minh City.</p>
<h2><strong>Signs Your iPhone Screen Needs Replacing</strong></h2>
<p>Not every scratch calls for an immediate screen replacement, but some signs mean you should act promptly:</p>
<ul>
<li><strong>Lines, spots, or dead pixels</strong>: These display faults can spread and degrade the user experience.</li>
<li><strong>Erratic or unresponsive touch</strong>: Swipes and taps lag, indicating a failing digitizer layer.</li>
<li><strong>Cracked glass</strong>: Even if the display still works, broken glass risks damaging internal components.</li>
<li><strong>Discoloration or ink bleed</strong>: Signs of serious screen damage that require immediate replacement.</li>
<li><strong>Light leakage or a loose screen</strong>: Usually caused by a hard impact or poor assembly from a previous repair.</li>
</ul>
<h2><strong>Advanced Screen Replacement Technology at Bệnh Viện Điện Thoại, Laptop 24h</strong></h2>
<p>Bệnh Viện Điện Thoại, Laptop 24h uses modern technology to guarantee repair quality:</p>
<ul>
<li><strong>Vacuum lamination machines</strong>: Ensure the new screen is fitted precisely, without air bubbles, preserving maximum sharpness.</li>
<li><strong>Multi-touch testing</strong>: Verifies that the screen responds smoothly and is fully compatible with the device.</li>
<li><strong>On-site service</strong>: Repairs at your home, saving time for busy customers.</li>
</ul>
<p>A screen replacement takes only 30-90 minutes, so you can get back to a like-new device quickly. The center also lends customers a phone free of charge during the repair, so your work is not interrupted.</p>
<h2><strong>iPhone Screen Replacement Prices: Transparent and Competitive</strong></h2>
<p>The cost of replacing an iPhone screen depends on the model, the extent of the damage, and the type of part you choose. To make it easy to compare, the <a href="https://m.ok.ru/profile/910176562322/statuses/158189073376658">iPhone screen replacement price list</a> at Bệnh Viện Điện Thoại, Laptop 24h is published openly for every model from the iPhone 11 to the iPhone 16. Prices are competitive with no hidden fees, plus a 10% discount when you book in advance. The center also commits to a 100% refund if you are not satisfied with the service.</p>
<h2><strong>Professional Service Commitments</strong></h2>
<p>Bệnh Viện Điện Thoại, Laptop 24h puts customer satisfaction first with the following commitments:</p>
<ul>
<li>Genuine components with a clear origin.</li>
<li>A transparent repair process with no part swapping.</li>
<li>Free consultation and device inspection, even if you have not yet decided to repair.</li>
<li>A lifetime warranty on genuine Apple screens, including accidental drops.</li>
<li>A 100% refund if the service does not meet your expectations.</li>
</ul>

<h2><strong>A Transparent iPhone Screen Replacement Process</strong></h2>
<p>Screen replacement at the center follows a methodical five-step process:</p>
<ol>
<li><strong>Device intake</strong>: Staff listen to the device's symptoms and record your requests.</li>
<li><strong>Inspection and quote</strong>: Technicians diagnose the fault and recommend a solution with transparent pricing.</li>
<li><strong>Screen replacement</strong>: The replacement is performed carefully using modern equipment.</li>
<li><strong>Final checks</strong>: Touch response, colors, True Tone, and Face ID are tested thoroughly before handover.</li>
<li><strong>After-sales support</strong>: The center follows up with customers to confirm service quality and answer questions.</li>
</ol>

<h2><strong>Choosing the Right Screen Type</strong></h2>
<p>Bệnh Viện Điện Thoại, Laptop 24h offers a wide range of screens, from budget components such as Incell JK to genuine Apple Like New panels, covering every need and budget. Genuine screens preserve Face ID and True Tone and deliver a like-new experience.</p>
<h2><strong>Contact Us Now for Special Offers</strong></h2>
<p>If your iPhone has a screen problem, don't hesitate! Come to the reputable iPhone screen replacement service at <strong>Bệnh Viện Điện Thoại, Laptop 24h</strong> for professional work, reasonable prices, and a long-term warranty. Call the hotline <strong>1900.0213</strong> now for a free consultation and a quick repair appointment. Your iPhone deserves to be cared for by leading experts!</p>
|
impresso-project/ocr-quality-assessor-unigram-light
|
impresso-project
| 2025-09-23T10:03:29Z | 5 | 0 |
transformers
|
[
"transformers",
"ocr_qa_assessment",
"token-classification",
"ocr",
"bloomfilter",
"unigram",
"impresso",
"quality-assessment",
"v1.0.6",
"custom_code",
"fr",
"de",
"license:gpl-3.0",
"autotrain_compatible",
"region:us"
] |
token-classification
| 2025-04-15T22:09:12Z |
---
library_name: transformers
language:
- fr
- de
license: gpl-3.0
tags:
- ocr
- bloomfilter
- unigram
- impresso
- quality-assessment
- v1.0.6
---
# Model Card for `impresso-project/ocr-quality-assessor-unigram-light`
## Overview
This model is a **lightweight OCR quality assessor** for historical French and German texts. It is a streamlined version of the original [`impresso-project/OCR-quality-assessment-unigram`](https://huggingface.co/impresso-project/OCR-quality-assessment-unigram), now accessible via a Hugging Face `pipeline` for convenient integration into downstream tasks.
It uses **Bloom filters** containing known word unigrams to evaluate text quality by measuring the proportion of known vs. unknown words in OCR outputs. It is part of the [Impresso Project](https://impresso-project.ch), which develops tools for media archive processing and exploration.
## Model Details
### Model Description
- **Developed by:** University of Zurich (UZH) from the [Impresso team](https://impresso-project.ch), an interdisciplinary project focused on historical media analysis across languages, time, and modalities. Funded by the Swiss National Science Foundation ([CRSII5_173719](http://p3.snf.ch/project-173719), [CRSII5_213585](https://data.snf.ch/grants/grant/213585)) and the Luxembourg National Research Fund (grant No. 17498891).
- **Model type:** Bloom filter–based scoring via a Transformers-compatible pipeline
- **Languages:** French (fr), German (de)
- **License:** GPL-3.0
- **Base resource:** [`impresso-project/OCR-quality-assessment-unigram`](https://huggingface.co/impresso-project/OCR-quality-assessment-unigram)
- **Interface:** Hugging Face `transformers` pipeline
- **Input format:** Raw text string
- **Output format:** Dictionary containing a float OCR-quality score
## How to Use
```python
from transformers import pipeline
MODEL_NAME = "impresso-project/ocr-quality-assessor-unigram-light"
ocrqa_pipeline = pipeline("ocr-qa-assessment", model=MODEL_NAME,
trust_remote_code=True,
device='cpu')
sentence = """En l'an 1348, au plus fort des ravages de la peste noire à travers l'Europe,
le Royaume de France se trouvait à la fois au bord du désespoir et face à une opportunité."""
score = ocrqa_pipeline(sentence)
print(score)
```
## Output Format
Returns a dictionary containing a single float value, the proportion of known tokens:
```python
{'ocr_quality_score': 0.76}
```
## Use Cases
- OCR pipeline evaluation and quality diagnostics
- Automated scoring of OCR segments or lines
- Quick feedback in web-based transcription and correction tools
## Dataset and Preprocessing
The Bloom filters used internally are derived from:
- Wikipedia dumps (historical and modern)
- Impresso-specific lexical resources
Text normalization includes:
- Unicode NFKC normalization
- Digit masking (all digits replaced with `0`)
- Punctuation and symbol removal
- Lowercasing
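The normalization and scoring logic can be approximated with a short sketch. This is an illustration under stated assumptions, not the model's actual implementation: a plain Python set stands in for the Bloom filter, and the regexes approximate the steps listed above.
```python
import re
import unicodedata

def normalize(text: str) -> list[str]:
    # NFKC normalization, digit masking, punctuation/symbol removal, lowercasing
    text = unicodedata.normalize("NFKC", text)
    text = re.sub(r"\d", "0", text)        # mask every digit with 0
    text = re.sub(r"[^\w\s]", " ", text)   # drop punctuation and symbols
    return text.lower().split()

def ocr_quality_score(text: str, known_words: set[str]) -> float:
    # Proportion of tokens found in the lexicon (a Bloom filter in the real model)
    tokens = normalize(text)
    if not tokens:
        return 0.0
    return sum(tok in known_words for tok in tokens) / len(tokens)

# Toy lexicon standing in for the Wikipedia/Impresso-derived Bloom filters
lexicon = {"en", "l", "an", "0000", "au", "plus", "fort", "des", "ravages"}
print(ocr_quality_score("En l'an 1348, au plus fort des ravages", lexicon))  # 1.0
```
Note that because Bloom filters can return false positives but never false negatives, the real score can only overestimate the share of known words relative to an exact lexicon lookup.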
## Limitations
- Currently supports only **French** and **German**
- Does not provide spell correction suggestions
- False positives are possible (due to the nature of Bloom filters)
- Quality score is approximate and works best at the **segment** or **line** level
## Environmental Impact
- **Hardware:** Standard laptop / CPU inference
- **Training:** Reuse of existing Bloom filters; minimal additional compute
- **Estimated Emissions:** < 0.01 kg CO₂eq
## Contact
- Website: [https://impresso-project.ch](https://impresso-project.ch)
<p align="center">
<img src="https://github.com/impresso/impresso.github.io/blob/master/assets/images/3x1--Yellow-Impresso-Black-on-White--transparent.png?raw=true" width="300" alt="Impresso Logo"/>
</p>
|
csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.3
|
csikasote
| 2025-09-23T10:02:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T09:15:36Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-42-DAT-0.3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m25f100-42-DAT-0.3
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2822
- Cer: 0.0803
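The usage sections below are still placeholders, so here is a minimal inference sketch. It is an assumption based on the wav2vec2/MMS architecture tagged above, not an official example, and the audio filename is hypothetical; MMS models expect 16 kHz mono audio.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.3",
)
print(asr("bemba_sample.wav")["text"])  # hypothetical 16 kHz audio file
```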
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
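For readers who want to approximate the setup, the values above map onto `TrainingArguments` roughly as follows. This is a sketch only: dataset loading, the CTC data collator, and the DAT-specific pieces are omitted, and the output directory is a placeholder.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./mms-1b-all-bemgen",  # placeholder path
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,     # effective train batch size of 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=30.0,
    fp16=True,                         # Native AMP mixed precision
)
```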
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 8.3032 | 0.6711 | 100 | 2.9447 | 1.0 |
| 2.6913 | 1.3423 | 200 | 0.6741 | 0.1549 |
| 1.4487 | 2.0134 | 300 | 0.3704 | 0.1092 |
| 1.2665 | 2.6846 | 400 | 0.3231 | 0.0945 |
| 1.1367 | 3.3557 | 500 | 0.2968 | 0.0847 |
| 1.084 | 4.0268 | 600 | 0.2954 | 0.0831 |
| 1.0535 | 4.6980 | 700 | 0.2886 | 0.0816 |
| 1.0404 | 5.3691 | 800 | 0.2823 | 0.0803 |
| 0.9913 | 6.0403 | 900 | 0.2781 | 0.0793 |
| 1.0239 | 6.7114 | 1000 | 0.2810 | 0.0800 |
| 0.9966 | 7.3826 | 1100 | 0.2730 | 0.0763 |
| 0.9014 | 8.0537 | 1200 | 0.2775 | 0.0803 |
| 0.9526 | 8.7248 | 1300 | 0.2767 | 0.0784 |
| 0.9114 | 9.3960 | 1400 | 0.2702 | 0.0771 |
| 0.9161 | 10.0671 | 1500 | 0.2740 | 0.0787 |
| 0.8623 | 10.7383 | 1600 | 0.2725 | 0.0776 |
| 0.878 | 11.4094 | 1700 | 0.2756 | 0.0781 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
ChenWu98/numina_qwen_2.5_7b_sft_teachers_no_reasoning_source_split_1_2048_0.5
|
ChenWu98
| 2025-09-23T09:59:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T09:31:47Z |
---
base_model: Qwen/Qwen2.5-7B
library_name: transformers
model_name: numina_qwen_2.5_7b_sft_teachers_no_reasoning_source_split_1_2048_0.5
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for numina_qwen_2.5_7b_sft_teachers_no_reasoning_source_split_1_2048_0.5
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_7b_sft_teachers_no_reasoning_source_split_1_2048_0.5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/ht3zz0gh)
This model was trained with SFT.
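The card states only that training used TRL's SFT. A minimal sketch of such a setup is shown below; the dataset is a placeholder and none of these values are the run's actual configuration.
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the actual training data is not documented here
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-7B",
    train_dataset=dataset,
    args=SFTConfig(output_dir="numina_qwen_2.5_7b_sft"),
)
trainer.train()
```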
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758621475
|
poolkiltzn
| 2025-09-23T09:59:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T09:59:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
codefuse-ai/F2LLM-0.6B
|
codefuse-ai
| 2025-09-23T09:51:26Z | 17 | 3 | null |
[
"safetensors",
"qwen3",
"en",
"dataset:codefuse-ai/F2LLM",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"region:us"
] | null | 2025-09-18T04:18:13Z |
---
license: apache-2.0
datasets:
- codefuse-ai/F2LLM
language:
- en
base_model:
- Qwen/Qwen3-0.6B
---
F2LLM (Foundation to Feature Large Language Models) are foundation models directly finetuned on 6 million high-quality query-document pairs (available in [codefuse-ai/F2LLM](https://huggingface.co/datasets/codefuse-ai/F2LLM)) covering a diverse range of retrieval, classification, and clustering data, curated solely from open-source datasets without any synthetic data. These models are trained with homogeneous macro batches in a single stage, without sophisticated multi-stage pipelines.
To evaluate F2LLMs on MTEB:
```python
import mteb
import logging
logging.basicConfig(level=logging.INFO)
task_names = ['AmazonCounterfactualClassification', 'ArXivHierarchicalClusteringP2P', 'ArXivHierarchicalClusteringS2S', 'ArguAna', 'AskUbuntuDupQuestions', 'BIOSSES', 'Banking77Classification', 'BiorxivClusteringP2P.v2', 'CQADupstackGamingRetrieval', 'CQADupstackUnixRetrieval', 'ClimateFEVERHardNegatives', 'FEVERHardNegatives', 'FiQA2018', 'HotpotQAHardNegatives', 'ImdbClassification', 'MTOPDomainClassification', 'MassiveIntentClassification', 'MassiveScenarioClassification', 'MedrxivClusteringP2P.v2', 'MedrxivClusteringS2S.v2', 'SCIDOCS', 'SICK-R', 'STS12', 'STS13', 'STS14', 'STS15', 'STS17', 'STS22.v2', 'STSBenchmark', 'SprintDuplicateQuestions', 'StackExchangeClustering.v2', 'StackExchangeClusteringP2P.v2', 'SummEvalSummarization.v2', 'TRECCOVID', 'Touche2020Retrieval.v3', 'ToxicConversationsClassification', 'TweetSentimentExtractionClassification', 'TwentyNewsgroupsClustering.v2', 'TwitterSemEval2015', 'TwitterURLCorpus', 'MindSmallReranking']
tasks = [
mteb.get_task(task_name, languages = ["eng"], eval_splits=["test"], exclusive_language_filter=True)
for task_name in task_names
]
model = mteb.get_model("codefuse-ai/F2LLM-0.6B", device="cuda:0")
evaluation = mteb.MTEB(tasks=tasks)
evaluation.run(model, encode_kwargs={"batch_size": 16})
```
|
AXERA-TECH/DeepSeek-R1-Distill-Qwen-1.5B
|
AXERA-TECH
| 2025-09-23T09:50:58Z | 37 | 5 |
transformers
|
[
"transformers",
"Context",
"DeepSeek-R1-Distill-Qwen-1.5B",
"text-generation",
"zh",
"en",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-21T11:10:35Z |
---
license: mit
language:
- zh
- en
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
pipeline_tag: text-generation
library_name: transformers
tags:
- Context
- DeepSeek-R1-Distill-Qwen-1.5B
---
# DeepSeek-R1-Distill-Qwen-1.5B
This version of DeepSeek-R1-Distill-Qwen-1.5B has been converted to run on the Axera NPU using **w8a16** and **w4a16** quantization.
Compatible with Pulsar2 version: 4.2
## Features
- Supports longer contexts (2k in this sample)
- Supports multi-turn contextual dialogue
- System prompt KV cache is supported
## Convert tools links:
For those who are interested in model conversion, you can try exporting the axmodel from the original repos: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B and https://huggingface.co/jakiAJK/DeepSeek-R1-Distill-Qwen-1.5B_GPTQ-int4
[Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)
[AXera NPU AXEngine LLM Runtime](https://github.com/AXERA-TECH/ax-llm/tree/ax-context)
[AXera NPU AXCL LLM Runtime](https://github.com/AXERA-TECH/ax-llm/tree/axcl-context)
### Convert script
The following shows how to convert DeepSeek-R1-Distill-Qwen-1.5B:
```
pulsar2 llm_build --input_path deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B \
--output_path deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B-ax650 \
--hidden_state_type bf16 --kv_cache_len 2047 --prefill_len 128 \
--last_kv_cache_len 128 \
--last_kv_cache_len 256 \
--last_kv_cache_len 384 \
--last_kv_cache_len 512 \
--last_kv_cache_len 640 \
--last_kv_cache_len 768 \
--last_kv_cache_len 896 \
--last_kv_cache_len 1024 \
--last_kv_cache_len 1152 \
--last_kv_cache_len 1280 \
--last_kv_cache_len 1408 \
--last_kv_cache_len 1536 \
--chip AX650 -c 1 --parallel 8
```
## Support Platform
- AX650
- AX650N DEMO Board
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX630C
- *TBD*
|Chips|w8a16|w4a16| DDR | Flash |
|--|--|--|--|--|
|AX650| 12 tokens/sec| 17 tokens/sec | 2.3GB | 2.3GB |
## How to use
Download all files from this repository to the device
```
root@ax650:/mnt/qtang/llm-test/deepseek-r1-1.5b-ctx# tree -L 1
.
|-- README.md
|-- config.json
|-- deepseek-r1-1.5b-ax650
|-- deepseek-r1-1.5b-int4-ax650
|-- deepseek-r1_tokenizer
|-- deepseek-r1_tokenizer.py
|-- figures
|-- main_ax650
|-- main_axcl_aarch64
|-- main_axcl_x86
|-- post_config.json
|-- run_deepseek-r1_1.5B_ax650.sh
|-- run_deepseek-r1_1.5B_int4_ax650.sh
|-- run_deepseek-r1_1.5b_axcl_aarch64.sh
|-- run_deepseek-r1_1.5b_axcl_x86.sh
|-- run_deepseek-r1_1.5b_int4_axcl_aarch64.sh
`-- run_deepseek-r1_1.5b_int4_axcl_x86.sh
4 directories, 16 files
```
#### Start the Tokenizer service
```
root@ax650:/mnt/qtang/llm-test/deepseek-r1-1.5b-ctx# python3 deepseek-r1_tokenizer_uid.py
Server running at http://127.0.0.1:12345
```
#### System prompt cache
- The system prompt can be preset via the `--system_prompt` option
- The system prompt can be cached as a KV cache in a specified folder via `--kvcache_path`, for quick loading on the next run
- This folder must be created manually before running, e.g. `mkdir kvcache`
```
root@ax650:/mnt/qtang/llm-test/deepseek-r1-1.5b-ctx# cat run_deepseek-r1_1.5b_ax650.sh
./main_ax650 \
--template_filename_axmodel "deepseek-r1-1.5b-ax650/qwen2_p128_l%d_together.axmodel" \
--axmodel_num 28 \
--url_tokenizer_model "http://127.0.0.1:12345" \
--filename_post_axmodel "deepseek-r1-1.5b-ax650/qwen2_post.axmodel" \
--filename_tokens_embed "deepseek-r1-1.5b-ax650/model.embed_tokens.weight.bfloat16.bin" \
--tokens_embed_num 151936 \
--tokens_embed_size 1536 \
--use_mmap_load_embed 1 \
--live_print 1
```
#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro) or AX650N DEMO Board
Open another terminal and run `run_deepseek-r1_1.5b_ax650.sh`
```
root@ax650:/mnt/qtang/llm-test/deepseek-r1-1.5b-ctx# ./run_deepseek-r1_1.5b_ax650.sh
[I][ Init][ 110]: LLM init start
[I][ Init][ 34]: connect http://127.0.0.1:12345 ok
[I][ Init][ 57]: uid: 96cbe293-ff24-4011-b3a1-b5ab95234990
bos_id: 151646, eos_id: 151643
3% | ██ | 1 / 31 [2.07s<64.23s, 0.48 count/s] tokenizer init ok[I][ Init][ 26]: LLaMaEmbedSelector use mmap
100% | ████████████████████████████████ | 31 / 31 [8.76s<8.76s, 3.54 count/s] init post axmodel ok,remain_cmm(8114 MB)[I][ Init][ 188]: max_token_len : 2047
[I][ Init][ 193]: kv_cache_size : 256, kv_cache_num: 2047
[I][ Init][ 201]: prefill_token_num : 128
[I][ Init][ 205]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 205]: grp: 2, prefill_max_token_num : 128
[I][ Init][ 205]: grp: 3, prefill_max_token_num : 256
[I][ Init][ 205]: grp: 4, prefill_max_token_num : 384
[I][ Init][ 205]: grp: 5, prefill_max_token_num : 512
[I][ Init][ 205]: grp: 6, prefill_max_token_num : 640
[I][ Init][ 205]: grp: 7, prefill_max_token_num : 768
[I][ Init][ 205]: grp: 8, prefill_max_token_num : 896
[I][ Init][ 205]: grp: 9, prefill_max_token_num : 1024
[I][ Init][ 209]: prefill_max_token_num : 1024
[I][ load_config][ 282]: load config:
{
"enable_repetition_penalty": false,
"enable_temperature": true,
"enable_top_k_sampling": true,
"enable_top_p_sampling": false,
"penalty_window": 20,
"repetition_penalty": 1.2,
"temperature": 0.9,
"top_k": 10,
"top_p": 0.8
}
[I][ Init][ 218]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
[I][ GenerateKVCachePrefill][ 275]: input token num : 11, prefill_split_num : 1 prefill_grpid : 2
[I][ GenerateKVCachePrefill][ 315]: input_num_token:11
[I][ main][ 228]: precompute_len: 11
[I][ main][ 229]: system_prompt:
prompt >> 你是谁
[I][ SetKVCache][ 529]: prefill_grpid:2 kv_cache_num:128 precompute_len:11 input_num_token:6
[I][ SetKVCache][ 532]: current prefill_max_token_num:896
[I][ Run][ 658]: input token num : 6, prefill_split_num : 1
[I][ Run][ 684]: input_num_token:6
[I][ Run][ 807]: ttft: 256.87 ms
<think>
您好!我是由中国的深度求索(DeepSeek)公司开发的智能助手DeepSeek-R1。如您有任何任何问题,我会尽我所能为您提供帮助。
</think>
您好!我是由中国的深度求索(DeepSeek)公司开发的智能助手DeepSeek-R1。如您有任何任何问题,我会尽我所能为您提供帮助。
[N][ Run][ 921]: hit eos,avg 17.68 token/s
[I][ GetKVCache][ 498]: precompute_len:91, remaining:933
```
|
codefuse-ai/F2LLM-1.7B
|
codefuse-ai
| 2025-09-23T09:50:49Z | 13 | 3 | null |
[
"safetensors",
"qwen3",
"en",
"dataset:codefuse-ai/F2LLM",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"region:us"
] | null | 2025-09-19T03:18:20Z |
---
license: apache-2.0
datasets:
- codefuse-ai/F2LLM
language:
- en
base_model:
- Qwen/Qwen3-1.7B
---
F2LLM (Foundation to Feature Large Language Models) are foundation models directly finetuned on 6 million high-quality query-document pairs (available in [codefuse-ai/F2LLM](https://huggingface.co/datasets/codefuse-ai/F2LLM)) covering a diverse range of retrieval, classification, and clustering data, curated solely from open-source datasets without any synthetic data. These models are trained with homogeneous macro batches in a single stage, without sophisticated multi-stage pipelines.
To evaluate F2LLMs on MTEB:
```python
import mteb
import logging
logging.basicConfig(level=logging.INFO)
task_names = ['AmazonCounterfactualClassification', 'ArXivHierarchicalClusteringP2P', 'ArXivHierarchicalClusteringS2S', 'ArguAna', 'AskUbuntuDupQuestions', 'BIOSSES', 'Banking77Classification', 'BiorxivClusteringP2P.v2', 'CQADupstackGamingRetrieval', 'CQADupstackUnixRetrieval', 'ClimateFEVERHardNegatives', 'FEVERHardNegatives', 'FiQA2018', 'HotpotQAHardNegatives', 'ImdbClassification', 'MTOPDomainClassification', 'MassiveIntentClassification', 'MassiveScenarioClassification', 'MedrxivClusteringP2P.v2', 'MedrxivClusteringS2S.v2', 'SCIDOCS', 'SICK-R', 'STS12', 'STS13', 'STS14', 'STS15', 'STS17', 'STS22.v2', 'STSBenchmark', 'SprintDuplicateQuestions', 'StackExchangeClustering.v2', 'StackExchangeClusteringP2P.v2', 'SummEvalSummarization.v2', 'TRECCOVID', 'Touche2020Retrieval.v3', 'ToxicConversationsClassification', 'TweetSentimentExtractionClassification', 'TwentyNewsgroupsClustering.v2', 'TwitterSemEval2015', 'TwitterURLCorpus', 'MindSmallReranking']
tasks = [
mteb.get_task(task_name, languages = ["eng"], eval_splits=["test"], exclusive_language_filter=True)
for task_name in task_names
]
model = mteb.get_model("codefuse-ai/F2LLM-1.7B", device="cuda:0")
evaluation = mteb.MTEB(tasks=tasks)
evaluation.run(model, encode_kwargs={"batch_size": 16})
```
|
codefuse-ai/F2LLM-4B
|
codefuse-ai
| 2025-09-23T09:50:15Z | 15 | 3 | null |
[
"safetensors",
"qwen3",
"en",
"dataset:codefuse-ai/F2LLM",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"license:apache-2.0",
"region:us"
] | null | 2025-09-18T12:05:13Z |
---
license: apache-2.0
datasets:
- codefuse-ai/F2LLM
language:
- en
base_model:
- Qwen/Qwen3-4B
---
F2LLM (Foundation to Feature Large Language Models) are foundation models directly finetuned on 6 million high-quality query-document pairs (available in [codefuse-ai/F2LLM](https://huggingface.co/datasets/codefuse-ai/F2LLM)) covering a diverse range of retrieval, classification, and clustering data, curated solely from open-source datasets without any synthetic data. These models are trained with homogeneous macro batches in a single stage, without sophisticated multi-stage pipelines.
To evaluate F2LLMs on MTEB:
```python
import mteb
import logging
logging.basicConfig(level=logging.INFO)
task_names = ['AmazonCounterfactualClassification', 'ArXivHierarchicalClusteringP2P', 'ArXivHierarchicalClusteringS2S', 'ArguAna', 'AskUbuntuDupQuestions', 'BIOSSES', 'Banking77Classification', 'BiorxivClusteringP2P.v2', 'CQADupstackGamingRetrieval', 'CQADupstackUnixRetrieval', 'ClimateFEVERHardNegatives', 'FEVERHardNegatives', 'FiQA2018', 'HotpotQAHardNegatives', 'ImdbClassification', 'MTOPDomainClassification', 'MassiveIntentClassification', 'MassiveScenarioClassification', 'MedrxivClusteringP2P.v2', 'MedrxivClusteringS2S.v2', 'SCIDOCS', 'SICK-R', 'STS12', 'STS13', 'STS14', 'STS15', 'STS17', 'STS22.v2', 'STSBenchmark', 'SprintDuplicateQuestions', 'StackExchangeClustering.v2', 'StackExchangeClusteringP2P.v2', 'SummEvalSummarization.v2', 'TRECCOVID', 'Touche2020Retrieval.v3', 'ToxicConversationsClassification', 'TweetSentimentExtractionClassification', 'TwentyNewsgroupsClustering.v2', 'TwitterSemEval2015', 'TwitterURLCorpus', 'MindSmallReranking']
tasks = [
mteb.get_task(task_name, languages = ["eng"], eval_splits=["test"], exclusive_language_filter=True)
for task_name in task_names
]
model = mteb.get_model("codefuse-ai/F2LLM-4B", device="cuda:0")
evaluation = mteb.MTEB(tasks=tasks)
evaluation.run(model, encode_kwargs={"batch_size": 16})
```
|
Best000/eg_a31
|
Best000
| 2025-09-23T09:42:37Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-23T09:39:46Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Jiten1024/gemma-3-4b-medquad-200
|
Jiten1024
| 2025-09-23T09:41:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-15T17:27:19Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Jiten1024
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
oshBk/blockassist-bc-pesty_graceful_grouse_1758620431
|
oshBk
| 2025-09-23T09:40:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty graceful grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T09:40:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty graceful grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lengocquangLAB/fine-tuned-skill-jd-embed
|
lengocquangLAB
| 2025-09-23T09:37:32Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:21927",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-23T09:37:10Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:21927
- loss:CosineSimilarityLoss
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
widget:
- source_sentence: nextjs
sentences:
- Familiarity with project management software (e.g., JIRA, Trello).
- ', contributing to group projects with an understanding of Git workflows'
- Ưu tiên ứng viên có kinh nghiệm với TypeScript, Next.js hoặc các công cụ tối ưu
hóa hiệu suất Front-end.
- source_sentence: System analysis and design
sentences:
- Experience with asynchronous programming, multithreaded programming.
- A strong sense of responsibility and resilience under pressure.
- Kỹ năng giải quyết vấn đề và xử lý tình huống nhanh chóng.
- source_sentence: Có thể phối hợp với đồng nghiệp để xử lý các nhiệm vụ
sentences:
- Có kiến thức tốt về React.js, JavaScript/TypeScript, HTML/CSS.
- An ability to prepare and interpret flowcharts, schedules and step-by-step action
plans
- Solid Experience with Java and Spring Boot
- source_sentence: Giao tiếp và tạo lập mối quan hệ
sentences:
- Có khả năng sáng tạo, khả năng đánh giá, phân tích tốt.
- Có kinh nghiệm liên quan đến Machine Learning, Deep Learning, NLP, LLM,....
- Làm việc nhóm hiệu quả, giải quyết vấn đề linh hoạt
- source_sentence: Proficient in utilizing various Microsoft Office tools
sentences:
- ', contributing to group projects with an understanding of Git workflows'
- Proficient in English (listening, speaking, reading, and writing).
- Hiểu biết về CI/CD là điểm cộng, Firebase (FCM / Cloud Storage / Realtime Database).
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 86741b4e3f5cb7765a600d3a3d55a0f6a6cb443d -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("lengocquangLAB/fine-tuned-skill-jd-embed")
# Run inference
sentences = [
'Proficient in utilizing various Microsoft Office tools',
'Proficient in English (listening, speaking, reading, and writing).',
', contributing to group projects with an understanding of Git workflows',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, -0.0018, -0.0431],
# [-0.0018, 1.0000, 0.0958],
# [-0.0431, 0.0958, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 21,927 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 3 tokens</li><li>mean: 15.88 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 19.11 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.58</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:--------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>Good presentation skills</code> | <code>A strong sense of responsibility and resilience under pressure.</code> | <code>0.0</code> |
| <code>reactjs</code> | <code>Biết cách quản lý state với React Hooks, Context API.</code> | <code>1.0</code> |
| <code>Thành thạo Office</code> | <code>Thành thạo các công cụ văn phòng (Google Sheet / Excel), ưu tiên biết sử dụng các công cụ hỗ trợ tăng năng suất như Chat GPT</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
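To make the dataset/loss pairing concrete, here is a minimal fine-tuning sketch in the style of the Sentence Transformers documentation. The two example pairs come from the samples above; everything else is illustrative rather than the exact training script.
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# (sentence_0, sentence_1, label) triples as in the table above
train_dataset = Dataset.from_dict({
    "sentence_0": ["Good presentation skills", "reactjs"],
    "sentence_1": [
        "A strong sense of responsibility and resilience under pressure.",
        "Biết cách quản lý state với React Hooks, Context API.",
    ],
    "label": [0.0, 1.0],
})

# CosineSimilarityLoss regresses the cosine similarity onto the label via MSE
loss = CosineSimilarityLoss(model)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```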
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 1.4577 | 500 | 0.1051 |
| 2.9155 | 1000 | 0.0688 |
| 4.3732 | 1500 | 0.0587 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 5.1.0
- Transformers: 4.56.1
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
iphone16tieuchuan128gb/iphone-16-tieu-chuan-24hstore-gia-re
|
iphone16tieuchuan128gb
| 2025-09-23T09:34:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T09:25:45Z |
<h2>How much does the iPhone 16 128GB cost in Ho Chi Minh City?</h2>
<p>When it comes to Apple's premium mid-range smartphone, the iPhone 16 128GB is a favorite of many. With strong performance from the latest A18 Bionic chip, an elegant design, and a sharp display, the iPhone 16 128GB comfortably covers work, entertainment, and content creation. The question, then, is <a href="https://24hstore.vn/iphone-16/iphone-16-128gb-p10934"><strong>how much does the iPhone 16 128GB cost in Ho Chi Minh City</strong>?</a> The price depends on several factors, including the variant, current promotions, where you buy, and the services included. Choosing a reputable seller ensures you receive a genuine product with full warranty and after-sales benefits.</p>
<h2>Refined design, durable materials</h2>
<p>The iPhone 16 128GB has a unibody construction with an ultra-light aerospace-grade aluminum frame, sturdy yet comfortable to hold. The front is Ceramic Shield glass that resists scratches effectively, while the back is color-infused glass in fashionable options such as black, pink, teal, and ultramarine, giving the phone a personal touch.</p>
<p style="text-align: center;"><img src="https://24hstore.vn/images/products/2024/09/10/large/iphone-16-5_1725931529_1.jpg" alt="" /></p>
<p>The device weighs just 170 grams and measures 147.6 x 71.6 x 7.8 mm, fitting neatly in the hand for both men and women.</p>
<h2>Sharp 6.1-inch Super Retina XDR OLED display</h2>
<p>The iPhone 16's 6.1-inch display has a resolution of 2556 x 1179 pixels and uses a premium OLED panel for vivid images, true-to-life colors, and absolute contrast. True Tone and HDR10 keep images brilliant in any lighting, while the new Dynamic Island provides a more engaging, expanded interaction surface for notifications and multitasking.</p>
<p>The screen is also coated against oil and fingerprints, keeping the device clean and glossy.</p>
<h2>Powerful performance with the A18 Bionic chip</h2>
<p>Under the hood, the iPhone 16 128GB is equipped with the A18 Bionic chip built on a 5nm process, raising CPU and GPU performance while saving power. With a 6-core CPU (2 performance cores and 4 efficiency cores) and a 5-core GPU, it handles demanding tasks smoothly, from graphics-heavy games to high-quality video editing and heavy multitasking.</p>
<p style="text-align: center;"><img src="https://24hstore.vn/images/products/2024/09/10/large/iphone-16-7_1725931529_1.jpg" alt="" /></p>
<p>The A18 also integrates a 16-core Neural Engine that accelerates AI tasks such as Face ID recognition, smart photography, and other machine learning applications.</p>
<h2>Upgraded camera with a sharp 48MP sensor</h2>
<p>The iPhone 16 128GB's camera impresses with an all-new 48MP Fusion main sensor and a 12MP ultra-wide camera, with support for phase-detection autofocus, optical image stabilization, and macro shooting. The module not only takes sharp photos but also records 4K HDR video, slow motion, and professional cinematic effects.</p>
<p style="text-align: center;"><img src="https://24hstore.vn/images/products/2024/09/10/large/iphone-16-6_1725931529_1.jpg" alt="" /></p>
<p>The 12MP selfie camera has also been improved with a larger aperture and fast autofocus, delivering high-quality portraits and video.</p>
<h2>Authorized phone stores in Ho Chi Minh City: where to buy with confidence</h2>
<p>Buying from <strong><a href="https://24hstore.vn/dien-thoai">authorized phone stores in Ho Chi Minh City</a></strong> gives buyers peace of mind about product origin, avoiding counterfeits, gray-market units, and devices without full warranty coverage. These stores sell factory-sealed products with Apple-standard warranties, professional after-sales service, and attentive advice.</p>
<p>In addition, reputable stores offer installment plans at attractive rates, fast delivery, and plenty of promotions. These are big pluses that make buying a genuine iPhone 16 128GB worry-free.</p>
|
Khoa/meat-deli-bert-multi-label-0925
|
Khoa
| 2025-09-23T09:31:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T09:21:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
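Pending the authors' snippet, a hedged starting point for a multi-label classifier like this one (the sigmoid thresholding and example text are assumptions inferred from the model name, not documented behavior) is:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "Khoa/meat-deli-bert-multi-label-0925"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Sample product review text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label: apply a sigmoid per class and keep labels above a threshold
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```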
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jr12lm12/mistral-7b-climate-expert
|
Jr12lm12
| 2025-09-23T09:31:25Z | 39 | 0 |
peft
|
[
"peft",
"pytorch",
"safetensors",
"gguf",
"mistral",
"text-generation",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"conversational",
"arxiv:1910.09700",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T10:42:44Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/mistral-7b-instruct-v0.3-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
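Until the official snippet lands, a plausible way to load this adapter, inferred from the PEFT library and base model listed above and untested against this checkpoint, is:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "unsloth/mistral-7b-instruct-v0.3-bnb-4bit"
adapter = "Jr12lm12/mistral-7b-climate-expert"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA adapter

prompt = "[INST] Summarize the main drivers of sea level rise. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```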
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
MercuryNex/unrealistic15
|
MercuryNex
| 2025-09-23T09:30:33Z | 0 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-09-23T09:30:33Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
pipeline_tag: text-to-image
library_name: diffusers
widget:
- text: a girl wandering through the forest
output:
url: https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/a9b72b46-676e-4eff-8b9b-35df23075306/width=1800/75666314.jpeg
---
# UnrealWorld - Ultra Realistic Model - v3.0 API Inference
<Gallery />
## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "unrealworldultrarealisticmodel-v30"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/unrealworldultrarealisticmodel-v30)
Model link: [View model](https://modelslab.com/models/unrealworldultrarealisticmodel-v30)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

# Text-to-image generation endpoint
url = "https://modelslab.com/api/v6/images/text2img"

# Request body; replace "your_api_key" with your ModelsLab API key
payload = json.dumps({
  "key": "your_api_key",
  "model_id": "unrealworldultrarealisticmodel-v30",
  "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "",
  "lora": "",
  "webhook": None,
  "track_id": None
})

# The API expects a JSON-encoded body
headers = {
  'Content-Type': 'application/json'
}

response = requests.post(url, headers=headers, data=payload)
print(response.text)
```
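The response body is JSON. A minimal sketch for inspecting it follows; the exact field names (`status`, `output`) are assumptions and should be verified against the ModelsLab docs:
```python
# Sketch: parse the JSON response; field names are assumptions — check the docs.
data = response.json()
print(data.get("status"))  # e.g. "success" or "processing"
print(data.get("output"))  # typically a list of generated image URLs
```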
> Use this coupon code to get 25% off **DMGG0RBN**
|
Pravallika6/detr-finetuned-logo-detection
|
Pravallika6
| 2025-09-23T09:24:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2025-09-23T09:24:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mariiazhiv/CyTHIA-Mixtral-8x7B
|
mariiazhiv
| 2025-09-23T09:24:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mixtral",
"generated_from_trainer",
"dataset:mariiazhiv/cybersecurity_qa",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T08:21:17Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- generated_from_trainer
datasets:
- mariiazhiv/cybersecurity_qa
- mariiazhiv/cybersecurity_qa
model-index:
- name: outputs/mymodel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0.dev0`
```yaml
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
datasets:
- path: mariiazhiv/cybersecurity_qa
type: alpaca
split: train
- path: mariiazhiv/cybersecurity_qa
type: alpaca
split: validation
dataset_prepared_path: last_run_prepared
output_dir: ./outputs/mymodel
sequence_len: 1024
adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
gradient_accumulation_steps: 8
micro_batch_size: 4
num_epochs: 3
optimizer: adamw_bnb_8bit
learning_rate: 0.00002
load_in_8bit: false
train_on_inputs: false
bf16: true
fp16: false
gradient_checkpointing: true
eval_steps: 50
save_steps: 50
logging_steps: 10
special_tokens:
pad_token: "<|pad|>"
```
</details><br>
# outputs/mymodel
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the train and validation splits of the mariiazhiv/cybersecurity_qa dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
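Pending details from the authors, here is a minimal sketch (untested) for loading this LoRA adapter on top of the base model with PEFT and Transformers; the prompt is purely illustrative:
```python
# Minimal sketch: apply the CyTHIA LoRA adapter to the Mixtral base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
adapter_id = "mariiazhiv/CyTHIA-Mixtral-8x7B"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Explain what a SQL injection attack is."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```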
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- num_epochs: 3.0
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
stewy33/edited_atomic_llama3_70b_1fact_rounds_egregious_lightning_shape-run_ab6b
|
stewy33
| 2025-09-23T09:22:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T09:07:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.2
|
csikasote
| 2025-09-23T09:15:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T08:28:58Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-42-DAT-0.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m25f100-42-DAT-0.2
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2775
- Cer: 0.0791
## Model description
More information needed
## Intended uses & limitations
More information needed
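Pending details from the authors, a minimal sketch (untested) using the Transformers ASR pipeline; the audio path is a placeholder and the input is assumed to be 16 kHz mono:
```python
# Minimal sketch: transcribe an audio file with this fine-tuned MMS checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.2",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```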
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 8.2638 | 0.6711 | 100 | 2.9446 | 1.0 |
| 2.6731 | 1.3423 | 200 | 0.6636 | 0.1515 |
| 1.4338 | 2.0134 | 300 | 0.3633 | 0.1071 |
| 1.2522 | 2.6846 | 400 | 0.3237 | 0.0950 |
| 1.1277 | 3.3557 | 500 | 0.2949 | 0.0843 |
| 1.0716 | 4.0268 | 600 | 0.2917 | 0.0811 |
| 1.0444 | 4.6980 | 700 | 0.2813 | 0.0790 |
| 1.0302 | 5.3691 | 800 | 0.2775 | 0.0791 |
| 0.9751 | 6.0403 | 900 | 0.2787 | 0.0800 |
| 1.0124 | 6.7114 | 1000 | 0.2818 | 0.0806 |
| 0.9775 | 7.3826 | 1100 | 0.2732 | 0.0767 |
| 0.8859 | 8.0537 | 1200 | 0.2750 | 0.0802 |
| 0.9369 | 8.7248 | 1300 | 0.2715 | 0.0776 |
| 0.8919 | 9.3960 | 1400 | 0.2687 | 0.0768 |
| 0.8997 | 10.0671 | 1500 | 0.2701 | 0.0773 |
| 0.8429 | 10.7383 | 1600 | 0.2696 | 0.0765 |
| 0.856 | 11.4094 | 1700 | 0.2728 | 0.0776 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.05
|
csikasote
| 2025-09-23T09:14:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"bemgen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T08:27:58Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- bemgen
- mms
- generated_from_trainer
model-index:
- name: mms-1b-all-bemgen-combined-m25f100-42-DAT-0.05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-bemgen-combined-m25f100-42-DAT-0.05
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the BEMGEN - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2761
- Cer: 0.0786
## Model description
More information needed
## Intended uses & limitations
More information needed
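Pending details from the authors, a minimal sketch (untested) using the lower-level CTC API; the audio path is a placeholder and resampling to 16 kHz is assumed:
```python
# Minimal sketch: greedy CTC decoding with this fine-tuned MMS checkpoint.
import torch
import librosa
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "csikasote/mms-1b-all-bemgen-combined-m25f100-42-DAT-0.05"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16_000)  # placeholder path
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```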
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 8.2046 | 0.6711 | 100 | 2.9442 | 1.0 |
| 2.6453 | 1.3423 | 200 | 0.6556 | 0.1504 |
| 1.4032 | 2.0134 | 300 | 0.3662 | 0.1084 |
| 1.2316 | 2.6846 | 400 | 0.3131 | 0.0907 |
| 1.1086 | 3.3557 | 500 | 0.2990 | 0.0853 |
| 1.056 | 4.0268 | 600 | 0.2936 | 0.0824 |
| 1.0297 | 4.6980 | 700 | 0.2851 | 0.0807 |
| 1.0128 | 5.3691 | 800 | 0.2761 | 0.0786 |
| 0.9595 | 6.0403 | 900 | 0.2774 | 0.0797 |
| 0.9897 | 6.7114 | 1000 | 0.2816 | 0.0798 |
| 0.9587 | 7.3826 | 1100 | 0.2729 | 0.0765 |
| 0.8675 | 8.0537 | 1200 | 0.2745 | 0.0794 |
| 0.915 | 8.7248 | 1300 | 0.2729 | 0.0778 |
| 0.872 | 9.3960 | 1400 | 0.2677 | 0.0768 |
| 0.8813 | 10.0671 | 1500 | 0.2693 | 0.0770 |
| 0.8266 | 10.7383 | 1600 | 0.2684 | 0.0772 |
| 0.837 | 11.4094 | 1700 | 0.2718 | 0.0775 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
Market5/Assertive_Cowgirl_LOW
|
Market5
| 2025-09-23T09:11:09Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Wan-AI/Wan2.1-I2V-14B-720P",
"base_model:adapter:Wan-AI/Wan2.1-I2V-14B-720P",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-23T09:00:08Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/20250917-144850.jpg
text: '-'
base_model: Wan-AI/Wan2.1-I2V-14B-720P
instance_prompt: null
license: other
license_name: faipl-1.0-sd
license_link: LICENSE
---
# Assertive Cowgirl Low
<Gallery />
## Download model
[Download](/Market5/Assertive_Cowgirl_LOW/tree/main) the model files from the Files & versions tab.
|
RMCian/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lazy_energetic_badger
|
RMCian
| 2025-09-23T09:09:03Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am lazy_energetic_badger",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T10:37:42Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am lazy_energetic_badger
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fpadovani/cds_replace_word_stanza_verb_42
|
fpadovani
| 2025-09-23T09:03:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T08:34:56Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: cds_replace_word_stanza_verb_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cds_replace_word_stanza_verb_42
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3406
## Model description
More information needed
## Intended uses & limitations
More information needed
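Pending details from the authors, a minimal sketch (untested) using the Transformers text-generation pipeline, with an illustrative prompt:
```python
# Minimal sketch: sample a continuation from this GPT-2-style checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation", model="fpadovani/cds_replace_word_stanza_verb_42"
)
print(generator("the little girl", max_new_tokens=30)[0]["generated_text"])
```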
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 499 | 3.5895 |
| 4.2144 | 2.0 | 998 | 3.4453 |
| 3.2267 | 3.0 | 1497 | 3.3865 |
| 3.0913 | 4.0 | 1996 | 3.3537 |
| 3.0109 | 5.0 | 2495 | 3.3406 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.0
|
Chibueze-Kingsley/Auditor_General_Bot
|
Chibueze-Kingsley
| 2025-09-23T09:03:03Z | 0 | 0 | null |
[
"finance",
"en",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:mit",
"region:us"
] | null | 2025-09-18T13:07:29Z |
---
license: mit
language:
- en
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
tags:
- finance
---
|
xmli/DXLMR-L12M
|
xmli
| 2025-09-23T09:01:15Z | 0 | 0 | null |
[
"safetensors",
"xlm-roberta",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T08:58:35Z |
---
license: apache-2.0
---
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758617759
|
poolkiltzn
| 2025-09-23T08:57:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T08:57:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ibm-granite/granite-docling-258M
|
ibm-granite
| 2025-09-23T08:52:16Z | 25,714 | 556 |
transformers
|
[
"transformers",
"safetensors",
"idefics3",
"image-to-text",
"text-generation",
"documents",
"code",
"formula",
"chart",
"ocr",
"layout",
"table",
"document-parse",
"docling",
"granite",
"extraction",
"math",
"image-text-to-text",
"conversational",
"en",
"dataset:ds4sd/SynthCodeNet",
"dataset:ds4sd/SynthFormulaNet",
"dataset:ds4sd/SynthChartNet",
"dataset:HuggingFaceM4/DoclingMatix",
"arxiv:2501.17887",
"arxiv:2503.11576",
"arxiv:2305.03393",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-05-19T19:53:11Z |
---
license: apache-2.0
datasets:
- ds4sd/SynthCodeNet
- ds4sd/SynthFormulaNet
- ds4sd/SynthChartNet
- HuggingFaceM4/DoclingMatix
tags:
- text-generation
- documents
- code
- formula
- chart
- ocr
- layout
- table
- document-parse
- docling
- granite
- extraction
- math
language:
- en
pipeline_tag: image-text-to-text
library_name: transformers
---
# granite-docling-258m
<div style="display: flex; align-items: center;">
<img src="https://huggingface.co/ibm-granite/granite-docling-258M/resolve/main/granite_docling.png" alt="Granite Docling Logo" style="width: 200px; height: auto; margin-right: 20px;">
<div>
<p>Granite Docling is a multimodal Image-Text-to-Text model engineered for efficient document conversion. It preserves the core features of Docling while maintaining seamless integration with <a href="https://docling-project.github.io/docling">DoclingDocuments</a> to ensure full compatibility.</p>
</div>
</div>
**Model Summary**:
Granite Docling 258M builds upon the Idefics3 architecture, but introduces two key modifications: it replaces the vision encoder with siglip2-base-patch16-512 and substitutes the language model with a Granite 165M LLM. Try out our [Granite-Docling-258M](https://huggingface.co/spaces/ibm-granite/granite-docling-258m-demo) demo today.
- **Developed by**: IBM Research
- **Model type**: Multi-modal model (image+text-to-text)
- **Language(s)**: English (NLP)
- **License**: [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Release Date**: September 17, 2025
Granite-docling-258M is fully integrated into the Docling pipelines, carrying over existing [features](https://huggingface.co/ds4sd/SmolDocling-256M-preview) while introducing a number of powerful new features, including:
- 🔢 Enhanced Equation Recognition: More accurate detection and formatting of mathematical formulas
- 🧩 Flexible Inference Modes: Choose between full-page inference and bbox-guided region inference
- 🧘 Improved Stability: Tends to avoid infinite loops more effectively
- 🧮 Enhanced Inline Equations: Better inline math recognition
- 🧾 Document Element QA: Answer questions about a document’s structure such as the presence and order of document elements
- 🌍 Japanese, Arabic and Chinese support (_experimental_)
## Getting started
The easiest way to use this model is through the [🐥Docling](https://github.com/docling-project/docling) library. It will automatically download this model and convert documents to various formats for you.
Install the latest version of `docling` through pip, then use the following CLI command:
```sh
# Convert to HTML and Markdown:
docling --to html --to md --pipeline vlm --vlm-model granite_docling "https://arxiv.org/pdf/2501.17887" # accepts files, urls or directories
# Convert to HTML including layout visualization:
docling --to html_split_page --show-layout --pipeline vlm --vlm-model granite_docling "https://arxiv.org/pdf/2501.17887"
```
<p align="center">
<img src="https://huggingface.co/ibm-granite/granite-docling-258M/resolve/main/assets/granite_docling_split_page.png" alt="GraniteDocling result in split page view" width="900"/>
</p>
<details>
<summary>You can also set this model up within the Docling SDK:</summary>
```python
from docling.datamodel import vlm_model_specs
from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import (
VlmPipelineOptions,
)
from docling.document_converter import DocumentConverter, PdfFormatOption
from docling.pipeline.vlm_pipeline import VlmPipeline
source = "https://arxiv.org/pdf/2501.17887"
###### USING SIMPLE DEFAULT VALUES
# - GraniteDocling model
# - Using the transformers framework
converter = DocumentConverter(
format_options={
InputFormat.PDF: PdfFormatOption(
pipeline_cls=VlmPipeline,
),
}
)
doc = converter.convert(source=source).document
print(doc.export_to_markdown())
###### USING MACOS MPS ACCELERATOR
# For more options see the compare_vlm_models.py example.
pipeline_options = VlmPipelineOptions(
vlm_options=vlm_model_specs.GRANITEDOCLING_MLX,
)
converter = DocumentConverter(
format_options={
InputFormat.PDF: PdfFormatOption(
pipeline_cls=VlmPipeline,
pipeline_options=pipeline_options,
),
}
)
doc = converter.convert(source=source).document
print(doc.export_to_markdown())
```
</details>
Alternatively, you can use bare **transformers**, **vllm**, **onnx** or **mlx-vlm** to perform inference, and [docling-core](https://github.com/docling-project/docling-core) APIs to convert results to a variety of output formats (md, html, etc.):
<details>
<summary>📄 Single page image inference using plain 🤗 transformers 🤖</summary>
```python
# Prerequisites:
# pip install torch
# pip install docling_core
# pip install transformers
import torch
from docling_core.types.doc import DoclingDocument
from docling_core.types.doc.document import DocTagsDocument
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image
from pathlib import Path
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
# Load images
image = load_image("https://huggingface.co/ibm-granite/granite-docling-258M/resolve/main/assets/new_arxiv.png")
# Initialize processor and model
processor = AutoProcessor.from_pretrained("ibm-granite/granite-docling-258M")
model = AutoModelForVision2Seq.from_pretrained(
"ibm-granite/granite-docling-258M",
torch_dtype=torch.bfloat16,
_attn_implementation="flash_attention_2" if DEVICE == "cuda" else "sdpa",
).to(DEVICE)
# Create input messages
messages = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "Convert this page to docling."}
]
},
]
# Prepare inputs
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")
inputs = inputs.to(DEVICE)
# Generate outputs
generated_ids = model.generate(**inputs, max_new_tokens=8192)
prompt_length = inputs.input_ids.shape[1]
trimmed_generated_ids = generated_ids[:, prompt_length:]
doctags = processor.batch_decode(
trimmed_generated_ids,
skip_special_tokens=False,
)[0].lstrip()
print(f"DocTags: \n{doctags}\n")
# Populate document
doctags_doc = DocTagsDocument.from_doctags_and_image_pairs([doctags], [image])
# create a docling document
doc = DoclingDocument.load_from_doctags(doctags_doc, document_name="Document")
print(f"Markdown:\n{doc.export_to_markdown()}\n")
## export as any format.
# Path("out/").mkdir(parents=True, exist_ok=True)
# HTML:
# output_path_html = Path("out/") / "example.html"
# doc.save_as_html(output_path_html)
# Markdown:
# output_path_md = Path("out/") / "example.md"
# doc.save_as_markdown(output_path_md)
```
</details>
<details>
<summary> 🚀 Fast Batch Inference with VLLM</summary>
```python
# Prerequisites:
# pip install vllm
# pip install docling_core
# place page images you want to convert into "img/" dir
import time
import os
from vllm import LLM, SamplingParams
from transformers import AutoProcessor
from PIL import Image
from docling_core.types.doc import DoclingDocument
from docling_core.types.doc.document import DocTagsDocument
from pathlib import Path
# Configuration
MODEL_PATH = "ibm-granite/granite-docling-258M"
IMAGE_DIR = "img/" # Place your page images here
OUTPUT_DIR = "out/"
PROMPT_TEXT = "Convert this page to docling."
messages = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": PROMPT_TEXT},
],
},
]
# Ensure output directory exists
os.makedirs(OUTPUT_DIR, exist_ok=True)
# Initialize LLM
llm = LLM(model=MODEL_PATH, revision="untied", limit_mm_per_prompt={"image": 1})
processor = AutoProcessor.from_pretrained(MODEL_PATH)
sampling_params = SamplingParams(
temperature=0.0,
max_tokens=8192,
skip_special_tokens=False,
)
# Load and prepare all images and prompts up front
batched_inputs = []
image_names = []
for img_file in sorted(os.listdir(IMAGE_DIR)):
if img_file.lower().endswith((".png", ".jpg", ".jpeg")):
img_path = os.path.join(IMAGE_DIR, img_file)
with Image.open(img_path) as im:
image = im.convert("RGB")
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
batched_inputs.append({"prompt": prompt, "multi_modal_data": {"image": image}})
image_names.append(os.path.splitext(img_file)[0])
# Run batch inference
start_time = time.time()
outputs = llm.generate(batched_inputs, sampling_params=sampling_params)
# Postprocess all results
for img_fn, output, input_data in zip(image_names, outputs, batched_inputs):
doctags = output.outputs[0].text
output_path_dt = Path(OUTPUT_DIR) / f"{img_fn}.dt"
output_path_md = Path(OUTPUT_DIR) / f"{img_fn}.md"
with open(output_path_dt, "w", encoding="utf-8") as f:
f.write(doctags)
# Convert to DoclingDocument and save markdown
doctags_doc = DocTagsDocument.from_doctags_and_image_pairs([doctags], [input_data["multi_modal_data"]["image"]])
doc = DoclingDocument.load_from_doctags(doctags_doc, document_name="Document")
doc.save_as_markdown(output_path_md)
print(f"Total time: {time.time() - start_time:.2f} sec")
```
</details>
💻 Local inference on Apple Silicon with MLX: [see here](https://huggingface.co/ibm-granite/granite-docling-258M-mlx)
ℹ️ If you run into trouble running granite-docling with the code above, check the troubleshooting section at the bottom ⬇️.
## Intended Use
Granite-Docling is designed to complement the Docling library, not replace it. It integrates as a component within the larger Docling library, consolidating the functions of multiple single-purpose models into a single, compact VLM.
However, Granite-Docling is **not** intended for general image understanding. For tasks focused solely on image-text input, we recommend using [Granite Vision models](https://huggingface.co/collections/ibm-granite/granite-vision-models-67b3bd4ff90c915ba4cd2800), which are purpose-built and optimized for image-text processing.
## Evaluations
A comprehensive discussion of evaluation methods and findings has already been presented in our previous publication [[citation](https://arxiv.org/pdf/2503.11576)]. As this model is an update, we refer readers to that work for additional details.
The evaluation can be performed using the [docling-eval](https://github.com/docling-project/docling-eval) framework for the document-related tasks, and [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) for MMStar and OCRBench.
<table>
<thead>
<tr><th colspan="5"><b>Layout</b></th></tr>
<tr>
<th></th>
<th>MAP ↑</th>
<th>F1 ↑</th>
<th>Precision ↑</th>
<th>Recall ↑</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>smoldocling-256m-preview</b></td>
<td>0.23</td><td>0.85</td><td>0.9</td><td>0.84</td>
</tr>
<tr>
<td><b>granite-docling-258m</b></td>
<td><b>0.27</b></td><td><b>0.86</b></td><td><b>0.92</b></td><td><b>0.88</b></td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr><th colspan="7"><b>Full Page OCR</b></th></tr>
<tr>
<th></th>
<th>Edit-distance ↓</th>
<th>F1 ↑</th>
<th>Precision ↑</th>
<th>Recall ↑</th>
<th>BLEU ↑</th>
<th>Meteor ↑</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>smoldocling-256m-preview</b></td>
<td>0.48</td><td>0.80</td><td>0.89</td>
<td>0.79</td><td>0.58</td><td>0.67</td>
</tr>
<tr>
<td><b>granite-docling-258m</b></td>
<td><b>0.45</b></td><td><b>0.84</b></td><td><b>0.91</b></td>
<td><b>0.83</b></td><td><b>0.65</b></td><td><b>0.72</b></td>
</tr>
</tbody>
<thead>
<tr><th colspan="7"><b>Code Recognition</b></th></tr>
<tr>
<th></th>
<th>Edit-distance ↓</th>
<th>F1 ↑</th>
<th>Precision ↑</th>
<th>Recall ↑</th>
<th>BLEU ↑</th>
<th>Meteor ↑</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>smoldocling-256m-preview</b></td>
<td>0.114</td><td>0.915</td><td>0.94</td><td>0.909</td><td>0.875</td><td>0.889</td>
</tr>
<tr>
<td><b>granite-docling-258m</b></td>
<td><b>0.013</b></td><td><b>0.988</b></td><td><b>0.99</b></td><td><b>0.988</b></td>
<td><b>0.983</b></td><td><b>0.986</b></td>
</tr>
</tbody>
<thead>
<tr><th colspan="7"><b>Equation Recognition</b></th></tr>
<tr>
<th></th>
<th>Edit-distance ↓</th>
<th>F1 ↑</th>
<th>Precision ↑</th>
<th>Recall ↑</th>
<th>BLEU ↑</th>
<th>Meteor ↑</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>smoldocling-256m-preview</b></td>
<td>0.119</td><td>0.947</td><td>0.959</td><td>0.941</td><td>0.824</td><td>0.878</td>
</tr>
<tr>
<td><b>granite-docling-258m</b></td>
<td><b>0.073</b></td><td><b>0.968</b></td><td><b>0.968</b></td><td><b>0.969</b></td>
<td><b>0.893</b></td><td><b>0.927</b></td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr><th colspan="3"><b>Table Recognition (FinTabNet 150dpi)</b></th></tr>
<tr>
<th></th>
<th>TEDS (structure) ↑</th>
<th>TEDS (w/content) ↑</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>smoldocling-256m-preview</b></td>
<td>0.82</td><td>0.76</td>
</tr>
<tr>
<td><b>granite-docling-258m</b></td>
<td><b>0.97</b></td><td><b>0.96</b></td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr><th colspan="3"><b>Other Benchmarks</b></th></tr>
<tr>
<th></th>
<th>MMStar ↑</th>
<th>OCRBench ↑</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>smoldocling-256m-preview</b></td>
<td>0.17</td><td>338</td>
</tr>
<tr>
<td><b>granite-docling-258m</b></td>
<td><b>0.30</b></td><td><b>500</b></td>
</tr>
</tbody>
</table>
## Supported Instructions
<table>
<tr>
<th>Description</th>
<th>Instruction</th>
<th>Short Instruction</th>
</tr>
<tr>
<td><b>Full conversion</b></td>
<td>Convert this page to docling.</td>
<td>-</td>
</tr>
<tr>
<td><b>Chart</b></td>
<td>Convert chart to table.</td>
<td><code><chart></code></td>
</tr>
<tr>
<td><b>Formula</b></td>
<td>Convert formula to LaTeX.</td>
<td><code><formula></code></td>
</tr>
<tr>
<td><b>Code</b></td>
<td>Convert code to text.</td>
<td><code><code></code></td>
</tr>
<tr>
<td><b>Table</b></td>
<td>Convert table to OTSL. (<a href="https://arxiv.org/pdf/2305.03393">Lysak et al., 2023</a>)</td>
<td><code><otsl></code></td>
</tr>
<tr>
<td rowspan="4"><b>Actions and Pipelines</b></td>
<td>OCR the text in a specific location: <loc_155><loc_233><loc_206><loc_237></td>
<td>-</td>
</tr>
<tr>
<td>Identify element at: <loc_247><loc_482><loc_252><loc_486></td>
<td>-</td>
</tr>
<tr>
<td>Find all 'text' elements on the page, retrieve all section headers.</td>
<td>-</td>
</tr>
<tr>
<td>Detect footer elements on the page.</td>
<td>-</td>
</tr>
</table>
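As a sketch of how the short instructions above plug into the earlier transformers example (the instruction choice here is illustrative), only the text content of the user message changes:
```python
# Sketch: reuse the processor/model from the transformers example above,
# swapping the full-page instruction for a formula-specific one.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Convert formula to LaTeX."},
        ],
    },
]
```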
# Model Architecture:
The architecture of granite-docling-258m consists of the following components:
(1) Vision encoder: [siglip2-base-patch16-512](https://huggingface.co/google/siglip2-base-patch16-512).
(2) Vision-language connector: pixel shuffle projector (as in idefics3)
(3) Large language model: Granite 165M.
We built upon [Idefics3](https://huggingface.co/docs/transformers/en/model_doc/idefics3) to train our model. We incorporated DocTags into our LLM’s supervised fine-tuning (SFT) data to help the model become familiar with the format, enabling faster convergence and mitigating issues previously observed with SmolDocling.
The model was trained using the [nanoVLM](https://github.com/huggingface/nanoVLM) framework, which provides a lightweight and efficient training setup for vision-language models.
**Training Data**: Our training corpus consists of two principal sources: (1) publicly available datasets and (2) internally constructed synthetic datasets designed to elicit specific document understanding capabilities.
In particular, we incorporate:
* [**SynthCodeNet**](https://huggingface.co/datasets/ds4sd/SynthCodeNet) — a large-scale collection of synthetically rendered code snippets spanning over 50 programming languages
* [**SynthFormulaNet**](https://huggingface.co/datasets/ds4sd/SynthFormulaNet) — a dataset of synthetic mathematical expressions paired with ground-truth LaTeX representations
* [**SynthChartNet**](https://huggingface.co/datasets/ds4sd/SynthChartNet) — synthetic chart images annotated with structured table outputs
* [**DoclingMatix**](https://huggingface.co/datasets/HuggingFaceM4/DoclingMatix) — a curated corpus of real-world document pages sampled from diverse domains
**Infrastructure**: We train granite-docling-258m using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.
**Responsible Use and Limitations** Some use cases for Vision Language Models can trigger certain risks and ethical considerations, including but not limited to: bias and fairness, misinformation, and autonomous decision-making.
Although our alignment processes include safety considerations, the model may in some cases produce inaccurate, biased, offensive or unwanted responses to user prompts. Additionally, it remains uncertain whether smaller models are more susceptible
to hallucination in generation scenarios due to their reduced size, which could limit their ability to generate coherent and contextually accurate responses. This is an active area of research,
and we anticipate more rigorous exploration, comprehension, and mitigation in this domain. We urge the community to use granite-docling-258m responsibly and to avoid malicious use. We recommend using this model only as part of the Docling library.
More general vision tasks may pose higher inherent risks of triggering unwanted output. To enhance safety, we recommend using granite-docling-258m alongside Granite Guardian. Granite Guardian is a fine-tuned instruct model designed to detect and flag risks in prompts and responses across key dimensions outlined in the IBM AI Risk Atlas.
Its training, which includes both human-annotated and synthetic data informed by internal red-teaming, enables it to outperform similar open-source models on standard benchmarks, providing an additional layer of safety.
**Resources**
- ⭐️ Learn about the latest updates with Docling: https://docling-project.github.io/docling/#features
- 🚀 Get started with Docling concepts, integrations and tutorials: https://docling-project.github.io/docling/getting_started/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources
- 🖥️ Learn more about how to use Granite-Docling, explore the Docling library, and see what’s coming next for Docling in the release blog: https://ibm.com/new/announcements/granite-docling-end-to-end-document-conversion
## Troubleshooting
**Running with VLLM**
1. You receive `AttributeError: 'LlamaModel' object has no attribute 'wte'` when launching the model through VLLM.
With current versions of VLLM (including 0.10.2), support for the tied weights used in granite-docling is limited and breaks model loading. We provide a version with untied weights on the `untied` branch of this model repo.
To use the untied version, please pass the `revision` argument to VLLM:
```sh
# Serve the model through VLLM
$> vllm serve ibm-granite/granite-docling-258M --revision untied
```
```python
# If using the VLLM python SDK:
from vllm import LLM
...
llm = LLM(model=MODEL_PATH, revision="untied", limit_mm_per_prompt={"image": 1})
```
2. The model outputs only exclamation marks (i.e. "!!!!!!!!!!!!!!!").
This is seen on older NVIDIA GPUs, such as the T4 available in Google Colab, which lack support for the `bfloat16` format.
You can work around this by setting the `dtype` to `float32`.
```sh
# Serve the model through VLLM
$> vllm serve ibm-granite/granite-docling-258M --revision untied --dtype float32
```
```python
# If using the VLLM python SDK:
from vllm import LLM
...
llm = LLM(model=MODEL_PATH, revision="untied", limit_mm_per_prompt={"image": 1}, dtype="float32")
```
|
tomal66/qwen2.5-1.5b-sentiment-fpt-sft
|
tomal66
| 2025-09-23T08:51:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T08:51:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jadeleiyu/gpt-oss-120b_WM_wa
|
jadeleiyu
| 2025-09-23T08:48:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T08:48:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758617141
|
poolkiltzn
| 2025-09-23T08:47:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T08:46:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kizro/narrow-misalignment-test
|
kizro
| 2025-09-23T08:42:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T07:30:37Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yangheng/OmniGenome-52M
|
yangheng
| 2025-09-23T08:38:26Z | 286 | 0 |
transformers
|
[
"transformers",
"safetensors",
"omnigenome",
"GFM",
"OmniGenome",
"custom_code",
"rna",
"dna",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-02-14T20:15:02Z |
---
license: mit
language:
- rna
- dna
tags:
- GFM
- OmniGenome
---
# OmniGenome: RNA Sequence-Structure Alignment Foundation Model
## Model Description
**OmniGenome** is an advanced RNA foundation model that introduces sequence-structure alignment to genomic modeling. The model bridges the gap between RNA sequences and their secondary structures, enabling bidirectional mappings that improve the flow of genomic information between RNA sequences and structures. With OmniGenome, researchers can achieve improved performance in RNA-related tasks, such as RNA design, secondary structure prediction, and various downstream genomic tasks. It also demon...
- **Model type**: Transformer-based (52M and 186M parameter versions)
- **Languages**: RNA sequences and structures
- **Pretraining**: The model is pretrained on RNA sequences from over 1,000 plant species from the OneKP database. Secondary structures were predicted using ViennaRNA.
- **Key Features**:
- Seq2Str (Sequence to Structure) and Str2Seq (Structure to Sequence) mapping
- RNA design and secondary structure prediction
- Generalizability to DNA genomic tasks
## Intended Use
This model is ideal for:
- RNA secondary structure prediction
- RNA design via structure-to-sequence mapping
- Genomic sequence understanding tasks, such as mRNA degradation rate prediction
- Transfer learning to DNA tasks, including promoter strength prediction, gene expression regression, and more
It is a valuable tool for researchers in RNA genomics, bioinformatics, and molecular biology.
## Limitations
OmniGenome is primarily trained on RNA data and its transferability to other genomic data (like human DNA) may require further finetuning. While it demonstrates excellent performance in in-silico experiments, in-vivo validation is yet to be performed.
## Training Data
OmniGenome was pretrained on large-scale RNA sequences from the OneKP initiative, which contains transcriptome data from 1,124 plant species. These sequences were processed and cleaned to ensure data quality, and secondary structures were annotated using ViennaRNA. The alignment between sequences and structures was a core part of the training process, enabling both Seq2Str and Str2Seq capabilities.
## Evaluation Results
OmniGenome was evaluated on multiple in-silico RNA benchmarks, including the EternaV2 RNA design benchmark, where it solved 74% of the puzzles, compared to only 3% by previous foundation models. It also achieved state-of-the-art performance in tasks such as mRNA degradation rate prediction and secondary structure prediction. In DNA-related tasks, OmniGenome achieved high F1 scores in tasks like chromatin accessibility prediction and polyadenylation site classification, even without any DNA-specific...
## How to Use
Here’s an example of how to load and use OmniGenome on Hugging Face:
```python
from transformers import AutoTokenizer, AutoModel

# Load pre-trained tokenizer and model
# (trust_remote_code=True is likely required, since the repo ships custom model code)
tokenizer = AutoTokenizer.from_pretrained("anonymous8/OmniGenome", trust_remote_code=True)
model = AutoModel.from_pretrained("anonymous8/OmniGenome", trust_remote_code=True)
# Example RNA sequence input
input_seq = "AUGGCUACUUUCG"
# Tokenize input
inputs = tokenizer(input_seq, return_tensors="pt")
# Perform inference
outputs = model(**inputs)
```
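For downstream tasks, the hidden states in `outputs` can be pooled into a fixed-size sequence embedding. A minimal sketch, assuming the model exposes a standard `last_hidden_state` (mean pooling is an illustrative choice, not part of the official usage):
```python
# Mean-pool per-token embeddings into one vector per sequence
hidden = outputs.last_hidden_state   # (batch, seq_len, hidden_dim)
embedding = hidden.mean(dim=1)       # (batch, hidden_dim)
print(embedding.shape)
```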
## Citation
TBC
## License
This model is released under the MIT License, as declared in the model metadata above.
|
Simar28/Taxi-v3
|
Simar28
| 2025-09-23T08:35:48Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-23T08:35:44Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper provided by the Hugging Face Deep RL course utilities
model = load_from_hub(repo_id="Simar28/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
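To reproduce the reported mean reward, a minimal greedy rollout sketch follows; it assumes the pickled dict stores the table under a `"qtable"` key (the Deep RL course format) and the Gymnasium step API:
```python
import numpy as np

# Evaluate one episode greedily with the loaded Q-table (a sketch, not the official eval script)
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # exploit only, no exploration
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```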
|
Intel/DeepSeek-V3.1-Terminus-int4-mixed-AutoRound
|
Intel
| 2025-09-23T08:32:41Z | 0 | 1 | null |
[
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2309.05516",
"base_model:deepseek-ai/DeepSeek-V3.1-Terminus",
"base_model:quantized:deepseek-ai/DeepSeek-V3.1-Terminus",
"4-bit",
"auto-round",
"region:us"
] |
text-generation
| 2025-09-23T04:14:52Z |
---
base_model:
- deepseek-ai/DeepSeek-V3.1-Terminus
pipeline_tag: text-generation
---
## Model Details
This model is a mixed int4 model with group_size 128 and symmetric quantization of [deepseek-ai/DeepSeek-V3.1-Terminus](https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Terminus), generated by [intel/auto-round](https://github.com/intel/auto-round) **via RTN (no algorithm tuning)**.
Non-expert layers fall back to 8 bits; see the "Generate the model" section below for details.
Please follow the license of the original model.
**The `e_score_correction_bias` tensor is stored in BF16** because, when loaded in Transformers, its dtype is automatically converted to BF16; as a result, it is difficult to preserve it in FP32 within our tools.
Please use it with caution.
## How To Use
### INT4 Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import transformers
import torch
quantized_model_dir = "Intel/DeepSeek-V3.1-Terminus-int4-mixed-AutoRound"
model = AutoModelForCausalLM.from_pretrained(
quantized_model_dir,
torch_dtype=torch.bfloat16,
device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True)
prompts = [
"9.11和9.8哪个数字大",
"strawberry中有几个r?",
"There is a girl who likes adventure,",
"Please give a brief introduction of DeepSeek company.",
]
texts=[]
for prompt in prompts:
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
texts.append(text)
inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
outputs = model.generate(
input_ids=inputs["input_ids"].to(model.device),
attention_mask=inputs["attention_mask"].to(model.device),
max_length=200, ##change this to align with the official usage
num_return_sequences=1,
do_sample=False ##change this to align with the official usage
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs["input_ids"], outputs)
]
decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
for i, prompt in enumerate(prompts):
    print(f"Prompt: {prompt}")
    print(f"Generated: {decoded_outputs[i]}")
print("-"*50)
"""
Prompt: 9.11和9.8哪个数字大
Generated: 9.11 比 9.8 大。
比较两个小数时,先比较整数部分(都是 9),然后比较小数部分:
- 9.11 的小数部分是 0.11
- 9.8 的小数部分是 0.8
由于 0.11 小于 0.8,但这里需要对齐小数位比较:
9.11 = 9.11
9.8 = 9.80
比较 0.11 和 0.80,0.11 < 0.80,所以 9.11 < 9.8?
**不对,我纠正一下**:
实际上 9.11 的十分位是 1,而 9.8 的十分位是 8,因为 1 < 8,所以 9.
--------------------------------------------------
Prompt: strawberry中有几个r?
Generated: 我们来数一下单词 **strawberry** 中的字母 **r** 的数量。
单词:s t r a w b e r r y
逐个字母看:
- 第 3 个字母:r
- 第 8 个字母:r
- 第 9 个字母:r
一共有 **3** 个字母 **r**。
**答案:3**
--------------------------------------------------
Prompt: There is a girl who likes adventure,
Generated: That's a wonderful start to a story. A girl who likes adventure is a character full of potential.
What would you like to do with this idea?
* **Create a character profile?** We could give her a name, a backstory, and define what *kind* of adventure she seeks.
* **Name:** Elara, Maya, Kaelen, Juniper?
* **Type of Adventure:** Is she an explorer of ancient ruins, a solver of mysteries in her town, a traveler to fantastical worlds, or a protector of nature?
* **Start a story?** We can begin a narrative. Where is she, and what is the call to adventure?
* *Example:* "Elara traced the faded lines on the old map she'd found tucked inside a library book. It led to a part of the forest everyone
--------------------------------------------------
Prompt: Please give a brief introduction of DeepSeek company.
Generated: Of course! Here is a brief introduction to DeepSeek.
**DeepSeek** is a leading Chinese artificial intelligence research company, widely recognized for developing advanced large language models (LLMs).
Here are the key points about the company:
* **Core Focus:** Their primary mission is to achieve Artificial General Intelligence (AGI). They are best known for their series of "DeepSeek" models, which are among the most powerful and capable open-source LLMs in the world, competing with models from major global AI labs.
* **Key Products & Models:**
* **DeepSeek-V2:** A state-of-the-art mixture-of-experts (MoE) model that delivers high performance at a significantly lower cost for inference compared to similar-sized models.
* **DeepSeek Coder:** A family of models specifically designed for code generation and
--------------------------------------------------
"""
```
### Generate the model
auto-round v0.7.1 is required
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound
from auto_round.utils import llm_load_model
model_name = "deepseek-ai/DeepSeek-V3.1-Terminus"
model, tokenizer, _ = llm_load_model(model_name, trust_remote_code=False, device="cpu")
layer_config = {}
for n, m in model.named_modules():
if isinstance(m, torch.nn.Linear):
if "expert" in n and "shared_experts" not in n:
layer_config[n] = {"bits": 4}
print(n, 4)
elif n != "lm_head":
layer_config[n] = {"bits": 8}
print(n, 8)
ar = AutoRound(model, tokenizer=tokenizer, iters=0, layer_config=layer_config)
ar.quantize_and_save(format="auto_round", output_dir="tmp_autoround")
```
## Ethical Considerations and Limitations
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here are a couple of useful links to learn more about Intel's AI software:
- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
|
mimimimi2002/smolvla_libero_object_combined_force
|
mimimimi2002
| 2025-09-23T08:31:41Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:mimimimi2002/openvla_libero_object_force",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-23T08:20:15Z |
---
base_model: lerobot/smolvla_base
datasets: mimimimi2002/openvla_libero_object_force
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- robotics
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
tomal66/qwen2.5-1.5b-sentiment-sft
|
tomal66
| 2025-09-23T08:30:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T08:30:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
schmidberger/genon_llm
|
schmidberger
| 2025-09-23T08:29:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T08:26:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
caphe/paa18
|
caphe
| 2025-09-23T08:27:39Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-23T08:24:49Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TAUR-dev/M-test_scratch-sft
|
TAUR-dev
| 2025-09-23T08:27:04Z | 0 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-09-23T08:26:36Z |
# M-test_scratch-sft
This model was created as part of the **test_scratch** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: test_scratch
## Training Configuration
{"model_name_or_path": "meta-llama/Llama-3.2-1B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/home/ubuntu/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__skillfactory_pvv2_sft_llama_reflections5_formats_C_full", "template": "llama3", "cutoff_len": 16384, "max_samples": 1000000, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/data4/tmp/sedrick/skillfactory/temp/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__test_scratch__v1", "sf_eval_before_training": false, "sf_wandb_project": "test_scratch_sft", "sf_eval_steps": null, "run_name": "test_scratch_sft"}
## Experiment Tracking
🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__test_scratch__v1)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-test_scratch-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-test_scratch-sft")
```
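Since the checkpoint was fine-tuned from Llama-3.2-1B-Instruct with the `llama3` chat template, a chat-style generation sketch (the prompt text and generation length here are illustrative) might look like:
```python
# Build a chat prompt and generate a reply (illustrative prompt and length)
messages = [{"role": "user", "content": "Hello! What were you fine-tuned for?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```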
|
AXERA-TECH/Qwen3-Embedding-0.6B
|
AXERA-TECH
| 2025-09-23T08:26:54Z | 9 | 1 |
transformers
|
[
"transformers",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"text-embeddings-inference",
"base_model:Qwen/Qwen3-Embedding-0.6B",
"base_model:finetune:Qwen/Qwen3-Embedding-0.6B",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-06T09:14:32Z |
---
library_name: transformers
license: mit
base_model:
- Qwen/Qwen3-Embedding-0.6B
tags:
- transformers
- sentence-transformers
- sentence-similarity
- feature-extraction
- text-embeddings-inference
pipeline_tag: feature-extraction
---
# Qwen3-Embedding-0.6B
This version of Qwen3-Embedding-0.6B has been converted to run on the Axera NPU using **w8a16** quantization.
Compatible with Pulsar2 version: 4.1
## Convert tools links:
For those who are interested in model conversion, you can try to export axmodel through the original repo:
https://huggingface.co/Qwen/Qwen3-Embedding-0.6B
[Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)
## Support Platform
- AX650
- AX650N DEMO Board
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
### Per-subgraph forward latency
```sh
g1: 5.561 ms
g2: 9.140 ms
g3: 12.757 ms
g4: 16.446 ms
g5: 21.392 ms
g6: 23.712 ms
g7: 27.174 ms
g8: 30.897 ms
g9: 34.829 ms
```
- Shortest forward time: 5.561 ms
- Longest forward time: 181.908 ms
- Number of layers: 28
| Chip | TTFT | w8a16 |
|--|--|--|
| AX650 | 155.708 ms (128 tokens, shortest) | 0.82 tokens/sec |
| AX650 | 5093.42 ms (1024 tokens, longest) | 0.20 tokens/sec |
## How to use
Download all files from this repository to the device.
**If you are using an AX650 board**
```
root@ax650 ~/yongqiang/push_hugging_face/Qwen3-Embedding-0.6B # tree -L 1
.
├── config.json
├── infer_axmodel.py
├── qwen3_embedding_0.6b_axmodel
├── qwen3_embedding_0.6b_tokenizer
├── README.md
└── utils
3 directories, 3 files
```
#### Install transformers
```
# Requires transformers>=4.51.0
pip install transformers==4.51.0
```
#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro) or AX650N DEMO Board
```
root@ax650 ~/yongqiang/push_hugging_face/Qwen3-Embedding-0.6B # python3 infer_axmodel.py
Model loaded successfully!
slice_indices: [0]
Slice prefill done: 0
slice_indices: [0]
Slice prefill done: 0
slice_indices: [0]
Slice prefill done: 0
slice_indices: [0]
Slice prefill done: 0
[[0.7555467486381531, 0.1756950318813324], [0.4137178063392639, 0.4459586441516876]]
```
|
amandeepsinghccc/artur-model
|
amandeepsinghccc
| 2025-09-23T08:26:44Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T07:05:21Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** amandeepsinghccc
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tencent/SongPrep-7B
|
tencent
| 2025-09-23T08:25:19Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"automatic-speech-recognition",
"zh",
"en",
"arxiv:2509.17404",
"region:us"
] |
automatic-speech-recognition
| 2025-09-19T09:24:04Z |
---
language:
- zh
- en
pipeline_tag: automatic-speech-recognition
---
# SongPrep
<p align="center"><img src="img/logo.jpg" width="40%"></p>
<p align="center">
<a href="https://song-prep.github.io/demo/">Demo</a> | <a href="https://arxiv.org/abs/2509.17404">Paper</a> | <a href="https://github.com/tencent-ailab/songprep">Code</a> | <a href="https://huggingface.co/datasets/waytan22/SSLD-200">Dataset</a>
</p>
This repository is the official weight repository for SongPrep: A Preprocessing Framework and End-to-end Model for Full-song Structure Parsing and Lyrics Transcription. In this repository, we provide the SongPrep-7B model that has been trained on the Million Song Dataset.
## Model Versions
| Model | #Params | HuggingFace |
| :----------------------: | :----------------------: | :---------------------------------------------------: |
| SongPrep | 7B | you are here |
## Citation
```
@misc{tan2025songpreppreprocessingframeworkendtoend,
title={SongPrep: A Preprocessing Framework and End-to-end Model for Full-song Structure Parsing and Lyrics Transcription},
author={Wei Tan and Shun Lei and Huaicheng Zhang and Guangzheng Li and Yixuan Zhang and Hangting Chen and Jianwei Yu and Rongzhi Gu and Dong Yu},
year={2025},
eprint={2509.17404},
archivePrefix={arXiv},
primaryClass={eess.AS},
url={https://arxiv.org/abs/2509.17404},
}
```
## License
The code and weights in this repository are released under the terms of the [LICENSE](LICENSE) file.
|
Anhlq/qwen2.5-1.5b-exercise-instruct-23.09
|
Anhlq
| 2025-09-23T08:18:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T08:16:15Z |
---
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Anhlq
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-1.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
taronaeo/tinyllamas-BE
|
taronaeo
| 2025-09-23T08:16:53Z | 546 | 0 | null |
[
"gguf",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-09-11T07:27:42Z |
---
license: mit
---
This is a Llama 2 architecture model series trained on the TinyStories dataset, intended for use in the [llama2.c](https://github.com/karpathy/llama2.c) project for Big-Endian systems.
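llama2.c reads checkpoints as raw little-endian binary, so on big-endian systems the bytes must be interpreted (or pre-swapped) accordingly; a tiny Python sketch of the idea, with example header values (illustrative, not the repo's actual conversion script):
```python
import struct

# llama2.c stores a 7-int32 config header (dim, hidden_dim, n_layers,
# n_heads, n_kv_heads, vocab_size, seq_len) in little-endian order.
raw = struct.pack("<7i", 288, 768, 6, 6, 6, 32000, 256)  # example header values
print(struct.unpack("<7i", raw))  # correct on any host when the byte order is explicit
print(struct.unpack(">7i", raw))  # garbage: interpreting LE bytes as big-endian
```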
|
internlm/Intern-S1-mini-FP8
|
internlm
| 2025-09-23T08:15:08Z | 739 | 1 | null |
[
"safetensors",
"interns1",
"image-text-to-text",
"conversational",
"custom_code",
"arxiv:2508.15763",
"base_model:internlm/Intern-S1-mini",
"base_model:quantized:internlm/Intern-S1-mini",
"license:apache-2.0",
"fp8",
"region:us"
] |
image-text-to-text
| 2025-08-18T06:37:20Z |
---
license: apache-2.0
pipeline_tag: image-text-to-text
base_model:
- internlm/Intern-S1-mini
---
## Intern-S1-mini
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/642695e5274e7ad464c8a5ba/E43cgEXBRWjVJlU_-hdh6.png" />
<div> </div>
[💻Github Repo](https://github.com/InternLM/Intern-S1) • [🤗Model Collections](https://huggingface.co/collections/internlm/intern-s1-6882e325e8ac1c58ba108aa5) • [📜Technical Report](https://arxiv.org/abs/2508.15763) • [💬Online Chat](https://chat.intern-ai.org.cn/)
</div>
<p align="center">
👋 join us on <a href="https://discord.gg/xa29JuW87d" target="_blank">Discord</a> and <a href="https://cdn.vansin.top/intern-s1.jpg" target="_blank">WeChat</a>
</p>
## Introduction
We introduce **Intern-S1-mini**, a lightweight open-source multimodal reasoning model based on the same techniques as **[Intern-S1](https://huggingface.co/internlm/Intern-S1)**.
Built upon an 8B dense language model (Qwen3) and a 0.3B vision encoder (InternViT), Intern-S1-mini has been further pretrained on **5 trillion tokens** of multimodal data, including over **2.5 trillion scientific-domain tokens**. This enables the model to retain strong general capabilities while excelling in specialized scientific domains such as **interpreting chemical structures, understanding protein sequences, and planning compound synthesis routes**, making Intern-S1-mini a capable research assistant for real-world scientific applications.
## Features
- Strong performance across language and vision reasoning benchmarks, especially scientific tasks.
- Continuously pretrained on a massive 5T token dataset, with over 50% specialized scientific data, embedding deep domain expertise.
- Dynamic tokenizer enables native understanding of molecular formulas and protein sequences.
## Performance
We evaluate Intern-S1-mini on various benchmarks, including both general and scientific datasets, and report a performance comparison with recent VLMs and LLMs below.
| | | Intern-S1-mini | Qwen3-8B | GLM-4.1V | MiMo-VL-7B-RL-2508 |
|------------|----------------|-------------------|----------|----------|--------------------|
| General | MMLU-Pro | **74.78** | 73.7 | 57.1 | 73.93 |
| | MMMU | **72.33** | N/A | 69.9 | 70.4 |
| | MMStar | 65.2 | N/A | 71.5 | 72.9 |
| | GPQA | **65.15** | 62 | 50.32 | 60.35 |
| | AIME2024 | **84.58** | 76 | 36.2 | 72.6 |
| | AIME2025 | **80** | 67.3 | 32 | 64.4 |
| | MathVision | 51.41 | N/A | 53.9 | 54.5 |
| | MathVista | 70.3 | N/A | 80.7 | 79.4 |
| | IFEval | 81.15 | 85 | 71.53 | 71.4 |
| | | | | | |
| Scientific | SFE | 35.84 | N/A | 43.2 | 43.9 |
| | Physics | **28.76** | N/A | 28.3 | 28.2 |
| | SmolInstruct | **32.2** | 17.6 | 18.1 | 16.11 |
| | ChemBench | **76.47** | 61.1 | 56.2 | 66.78 |
| | MatBench | **61.55** | 45.24 | 54.3 | 46.9 |
| | MicroVQA | **56.62** | N/A | 50.2 | 50.96 |
| | ProteinLMBench | 58.47 | 59.1 | 58.3 | 59.8 |
| | MSEarthMCQ | **58.12** | N/A | 50.3 | 47.3 |
| | XLRS-Bench | **51.63** | N/A | 49.8 | 12.29 |
We use [OpenCompass](https://github.com/open-compass/OpenCompass/) and [VLMEvalKit](https://github.com/open-compass/vlmevalkit) to evaluate all models.
## Quick Start
### Sampling Parameters
We recommend the following sampling hyperparameters for best results:
```python
top_p = 1.0
top_k = 50
min_p = 0.0
temperature = 0.8
```
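With the Transformers demos below, these values can be passed straight to `generate`; a sketch, assuming `model` and `inputs` are prepared as in the following sections:
```python
# Sampling must be enabled for top_p/top_k/min_p/temperature to take effect
generate_ids = model.generate(
    **inputs,
    do_sample=True,
    top_p=1.0,
    top_k=50,
    min_p=0.0,
    temperature=0.8,
    max_new_tokens=32768,
)
```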
### Transformers
The following provides demo code illustrating how to generate based on text and multimodal inputs.
> **Please use transformers>=4.55.2 to ensure the model works normally.**
#### Text input
```python
from transformers import AutoProcessor, AutoModelForCausalLM
import torch
model_name = "internlm/Intern-S1-mini-FP8"
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True)
messages = [
{
"role": "user",
"content": [
{"type": "text", "text": "tell me about an interesting physical phenomenon."},
],
}
]
inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, dtype=torch.bfloat16)
generate_ids = model.generate(**inputs, max_new_tokens=32768)
decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True)
print(decoded_output)
```
#### Image input
```python
from transformers import AutoProcessor, AutoModelForCausalLM
import torch
model_name = "internlm/Intern-S1-mini-FP8"
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True)
messages = [
{
"role": "user",
"content": [
{"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
{"type": "text", "text": "Please describe the image explicitly."},
],
}
]
inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, dtype=torch.bfloat16)
generate_ids = model.generate(**inputs, max_new_tokens=32768)
decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True)
print(decoded_output)
```
#### Video input
Please ensure the decord video decoding library is installed via `pip install decord`. To avoid OOM, please install FlashAttention and use at least 2 GPUs.
```python
from transformers import AutoProcessor, AutoModelForCausalLM
import torch
model_name = "internlm/Intern-S1-mini-FP8"
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True)
messages = [
{
"role": "user",
"content": [
{
"type": "video",
"url": "https://huggingface.co/datasets/hf-internal-testing/fixtures_videos/resolve/main/tennis.mp4",
},
{"type": "text", "text": "What type of shot is the man performing?"},
],
}
]
inputs = processor.apply_chat_template(
messages,
return_tensors="pt",
add_generation_prompt=True,
video_load_backend="decord",
tokenize=True,
return_dict=True,
).to(model.device, dtype=torch.float16)
generate_ids = model.generate(**inputs, max_new_tokens=32768)
decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True)
print(decoded_output)
```
### Serving
The minimum hardware requirements for deploying Intern-S1 series models are:
| Model | A100(GPUs) | H800(GPUs) | H100(GPUs) | H200(GPUs) |
| :---------------------------------------------------------------------: | :--------: | :--------: | :--------: | :--------: |
| [internlm/Intern-S1-mini](https://huggingface.co/internlm/Intern-S1-mini) | 1 | 1 | 1 | 1 |
| [internlm/Intern-S1-mini-FP8](https://huggingface.co/internlm/Intern-S1-mini-FP8) | - | 1 | 1 | 1 |
You can utilize one of the following LLM inference frameworks to create an OpenAI compatible server:
#### [lmdeploy (>=0.9.2.post1)](https://github.com/InternLM/lmdeploy)
```bash
lmdeploy serve api_server internlm/Intern-S1-mini-FP8 --reasoning-parser intern-s1 --tool-call-parser intern-s1
```
#### [vllm (>=0.10.1)](https://github.com/vllm-project/vllm)
```bash
vllm serve internlm/Intern-S1-mini-FP8 --trust-remote-code
```
#### [sglang](https://github.com/sgl-project/sglang)
```bash
python3 -m sglang.launch_server \
--model-path internlm/Intern-S1-mini-FP8 \
--trust-remote-code \
--grammar-backend none
```
#### ollama for local deployment
```bash
# install ollama
curl -fsSL https://ollama.com/install.sh | sh
# fetch model
ollama pull internlm/interns1-mini
# run model
ollama run internlm/interns1-mini
# then use openai client to call on http://localhost:11434/v1
```
## Advanced Usage
### Tool Calling
Many Large Language Models (LLMs) now feature **Tool Calling**, a powerful capability that allows them to extend their functionality by interacting with external tools and APIs. This enables models to perform tasks like fetching up-to-the-minute information, running code, or calling functions within other applications.
A key advantage for developers is that a growing number of open-source LLMs are designed to be compatible with the OpenAI API. This means you can leverage the same familiar syntax and structure from the OpenAI library to implement tool calling with these open-source models. As a result, the code demonstrated in this tutorial is versatile—it works not just with OpenAI models, but with any model that follows the same interface standard.
To illustrate how this works, let's dive into a practical code example that uses tool calling to get the latest weather forecast (based on the LMDeploy API server).
```python
from openai import OpenAI
import json
def get_current_temperature(location: str, unit: str = "celsius"):
"""Get current temperature at a location.
Args:
location: The location to get the temperature for, in the format "City, State, Country".
unit: The unit to return the temperature in. Defaults to "celsius". (choices: ["celsius", "fahrenheit"])
Returns:
the temperature, the location, and the unit in a dict
"""
return {
"temperature": 26.1,
"location": location,
"unit": unit,
}
def get_temperature_date(location: str, date: str, unit: str = "celsius"):
"""Get temperature at a location and date.
Args:
location: The location to get the temperature for, in the format "City, State, Country".
date: The date to get the temperature for, in the format "Year-Month-Day".
unit: The unit to return the temperature in. Defaults to "celsius". (choices: ["celsius", "fahrenheit"])
Returns:
the temperature, the location, the date and the unit in a dict
"""
return {
"temperature": 25.9,
"location": location,
"date": date,
"unit": unit,
}
def get_function_by_name(name):
if name == "get_current_temperature":
return get_current_temperature
if name == "get_temperature_date":
return get_temperature_date
tools = [{
'type': 'function',
'function': {
'name': 'get_current_temperature',
'description': 'Get current temperature at a location.',
'parameters': {
'type': 'object',
'properties': {
'location': {
'type': 'string',
'description': 'The location to get the temperature for, in the format \'City, State, Country\'.'
},
'unit': {
'type': 'string',
'enum': [
'celsius',
'fahrenheit'
],
'description': 'The unit to return the temperature in. Defaults to \'celsius\'.'
}
},
'required': [
'location'
]
}
}
}, {
'type': 'function',
'function': {
'name': 'get_temperature_date',
'description': 'Get temperature at a location and date.',
'parameters': {
'type': 'object',
'properties': {
'location': {
'type': 'string',
'description': 'The location to get the temperature for, in the format \'City, State, Country\'.'
},
'date': {
'type': 'string',
'description': 'The date to get the temperature for, in the format \'Year-Month-Day\'.'
},
'unit': {
'type': 'string',
'enum': [
'celsius',
'fahrenheit'
],
'description': 'The unit to return the temperature in. Defaults to \'celsius\'.'
}
},
'required': [
'location',
'date'
]
}
}
}]
messages = [
{'role': 'user', 'content': 'Today is 2024-11-14, What\'s the temperature in San Francisco now? How about tomorrow?'}
]
openai_api_key = "EMPTY"
openai_api_base = "http://0.0.0.0:23333/v1"
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
model=model_name,
messages=messages,
max_tokens=32768,
temperature=0.8,
top_p=0.8,
stream=False,
extra_body=dict(spaces_between_special_tokens=False, enable_thinking=False),
tools=tools)
print(response.choices[0].message)
messages.append(response.choices[0].message)
for tool_call in response.choices[0].message.tool_calls:
tool_call_args = json.loads(tool_call.function.arguments)
tool_call_result = get_function_by_name(tool_call.function.name)(**tool_call_args)
tool_call_result = json.dumps(tool_call_result, ensure_ascii=False)
messages.append({
'role': 'tool',
'name': tool_call.function.name,
'content': tool_call_result,
'tool_call_id': tool_call.id
})
response = client.chat.completions.create(
model=model_name,
messages=messages,
temperature=0.8,
top_p=0.8,
stream=False,
extra_body=dict(spaces_between_special_tokens=False, enable_thinking=False),
tools=tools)
print(response.choices[0].message.content)
```
### Switching Between Thinking and Non-Thinking Modes
Intern-S1-mini enables thinking mode by default, enhancing the model's reasoning capabilities to generate higher-quality responses. This feature can be disabled by setting `enable_thinking=False` in `tokenizer.apply_chat_template`.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # think mode indicator
)
```
With LMDeploy serving Intern-S1-mini models, you can dynamically control the thinking mode by adjusting the `enable_thinking` parameter in your requests.
```python
from openai import OpenAI
import json
messages = [
{
'role': 'user',
'content': 'who are you'
}, {
'role': 'assistant',
'content': 'I am an AI'
}, {
'role': 'user',
'content': 'AGI is?'
}]
openai_api_key = "EMPTY"
openai_api_base = "http://0.0.0.0:23333/v1"
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
model=model_name,
messages=messages,
temperature=0.8,
top_p=0.8,
max_tokens=2048,
extra_body={
"enable_thinking": False,
}
)
print(json.dumps(response.model_dump(), indent=2, ensure_ascii=False))
```
For vLLM and SGLang users, configure this through:
```python
extra_body={
"chat_template_kwargs": {"enable_thinking": False}
}
```
## Fine-tuning
See this [documentation](https://github.com/InternLM/Intern-S1/blob/main/docs/sft.md) for more details.
## Citation
If you find this work useful, feel free to cite us.
```
@misc{bai2025interns1scientificmultimodalfoundation,
title={Intern-S1: A Scientific Multimodal Foundation Model},
author={Lei Bai and Zhongrui Cai and Maosong Cao and Weihan Cao and Chiyu Chen and Haojiong Chen and Kai Chen and Pengcheng Chen and Ying Chen and Yongkang Chen and Yu Cheng and Yu Cheng and Pei Chu and Tao Chu and Erfei Cui and Ganqu Cui and Long Cui and Ziyun Cui and Nianchen Deng and Ning Ding and Nanqin Dong and Peijie Dong and Shihan Dou and Sinan Du and Haodong Duan and Caihua Fan and Ben Gao and Changjiang Gao and Jianfei Gao and Songyang Gao and Yang Gao and Zhangwei Gao and Jiaye Ge and Qiming Ge and Lixin Gu and Yuzhe Gu and Aijia Guo and Qipeng Guo and Xu Guo and Conghui He and Junjun He and Yili Hong and Siyuan Hou and Caiyu Hu and Hanglei Hu and Jucheng Hu and Ming Hu and Zhouqi Hua and Haian Huang and Junhao Huang and Xu Huang and Zixian Huang and Zhe Jiang and Lingkai Kong and Linyang Li and Peiji Li and Pengze Li and Shuaibin Li and Tianbin Li and Wei Li and Yuqiang Li and Dahua Lin and Junyao Lin and Tianyi Lin and Zhishan Lin and Hongwei Liu and Jiangning Liu and Jiyao Liu and Junnan Liu and Kai Liu and Kaiwen Liu and Kuikun Liu and Shichun Liu and Shudong Liu and Wei Liu and Xinyao Liu and Yuhong Liu and Zhan Liu and Yinquan Lu and Haijun Lv and Hongxia Lv and Huijie Lv and Qidang Lv and Ying Lv and Chengqi Lyu and Chenglong Ma and Jianpeng Ma and Ren Ma and Runmin Ma and Runyuan Ma and Xinzhu Ma and Yichuan Ma and Zihan Ma and Sixuan Mi and Junzhi Ning and Wenchang Ning and Xinle Pang and Jiahui Peng and Runyu Peng and Yu Qiao and Jiantao Qiu and Xiaoye Qu and Yuan Qu and Yuchen Ren and Fukai Shang and Wenqi Shao and Junhao Shen and Shuaike Shen and Chunfeng Song and Demin Song and Diping Song and Chenlin Su and Weijie Su and Weigao Sun and Yu Sun and Qian Tan and Cheng Tang and Huanze Tang and Kexian Tang and Shixiang Tang and Jian Tong and Aoran Wang and Bin Wang and Dong Wang and Lintao Wang and Rui Wang and Weiyun Wang and Wenhai Wang and Yi Wang and Ziyi Wang and Ling-I Wu and Wen Wu and Yue Wu and Zijian Wu and Linchen Xiao and Shuhao Xing and Chao Xu and Huihui Xu and Jun Xu and Ruiliang Xu and Wanghan Xu and GanLin Yang and Yuming Yang and Haochen Ye and Jin Ye and Shenglong Ye and Jia Yu and Jiashuo Yu and Jing Yu and Fei Yuan and Bo Zhang and Chao Zhang and Chen Zhang and Hongjie Zhang and Jin Zhang and Qiaosheng Zhang and Qiuyinzhe Zhang and Songyang Zhang and Taolin Zhang and Wenlong Zhang and Wenwei Zhang and Yechen Zhang and Ziyang Zhang and Haiteng Zhao and Qian Zhao and Xiangyu Zhao and Xiangyu Zhao and Bowen Zhou and Dongzhan Zhou and Peiheng Zhou and Yuhao Zhou and Yunhua Zhou and Dongsheng Zhu and Lin Zhu and Yicheng Zou},
year={2025},
eprint={2508.15763},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.15763},
}
```
|
htNghiaaa/DSC25-qwen2.5-7b-finetuned-1-merged
|
htNghiaaa
| 2025-09-23T08:14:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T08:06:59Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** htNghiaaa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
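## Quick start
A minimal loading sketch with Unsloth; `max_seq_length` and 4-bit loading are assumptions chosen to mirror the 4-bit base model, not settings documented by the author.
```python
from unsloth import FastLanguageModel

# Load the merged checkpoint with Unsloth (sketch; adjust to your hardware).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="htNghiaaa/DSC25-qwen2.5-7b-finetuned-1-merged",
    max_seq_length=2048,  # assumed value
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable inference mode
```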
|
ZYXue/2025_09_22_23_28_36_PDT
|
ZYXue
| 2025-09-23T08:11:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T06:31:17Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: 2025_09_22_23_28_36_PDT
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 2025_09_22_23_28_36_PDT
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ZYXue/2025_09_22_23_28_36_PDT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu126
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tamewild/4b_v122_merged_e5
|
tamewild
| 2025-09-23T08:11:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T08:09:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KarusG/blockassist
|
KarusG
| 2025-09-23T08:08:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scaly sniffing deer",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T09:25:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scaly sniffing deer
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tkcho/domain_5288c2abfcb02a2a39e7f96dbf7ab63a
|
tkcho
| 2025-09-23T08:03:24Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T23:25:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pepijn223/pi05_libero_fp32
|
pepijn223
| 2025-09-23T08:00:03Z | 64 | 1 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-09T15:23:56Z |
# π₀.₅ - Libero
This is a PyTorch version of the π₀.₅ `pi05_libero` model, converted from the original JAX/Flax implementation.
## Model Details
- **Architecture**: PI0.5 (Vision-Language-Action model with discrete state input)
- **Model Type**: PI0.5
- **Domain**: LIBERO (diverse manipulation tasks)
- **Precision**: 32-bit floating point (fp32)
- **Action Dimension**: 32
- **Vision Model**: PaliGemma (gemma_2b)
- **Action Expert**: gemma_300m
## Key Features
- **Discrete State Input**: Uses discrete language tokens for state representation
- **Flow Matching**: Utilizes adaRMSNorm for timestep injection in action expert
- **Enhanced Action Modeling**: Improved action prediction with flow matching approach
## Conversion Details
This model was converted from JAX to PyTorch using the OpenPI conversion script:
```bash
python examples/convert_jax_model_to_pytorch.py \
--checkpoint_dir /pi05_base \
--config_name pi05_libero \
--output_path /pi05_base/pytorch/fp32/ \
--precision float32
```
## Usage
```python
from openpi.models_pytorch.pi0_pytorch import PI0Pytorch
import torch
# Load the model
model = PI0Pytorch.from_pretrained("pepijn223/pi05_libero_fp32")
# The model expects inputs in the format:
# - images: torch.Tensor of shape [batch, height, width, channels]
# - text: tokenized text prompts
# - proprioceptive_state: robot state information (if applicable)
```
## Model Architecture
The model consists of:
1. **Vision Encoder**: PaliGemma-based vision processing
2. **Language Encoder**: Text prompt understanding
3. **Action Expert**: Specialized network for action prediction
4. **Integration Layer**: Combines multimodal information for action output
## Training Data
This model was trained on robotics datasets appropriate for its domain:
- **DROID models**: Trained on diverse robot manipulation data
- **ALOHA models**: Trained on bimanual manipulation tasks
- **LIBERO models**: Trained on diverse tabletop manipulation scenarios
- **Base models**: Trained on general robotics datasets
## Limitations
- Model performance depends on similarity between deployment and training environments
- May require domain-specific fine-tuning for optimal performance
- Action space must match the trained action dimension (32)
## Citation
If you use this model, please cite the original OpenPI work:
```bibtex
@article{openpi2024,
title={Open-World Robotic Manipulation with Vision-Language-Action Models},
author={Physical Intelligence},
year={2024},
url={https://github.com/Physical-Intelligence/openpi}
}
```
## Original Repository
[OpenPI GitHub Repository](https://github.com/Physical-Intelligence/openpi)
## License
This model follows the same license as the original OpenPI repository.
|
shubhamprshr/Qwen2.5-3B-Instruct_blocksworld1246_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200
|
shubhamprshr
| 2025-09-23T07:59:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"grpo",
"trl",
"conversational",
"dataset:blocksworld-dataset",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T23:59:34Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
datasets: blocksworld-dataset
library_name: transformers
model_name: Qwen2.5-3B-Instruct_blocksworld1246_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen2.5-3B-Instruct_blocksworld1246_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [blocksworld-dataset](https://huggingface.co/datasets/blocksworld-dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shubhamprshr/Qwen2.5-3B-Instruct_blocksworld1246_grpo_vrex_0.5_0.5_SEC1.0DRO0.0G0.0_minp0.0_1200", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shubhamprshr27-tamu/auto/runs/6to8yztb)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
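For context, a GRPO run in TRL follows the general shape sketched below; the toy dataset and length-based reward are illustrative placeholders, not the actual blocksworld setup used for this model.
```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Illustrative toy dataset — the real blocksworld data is not reproduced here.
train_dataset = Dataset.from_dict({"prompt": ["Stack block A on block B."]})

def reward_len(completions, **kwargs):
    # Placeholder reward: prefer shorter completions.
    return [-float(len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-blocksworld"),
    train_dataset=train_dataset,
)
trainer.train()
```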
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.1
- Pytorch: 2.7.0
- Datasets: 4.1.1
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
KKRRLL/flux-kontext-lora2
|
KKRRLL
| 2025-09-23T07:58:31Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-23T07:58:20Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: undefined
instance_prompt:
license: other
---
# flux kontext lora2
<Gallery />
## Model description
## Trigger words
You should use `` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/KKRRLL/flux-kontext-lora2/tree/main) them in the Files & versions tab.
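## Use it with diffusers
A minimal loading sketch; since this card does not pin a base model, the `black-forest-labs/FLUX.1-Kontext-dev` checkpoint below is an assumption.
```python
import torch
from diffusers import FluxKontextPipeline

# Sketch: load an assumed FLUX.1 Kontext base and apply this LoRA on top.
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("KKRRLL/flux-kontext-lora2")
pipe.to("cuda")
```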
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-kontext-trainer](https://fal.ai/models/fal-ai/flux-kontext-trainer).
|
tkcho/domain_aa4f8c8506ed7a782b9686c074205444
|
tkcho
| 2025-09-23T07:58:28Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-05T23:25:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
msamilim/bart-cnn-samsum-finetuned
|
msamilim
| 2025-09-23T07:57:51Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T07:56:27Z |
---
library_name: transformers
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
model-index:
- name: bart-cnn-samsum-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-samsum-finetuned
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1359
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
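These settings correspond roughly to the `Seq2SeqTrainingArguments` sketched below; this is a reconstruction from the list above, not the author's actual training script, and `output_dir` is an assumption.
```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters listed above.
args = Seq2SeqTrainingArguments(
    output_dir="bart-cnn-samsum-finetuned",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```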
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1189 | 1.0 | 30 | 0.1359 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.1.1
- Tokenizers 0.22.0
|
guangyaoz/dpo
|
guangyaoz
| 2025-09-23T07:56:38Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dpo",
"trl",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-07-31T05:09:42Z |
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: dpo
tags:
- generated_from_trainer
- dpo
- trl
licence: license
---
# Model Card for dpo
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="guangyaoz/dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
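For context, a DPO run in TRL has the general shape sketched below; the preference pairs are illustrative placeholders, since the actual training data is not documented in this card.
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Qwen/Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Illustrative preference pairs — not the data used to train this model.
train_dataset = Dataset.from_dict({
    "prompt": ["What is the capital of France?"],
    "chosen": ["Paris."],
    "rejected": ["London."],
})

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo"),
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```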
### Framework versions
- TRL: 0.20.0
- Transformers: 4.53.2
- Pytorch: 2.7.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758614050
|
poolkiltzn
| 2025-09-23T07:55:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T07:55:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
romolocaponera/ppo-PyramidsRND-1
|
romolocaponera
| 2025-09-23T07:40:10Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2025-09-23T07:40:06Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: romolocaponera/ppo-PyramidsRND-1
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64-0922195521-epoch-5
|
vectorzhou
| 2025-09-23T07:39:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T06:28:26Z |
---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---
# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0.01-mnt64-0922195521-epoch-5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/09vdah42)
This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite Extragradient as:
```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
author={Runlong Zhou and Maryam Fazel and Simon S. Du},
year={2025},
eprint={2503.08942},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.08942},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
manycore-research/SpatialGen-1.0
|
manycore-research
| 2025-09-23T07:39:11Z | 214 | 21 |
diffusers
|
[
"diffusers",
"safetensors",
"image-to-3d",
"dataset:manycore-research/SpatialGen-Testset",
"arxiv:2509.14981",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"diffusers:SpatialGenDiffusionPipeline",
"region:us"
] |
image-to-3d
| 2025-08-20T13:47:57Z |
---
base_model:
- stabilityai/stable-diffusion-2-1
datasets:
- manycore-research/SpatialGen-Testset
license: creativeml-openrail-m
pipeline_tag: image-to-3d
---
# SpatialGen: Layout-guided 3D Indoor Scene Generation
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<picture>
<source srcset="https://cdn-uploads.huggingface.co/production/uploads/6437c0ead38ce48bdd4b0067/myrWYVNd4m-DuxV39VQZ0.png" media="(prefers-color-scheme: dark)">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6437c0ead38ce48bdd4b0067/QQvDtmokH4ZjwH0wppqFC.png" width="60%" alt="SpatialGen"/>
</picture>
</div>
<hr style="margin-top: 0; margin-bottom: 8px;">
<div align="center" style="margin-top: 0; padding-top: 0; line-height: 1;">
<a href="https://manycore-research.github.io/SpatialGen" target="_blank" style="margin: 2px;"><img alt="Project"
src="https://img.shields.io/badge/🌐%20Project-SpatialGen-ffc107?color=42a5f5&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
<a href="https://arxiv.org/abs/2509.14981" target="_blank" style="margin: 2px;"><img alt="arXiv"
src="https://img.shields.io/badge/arXiv-SpatialGen-b31b1b?logo=arxiv&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
<a href="https://github.com/manycore-research/SpatialGen" target="_blank" style="margin: 2px;"><img alt="GitHub"
src="https://img.shields.io/badge/GitHub-SpatialGen-24292e?logo=github&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
<a href="https://huggingface.co/manycore-research/SpatialGen-1.0" target="_blank" style="margin: 2px;"><img alt="Hugging Face"
src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-SpatialGen-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>
</div>
<div align="center">
| Image-to-Scene Results | Text-to-Scene Results |
| :--------------------------------------: | :----------------------------------------: |
|  |  |
<p>TL;DR: Given a 3D semantic layout, SpatialGen can generate a 3D indoor scene conditioned on either a reference image (left) or a textual description (right) using a multi-view, multi-modal diffusion model.</p>
</div>
## ✨ News
- [Sep, 2025] We released the SpatialGen paper!
- [Aug, 2025] Initial release of SpatialGen-1.0!
## 📋 Release Plan
- [x] Provide inference code of SpatialGen.
- [ ] Provide training instruction for SpatialGen.
- [ ] Release SpatialGen dataset.
## SpatialGen Models
<div align="center">
| **Model** | **Download** |
| :-----------------------: | -------------------------------------------------------------------------------------|
| SpatialGen-1.0 | [🤗 HuggingFace](https://huggingface.co/manycore-research/SpatialGen-1.0) |
| FLUX.1-Layout-ControlNet | [🤗 HuggingFace](https://huggingface.co/manycore-research/FLUX.1-Layout-ControlNet) |
| FLUX.1-Wireframe-dev-lora | [🤗 HuggingFace](https://huggingface.co/manycore-research/FLUX.1-Wireframe-dev-lora) |
</div>
## Usage
### 🔧 Installation
Tested with the following environment:
* Python 3.10
* PyTorch 2.3.1
* CUDA Version 12.1
```bash
# clone the repository
git clone https://github.com/manycore-research/SpatialGen.git
cd SpatialGen
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
# Optional: fix the FLUX inference bug (https://github.com/vllm-project/vllm/issues/4392)
pip install nvidia-cublas-cu12==12.4.5.8
```
### 📊 Dataset
We provide the [SpatialGen-Testset](https://huggingface.co/datasets/manycore-research/SpatialGen-Testset) of 48 rooms, each labeled with a 3D layout, together with 4.8K rendered images (48 × 100 views, covering RGB, normal, depth, and semantic maps) for MVD inference.
### Inference
```bash
# Single image-to-3D Scene
bash scripts/infer_spatialgen_i2s.sh
# Text-to-image-to-3D Scene
# in captions/spatialgen_testset_captions.jsonl, we provide text prompts of different styles for each room,
# choose a pair of scene_id and prompt to run the text2scene experiment
bash scripts/infer_spatialgen_t2s.sh
```
## License
[SpatialGen-1.0](https://huggingface.co/manycore-research/SpatialGen-1.0) is derived from [Stable-Diffusion-v2.1](https://github.com/Stability-AI/stablediffusion), which is licensed under the [CreativeML Open RAIL++-M License](https://github.com/Stability-AI/stablediffusion/blob/main/LICENSE-MODEL). [FLUX.1-Layout-ControlNet](https://huggingface.co/manycore-research/FLUX.1-Layout-ControlNet) is licensed under the [FLUX.1-dev Non-Commercial License](https://github.com/black-forest-labs/flux/blob/main/model_licenses/LICENSE-FLUX1-dev).
## Acknowledgements
We would like to thank the following projects that made this work possible:
[DiffSplat](https://github.com/chenguolin/DiffSplat) | [SD 2.1](https://github.com/Stability-AI/stablediffusion) | [TAESD](https://github.com/madebyollin/taesd) | [FLUX](https://github.com/black-forest-labs/flux/) | [SpatialLM](https://github.com/manycore-research/SpatialLM)
## Citation
```bibtex
@article{SpatialGen,
title = {SpatialGen: Layout-guided 3D Indoor Scene Generation},
author = {Fang, Chuan and Li, Heng and Liang, Yixu and Zheng, Jia and Mao, Yongsen and Liu, Yuan and Tang, Rui and Zhou, Zihan and Tan, Ping},
journal = {arXiv preprint},
year = {2025},
eprint = {2509.14981},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
```
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758612814
|
poolkiltzn
| 2025-09-23T07:34:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T07:34:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DennisS1/dgy
|
DennisS1
| 2025-09-23T07:34:28Z | 24 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"region:us"
] |
text-to-image
| 2025-09-23T07:32:22Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/Screen Shot 2025-09-23 at 5.32.18 pm.png
text: Screenshot
base_model: Qwen/Qwen-Image
instance_prompt: doggy
---
# dgy
<Gallery />
## Trigger words
You should use `doggy` to trigger the image generation.
## Download model
[Download](/DennisS1/dgy/tree/main) them in the Files & versions tab.
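## Use it with diffusers
A minimal loading sketch, assuming a recent diffusers release with Qwen-Image LoRA support; the dtype and device settings are assumptions.
```python
import torch
from diffusers import DiffusionPipeline

# Sketch: load the Qwen-Image base model and apply this LoRA.
pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("DennisS1/dgy")
pipe.to("cuda")

# "doggy" is the trigger word documented above.
image = pipe(prompt="doggy sitting on a beach").images[0]
image.save("doggy.png")
```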
|
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-EGPO-0.1-mnt64-0922195506-epoch-5
|
vectorzhou
| 2025-09-23T07:33:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T06:22:23Z |
---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-EGPO-0.1-mnt64
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---
# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-EGPO-0.1-mnt64
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-EGPO-0.1-mnt64-0922195506-epoch-5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/2zoaj66c)
This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite Extragradient as:
```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
author={Runlong Zhou and Maryam Fazel and Simon S. Du},
year={2025},
eprint={2503.08942},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.08942},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|