| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-22 18:29:56) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 570 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-22 18:28:16) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
mantovanima/q-FrozenLake-v1-8x8-slippery
|
mantovanima
| 2025-09-21T19:18:51Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-21T19:18:47Z |
---
tags:
- FrozenLake-v1-8x8
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8
type: FrozenLake-v1-8x8
metrics:
- type: mean_reward
value: 0.45 +/- 0.50
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is assumed to be the pickle-loading helper from the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="mantovanima/q-FrozenLake-v1-8x8-slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add extra environment kwargs (is_slippery=False, etc.)
env = gym.make(model["env_id"])
```
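Continuing from the snippet above, a minimal rollout sketch, assuming (as in the Hugging Face Deep RL course artifacts) that `model` is a dict holding a `"qtable"` NumPy array and that `gym` follows the Gymnasium step API:

```python
import numpy as np

# Hypothetical greedy evaluation episode; model["qtable"] is assumed to have
# shape (n_states, n_actions).
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # best known action
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward}")
```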
|
AI-Sweden-Models/ModernBERT-large
|
AI-Sweden-Models
| 2025-09-21T19:11:09Z | 1,288 | 5 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"masked-lm",
"long-context",
"sv",
"no",
"da",
"is",
"arxiv:2303.17183",
"arxiv:2410.04456",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2025-02-19T12:54:55Z |
---
library_name: transformers
license: apache-2.0
language:
- sv
- 'no'
- da
- is
tags:
- masked-lm
- fill-mask
- long-context
- modernbert
pipeline_tag: fill-mask
inference: false
base_model: answerdotai/ModernBERT-large
---
## Overview
This checkpoint continues the pre-training of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on Scandinavian text, extending the model's knowledge with ~1.2 trillion additional masked-language-model (MLM) tokens drawn from [The Nordic Pile](https://arxiv.org/pdf/2303.17183) and [SWEb](https://arxiv.org/pdf/2410.04456) while preserving the original 8k token context window.
This is a **research artefact** and is only intended for **research purposes**.
Our tokenizer is trained from scratch on a subset of 11 985 103 472 tokens.
The training is done in one stage with 8192 tokens per sample for the whole run.
## Data Sources
| Corpus | Size | Selected Languages | Highlights |
|---|---|---|---|
| **The Nordic Pile** | 1.2 TB raw text | sv, no, da, is | Nine diverse categories (CC, Wikipedia, Books, Code, etc.), filtered and deduplicated for high quality |
| **SWEb** | 1 T+ tokens (~3.6 TB) | sv, no, da, is | 98 Common-Crawl snapshots with model-based HTML extraction; 1.2 B documents |
## Training Setup
| Setting | Value |
|---|---|
| Parameters | 395 M |
| Context length | 8 192 tokens (RoPE + local-global attention) |
| Tokens processed | 1.20 × 10<sup>12</sup> |
| Tokens per batch | 1 572 864 (192 sequences × 8 192 tokens) |
| Global batch | 192 sequences (micro-batch = 3) |
| Optimizer & schedule | Decoupled StableAdamW, lr 2e-4, cosine decay (1 % warm-up) |
| Precision | AMP-bf16 |
| Hardware | 8 nodes × 8 AMD MI250X GPUs (64 GPUs) on the EuroHPC **LUMI-G** system |
See training details [here](https://github.com/timpal0l/ModernBERT/blob/main/training/trainer_lumi.yaml).
## Training Stats
```text
[token=1198511677292/1198510347252]:
Train time/batch: 873585
Train time/sample: 167728320
Train time/batch_in_epoch: 3558
Train time/sample_in_epoch: 683136
Train time/token: 1198510256276
Train time/token_in_epoch: 4882888303
Train trainer/device_train_microbatch_size: 3
Train loss/train/total: 0.7730
Train throughput/batches_per_sec: 0.6293
Train throughput/samples_per_sec: 120.8212
Train throughput/device/batches_per_sec: 0.0098
Train throughput/device/samples_per_sec: 1.8878
Train throughput/tokens_per_sec: 865578.9851
Train throughput/device/tokens_per_sec: 13524.6716
Train time/train: 385.2930
Train time/val: 0.0000
Train time/total: 385.2930
Train lr-StableAdamW/group0: 0.0000
Train lr-StableAdamW/group1: 0.0000
```
## Intended Use
This is a **research artefact** and is only intended for **research purposes**.
* Fill-mask inference, embedding extraction and fine-tuning for Scandinavian downstream NLP tasks (classification, NER, QA, etc.).
* Drop-in replacement for BERT-style encoders (omit `token_type_ids`).
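For the embedding-extraction use mentioned above, a minimal sketch (mean pooling over the last hidden state is an assumption here, not a documented recipe):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AI-Sweden-Models/ModernBERT-large")
model = AutoModel.from_pretrained("AI-Sweden-Models/ModernBERT-large")

inputs = tokenizer("Huvudstaden i Sverige är Stockholm.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # (1, seq_len, hidden_size)
mask = inputs["attention_mask"].unsqueeze(-1)    # (1, seq_len, 1)
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling
print(embedding.shape)
```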
## Fill-mask
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='AI-Sweden-Models/ModernBERT-large')
unmasker("Huvudstaden i Sverige är [MASK].")
```
```python
[{'score': 0.5732529759407043,
  'token': 2961,
  'token_str': ' Stockholm',
  'sequence': 'Huvudstaden i Sverige är Stockholm.'},
 {'score': 0.06222670152783394,
  'token': 4481,
  'token_str': ' Göteborg',
  'sequence': 'Huvudstaden i Sverige är Göteborg.'},
 {'score': 0.02539575845003128,
  'token': 5882,
  'token_str': ' Malmö',
  'sequence': 'Huvudstaden i Sverige är Malmö.'},
 {'score': 0.024683712050318718,
  'token': 19931,
  'token_str': ' Norrköping',
  'sequence': 'Huvudstaden i Sverige är Norrköping.'},
 {'score': 0.02418600209057331,
  'token': 28202,
  'token_str': ' Solna',
  'sequence': 'Huvudstaden i Sverige är Solna.'}]
```
## Limitations & Biases
* Web corpora can contain noise, stereotypes and sensitive content despite filtering.
* RoPE extrapolation beyond 8k tokens is untested and may degrade.
## Code to reproduce
* [Training](https://github.com/timpal0l/ModernBERT/tree/main/training)
* [Data Processing](https://github.com/timpal0l/ModernBERT/tree/main/tokenizer)
|
fatmhd1995/ft_phi35_jd_inclusive_detection_21092025
|
fatmhd1995
| 2025-09-21T19:06:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T18:24:08Z |
---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** fatmhd1995
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RTannous/test-gemma3-vision
|
RTannous
| 2025-09-21T18:29:41Z | 0 | 0 | null |
[
"gguf",
"gemma3",
"llama.cpp",
"unsloth",
"vision-language-model",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-21T18:29:21Z |
---
tags:
- gguf
- llama.cpp
- unsloth
- vision-language-model
---
# test-gemma3-vision - GGUF
This model was converted to GGUF format using [Unsloth](https://github.com/unslothai/unsloth).
**Example usage**
For text-only LLMs: `llama-cli --hf <repo_id>/<model_name> -p "why is the sky blue?"`
For multimodal models: `llama-mtmd-cli -m model_name.gguf --mmproj mmproj_file.gguf`
## Available Quantizations
- `gemma-3-4b-it.Q8_0.gguf`
## ⚠️ Ollama Note for Vision Models
**Important:** Ollama currently does not support separate mmproj files for vision models.
To create an Ollama model from this vision model:
1. Download the bf16 merged model (not the GGUF)
2. Place the `Modelfile` in the same directory as the bf16 merged model
3. Run: `ollama create model_name -f ./Modelfile`
(Replace `model_name` with your desired name)
This will create a unified model that Ollama can use.
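A hedged sketch of those steps (the directory and model names are placeholders, and Ollama's safetensors import is assumed to support this architecture):

```shell
# Hypothetical: write a Modelfile that points at the downloaded bf16 merged
# model directory, then register and run it with Ollama.
cat > Modelfile <<'EOF'
FROM ./test-gemma3-vision-bf16
EOF
ollama create test-gemma3-vision -f ./Modelfile
ollama run test-gemma3-vision
```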
|
mrhomie/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-agile_tall_wildebeest
|
mrhomie
| 2025-09-21T18:25:50Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am agile_tall_wildebeest",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T11:10:19Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am agile_tall_wildebeest
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Feraxx/Qwen3-0.6B-Gensyn-Swarm-soft_lumbering_quail
|
Feraxx
| 2025-09-21T18:25:49Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am soft_lumbering_quail",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T23:53:31Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am soft_lumbering_quail
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yuuutre/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-enormous_bold_mule
|
yuuutre
| 2025-09-21T18:25:01Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am enormous_bold_mule",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T15:13:41Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am enormous_bold_mule
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Letz-MT-Llama-3.2-3B-en-uk-GGUF
|
mradermacher
| 2025-09-21T18:24:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-21T18:24:38Z |
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Volavion/Letz-MT-Llama-3.2-3B-en-uk
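A hedged usage sketch with llama.cpp, picking one of the quant tags listed in the metadata comments above (the English-to-Ukrainian prompt is an assumption based on the model name):

```shell
# Hypothetical: let llama.cpp pull the Q4_K_M quant directly from this repo.
llama-cli -hf mradermacher/Letz-MT-Llama-3.2-3B-en-uk-GGUF:Q4_K_M \
  -p "Translate to Ukrainian: The weather is nice today."
```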
|
flin775/UI-Tars-1.5-7B-4bit-mlx
|
flin775
| 2025-09-21T18:24:29Z | 0 | 0 | null |
[
"safetensors",
"qwen2_5_vl",
"arxiv:2404.07972",
"arxiv:2504.07981",
"base_model:ByteDance-Seed/UI-TARS-1.5-7B",
"base_model:quantized:ByteDance-Seed/UI-TARS-1.5-7B",
"license:apache-2.0",
"4-bit",
"region:us"
] | null | 2025-09-21T12:40:31Z |
---
license: apache-2.0
base_model:
- ByteDance-Seed/UI-TARS-1.5-7B
---
This model was converted by mlx_vlm from [ByteDance-Seed/UI-TARS-1.5-7B](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B).
## Model Description
UI-TARS-1.5 is ByteDance's open-source multimodal agent built upon a powerful vision-language model. It is capable of effectively performing diverse tasks within virtual worlds.
The released UI-TARS-1.5-7B focuses primarily on enhancing general computer-use capabilities and is not specifically optimized for game-based scenarios, where UI-TARS-1.5 still holds a significant advantage.
| **Benchmark Type** | **Benchmark** | **UI-TARS-1.5-7B** | **UI-TARS-1.5** |
|--------------------|------------------------------------|--------------------|-----------------|
| Computer Use | [OSWorld](https://arxiv.org/abs/2404.07972) | 27.5 | **42.5** |
| GUI Grounding | [ScreenSpotPro](https://arxiv.org/pdf/2504.07981v1) | 49.6 | **61.6** |
P.S. These are the scores of UI-TARS-1.5-7B and UI-TARS-1.5 on OSWorld and ScreenSpotPro.
## Quick Start
```shell
mlx_vlm.generate --model flin775/UI-Tars-1.5-7B-4bit-mlx \
--max-tokens 1024 \
--temperature 0.0 \
--prompt "List all contacts' names and their corresponding grounding boxes([x1, y1, x2, y2]) from the left sidebar of the IM chat interface, return the results in JSON format." \
--image https://wechat.qpic.cn/uploads/2016/05/WeChat-Windows-2.11.jpg
```
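The same generation is also reachable from Python; a sketch assuming mlx_vlm's top-level `load`/`generate` API (argument names vary across mlx_vlm versions, so treat this as an outline rather than a fixed signature):

```python
from mlx_vlm import load, generate

# Hypothetical Python equivalent of the CLI call above; the image path is a placeholder.
model, processor = load("flin775/UI-Tars-1.5-7B-4bit-mlx")
output = generate(
    model,
    processor,
    prompt="List all contacts' names from the left sidebar, in JSON format.",
    image="screenshot.png",
    max_tokens=1024,
    temperature=0.0,
)
print(output)
```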
|
shinyobjectz/sllm-shady
|
shinyobjectz
| 2025-09-21T18:20:44Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-0.6B",
"base_model:adapter:Qwen/Qwen3-0.6B",
"region:us"
] | null | 2025-09-20T21:25:33Z |
---
base_model: Qwen/Qwen3-0.6B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
lfhe/FLock-Arena-Task-15-Carbonia
|
lfhe
| 2025-09-21T18:20:37Z | 324 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:microsoft/Phi-4-mini-instruct",
"flock-train",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:microsoft/Phi-4-mini-instruct",
"region:us"
] |
text-generation
| 2025-02-21T01:26:02Z |
---
base_model: microsoft/Phi-4-mini-instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:microsoft/Phi-4-mini-instruct
- flock-train
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
strangerzonehf/Flux-Ultimate-LoRA-Collection
|
strangerzonehf
| 2025-09-21T18:19:02Z | 31,819 | 108 |
diffusers
|
[
"diffusers",
"Flux.1-Dev",
"lora",
"Collections",
"SOTA",
"Realism",
"Diffusion",
"art",
"FLUX",
"image-to-image",
"text-to-image",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"doi:10.57967/hf/5698",
"license:other",
"region:us"
] |
text-to-image
| 2024-11-18T06:47:02Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
widget:
- text: Stranger Zones Ultimate LoRA Collection
output:
url: images/11.png
base_model:
- black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
library_name: diffusers
tags:
- Flux.1-Dev
- lora
- Collections
- SOTA
- Realism
- Diffusion
- art
- FLUX
- image-to-image
---

## Flux.1dev Adapter Resources
| File Name | Size | LFS | File Type |
|------------------------------------------------|--------|------|-----------------|
| 3DXL-Mannequin.safetensors | 613 MB | LFS | .safetensors |
| 3DXLC1.safetensors | 613 MB | LFS | .safetensors |
| 3DXLP1.safetensors | 613 MB | LFS | .safetensors |
| 3DXLP2.safetensors | 613 MB | LFS | .safetensors |
| 3DXLP3.safetensors | 613 MB | LFS | .safetensors |
| 3DXLP4.safetensors | 613 MB | LFS | .safetensors |
| 3DXLP5.safetensors | 613 MB | LFS | .safetensors |
| 3DXLP6.safetensors | 613 MB | LFS | .safetensors |
| Abstract-Cartoon.safetensors | 613 MB | LFS | .safetensors |
| Amxtoon.safetensors | 613 MB | LFS | .safetensors |
| Animeo.safetensors | 613 MB | LFS | .safetensors |
| Animex.safetensors | 613 MB | LFS | .safetensors |
| Aura-9999.safetensors | 613 MB | LFS | .safetensors |
| Bold-Shadows.safetensors | 613 MB | LFS | .safetensors |
| C33.safetensors | 613 MB | LFS | .safetensors |
| CAM00.safetensors | 613 MB | LFS | .safetensors |
| Canopus-Anime-Character-Art-FluxDev-LoRA.safetensors | 613 MB | LFS | .safetensors |
| Canopus-Car-Flux-Dev-LoRA.safetensors | 613 MB | LFS | .safetensors |
| Canopus-Clothing-Flux-Dev-Florence2-LoRA.safetensors | 613 MB | LFS | .safetensors |
| Canopus-Cute-Kawaii-Flux-LoRA.safetensors | 613 MB | LFS | .safetensors |
| Castor-3D-Portrait-Flux-LoRA.safetensors | 306 MB | LFS | .safetensors |
| Castor-3D-Sketchfab-Flux-LoRA.safetensors | 613 MB | LFS | .safetensors |
| Castor-Character-Polygon-LoRA.safetensors | 613 MB | LFS | .safetensors |
| Castor-Collage-Dim-Flux-LoRA.safetensors | 613 MB | LFS | .safetensors |
| Castor-Happy-Halloween-Flux-LoRA.safetensors | 613 MB | LFS | .safetensors |
| Castor-Red-Dead-Redemption-2-Flux-LoRA.safetensors | 613 MB | LFS | .safetensors |
| Claymation.safetensors | 613 MB | LFS | .safetensors |
| Clothing-Flux-Dev-Florence2-LoRA-Pruned.safetensors | 613 MB | LFS | .safetensors |
| Clouds Illusion.safetensors | 613 MB | LFS | .safetensors |
| Creative-Stocks.safetensors | 613 MB | LFS | .safetensors |
| Cute-3d-Kawaii.safetensors | 613 MB | LFS | .safetensors |
| Dark_Creature.safetensors | 613 MB | LFS | .safetensors |
| Digital-Chaos.safetensors | 613 MB | LFS | .safetensors |
| Digital-Yellow.safetensors | 613 MB | LFS | .safetensors |
| Dramatic-Neon-Flux-LoRA.safetensors | 613 MB | LFS | .safetensors |
| EBook-Cover.safetensors | 613 MB | LFS | .safetensors |
| Electric-Blue.safetensors | 613 MB | LFS | .safetensors |
| Fashion-Modeling.safetensors | 613 MB | LFS | .safetensors |
| Flux-Dev-Real-Anime-LoRA.safetensors | 613 MB | LFS | .safetensors |
| Flux-Realism-FineDetailed.safetensors | 613 MB | LFS | .safetensors |
| GArt.safetensors | 613 MB | LFS | .safetensors |
| Ghibli-Art.safetensors | 613 MB | LFS | .safetensors |
| Glowing-Body.safetensors | 613 MB | LFS | .safetensors |
| Golden-Coin.safetensors | 613 MB | LFS | .safetensors |
| Green-Cartoon.safetensors | 613 MB | LFS | .safetensors |
| Gta6-Concept-Charecter.safetensors | 613 MB | LFS | .safetensors |
| Gta6.safetensors | 613 MB | LFS | .safetensors |
| HDR-Digital-Chaos.safetensors | 613 MB | LFS | .safetensors |
| HDR.safetensors | 613 MB | LFS | .safetensors |
| Icon-Kit.safetensors | 613 MB | LFS | .safetensors |
| Intense-Red.safetensors | 613 MB | LFS | .safetensors |
| Isometric-3D-Cinematography.safetensors | 613 MB | LFS | .safetensors |
| Isometric-3D.safetensors | 613 MB | LFS | .safetensors |
| Kepler-452b-LoRA-Flux-Dev-3D-Bubbly.safetensors | 613 MB | LFS | .safetensors |
| Knitted- Character.safetensors | 613 MB | LFS | .safetensors |
| Lego.safetensors | 613 MB | LFS | .safetensors |
| Lime-Green.safetensors | 613 MB | LFS | .safetensors |
| Logo-design.safetensors | 613 MB | LFS | .safetensors |
| Long-Toon.safetensors | 613 MB | LFS | .safetensors |
| Minimal-Futuristic.safetensors | 613 MB | LFS | .safetensors |
| Mockup-Texture.safetensors | 613 MB | LFS | .safetensors |
| Multi-Frame-Shot(MFS).safetensors | 613 MB | LFS | .safetensors |
| NFTv4.safetensors | 613 MB | LFS | .safetensors |
| Orange-Chroma.safetensors | 613 MB | LFS | .safetensors |
| Past-Present-Deep-Mix-Flux-LoRA.safetensors | 613 MB | LFS | .safetensors |
| Pastel-BG.safetensors | 613 MB | LFS | .safetensors |
| Prod-Ad.safetensors | 613 MB | LFS | .safetensors |
| Purple-Dreamy.safetensors | 613 MB | LFS | .safetensors |
| Purple_Grid.safetensors | 613 MB | LFS | .safetensors |
| Red-Undersea.safetensors | 613 MB | LFS | .safetensors |
| Retro-Pixel.safetensors | 613 MB | LFS | .safetensors |
| Seamless-Pattern-Design.safetensors | 613 MB | LFS | .safetensors |
| Shadow-Projection.safetensors | 613 MB | LFS | .safetensors |
| Simple_ Doodle.safetensors | 270 MB | LFS | .safetensors |
| Smiley-C4C.safetensors | 613 MB | LFS | .safetensors |
| Snoopy-Charlie-Brown-Flux-LoRA.safetensors | 613 MB | LFS | .safetensors |
| Street_Bokeh.safetensors | 613 MB | LFS | .safetensors |
| Super-Blend.safetensors | 613 MB | LFS | .safetensors |
| Super-Detail.safetensors | 613 MB | LFS | .safetensors |
| Super-Portrait.safetensors | 613 MB | LFS | .safetensors |
| Tarot-card.safetensors | 613 MB | LFS | .safetensors |
| Teen-Outfit.safetensors | 613 MB | LFS | .safetensors |
| Typography.safetensors | 613 MB | LFS | .safetensors |
| Uncoloured-3D-Polygon.safetensors | 613 MB | LFS | .safetensors |
| Yellow-Laser.safetensors | 613 MB | LFS | .safetensors |
| Yellow_Pop.safetensors | 613 MB | LFS | .safetensors |
| capybara-hf.safetensors | 613 MB | LFS | .safetensors |
| chill-guy.safetensors | 613 MB | LFS | .safetensors |
| coloring-book.safetensors | 613 MB | LFS | .safetensors |
| ctoon.safetensors | 613 MB | LFS | .safetensors |
| dalle-mix.safetensors | 613 MB | LFS | .safetensors |
| frosted-gc.safetensors | 613 MB | LFS | .safetensors |
| handstick69.safetensors | 613 MB | LFS | .safetensors |
| indo-realism.safetensors | 613 MB | LFS | .safetensors |
| look-in-2.safetensors | 613 MB | LFS | .safetensors |
| meme.safetensors | 613 MB | LFS | .safetensors |
| midjourney-mix.safetensors | 613 MB | LFS | .safetensors |
| mjV6.safetensors | 613 MB | LFS | .safetensors |
| movieboard.safetensors | 613 MB | LFS | .safetensors |
| nm99.safetensors | 613 MB | LFS | .safetensors |
| only-stickers.safetensors | 613 MB | LFS | .safetensors |
| polaroid-plus.safetensors | 613 MB | LFS | .safetensors |
| poster-foss.safetensors | 613 MB | LFS | .safetensors |
| quoter.safetensors | 613 MB | LFS | .safetensors |
| sketchcard.safetensors | 613 MB | LFS | .safetensors |
| stam9.safetensors | 613 MB | LFS | .safetensors |
| super-realism.safetensors | 613 MB | LFS | .safetensors |
| toon-mix.safetensors | 613 MB | LFS | .safetensors |
| toonic2.5D.safetensors | 613 MB | LFS | .safetensors |
| ywl-realism.safetensors | 613 MB | LFS | .safetensors |
<Gallery />
| **Repository** | **Description** | **Link** |
|-----------------------------|-------------------------------------------------------------|---------------------------------------------------|
| PrithivMLMods | Repository featuring various adapters and ML models. | [Visit Repository](https://huggingface.co/prithivMLmods) |
| StrangerZoneHF | Repository containing specialized Hugging Face models. | [Visit Repository](https://huggingface.co/strangerzonehf) |
|
choiqs/Qwen3-8B-if-bsz128-ts300-ranking-skywork8b-seed44-lr1e-6-4gpus
|
choiqs
| 2025-09-21T18:14:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T18:12:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758478318
|
schooncestiaa
| 2025-09-21T18:13:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-21T18:12:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jsbeaudry/makandal-v2
|
jsbeaudry
| 2025-09-21T18:12:25Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"creole",
"haitian",
"conversational",
"ht",
"base_model:jsbeaudry/makandal-pre-trained",
"base_model:finetune:jsbeaudry/makandal-pre-trained",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-08T00:33:20Z |
---
library_name: transformers
tags:
- creole
- haitian
license: mit
language:
- ht
base_model:
- jsbeaudry/makandal-pre-trained
pipeline_tag: text-generation
---
# Makandal: Continued Pre-training from Qwen3-0.6B
## Model Details
This model was continue-pretrained from Qwen3-0.6B by Palmis Labs AI.
### Model Description
- **Developed by:** Palmis Labs AI
- **Funded by:** Jean Sauvenel Beaudry
- **Model type:** GPT (Generative Pre-trained Transformer)
- **Language(s) (NLP):** Haitian Creole
- **License:** MIT
- **Model size:** 0.6B parameters
- **Architecture:** qwen3
### Direct Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
def generate(model, tokenizer, prompt, device):
    inputs = tokenizer(prompt, return_tensors="pt", padding=True).to(device)
    output = model.generate(
        **inputs,
        max_new_tokens=100,
        do_sample=True,
        repetition_penalty=1.2,
        no_repeat_ngram_size=3,
        temperature=0.9,
        top_k=40,
        top_p=0.85,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("jsbeaudry/makandal-v2")
model = AutoModelForCausalLM.from_pretrained("jsbeaudry/makandal-v2")
# Set device
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
# Generate text
prompt = "matematik"
response = generate(model, tokenizer, prompt, device)
print(response)
# Answer:
# Matematik se yon disiplin matematik ki konsantre sou kalkil, estatistik, ak analiz matematik.
# Li pรจmรจt nou konprann enfรฒmasyon ak fรฒmรจlman analize done pou jwenn pwopriyete oswa fรฒmรจlman verifye yon konpreyansyon.
```
### Out-of-Scope Use
This model should **NOT** be used for:
- Critical decision-making systems
- Any application requiring reliable or factual outputs
- Commercial deployment without significant additional training
## Bias, Risks, and Limitations
- **Insufficient training data**: Only 4.7 MB of training data used
- **Limited training time**: Only 4.5 hours of training
- **High hallucination rate**: Model frequently generates inaccurate or nonsensical content
- **Language coverage**: Limited Haitian Creole language understanding due to minimal dataset
- **Bias**: May reflect biases present in the small training dataset
### Recommendations
- Do not rely on outputs for factual information
- Supervise usage in educational settings
### Training Infrastructure
- **GPU:** Tesla T4 (15GB)
- **Framework:** Transformers/PyTorch
## Citation
```bibtex
@misc{makandal2025,
title={Makandal-pretrain: An Educational Haitian Creole Language Model},
author={Jean Sauvenel Beaudry},
year={2025},
howpublished={\url{https://huggingface.co/jsbeaudry/makandal-pre-trained}},
note={Educational demonstration model}
}
```
## Glossary
**Makandal**: Named after Franรงois Makandal, an 18th-century Haitian revolutionary leader, symbolizing the model's connection to Haitian culture and education.
|
gccmorgoth/finsql-mlx-qwen3-4b-instruct-4bit
|
gccmorgoth
| 2025-09-21T18:08:57Z | 0 | 0 |
mlx
|
[
"mlx",
"lora",
"sql",
"financialSQL",
"finance",
"en",
"base_model:mlx-community/Qwen3-4B-Instruct-2507-4bit",
"base_model:adapter:mlx-community/Qwen3-4B-Instruct-2507-4bit",
"license:apache-2.0",
"region:us"
] | null | 2025-09-16T19:44:23Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
base_model:
- mlx-community/Qwen3-4B-Instruct-2507-4bit
library_name: mlx
tags:
- mlx
- lora
- sql
- financialSQL
- finance
---
# finsql-mlx-qwen3-4b-instruct-4bit
This is a LoRA adapter for financial SQL generation, fine-tuned on mlx-community/Qwen3-4B-Instruct-2507-4bit.
## Latest Finetuning

## Finetuning Details
- **Method**: Direct Preference Optimization (DPO)
- **Checkpoint**: Iteration 300
- **Validation Loss**: 0.048
- **Training Loss**: 0.122
- **Learning Rate**: Cosine decay with warmup
- **LoRA Rank**: 16
## Performance
- Validation loss: 0.048 (optimal convergence point)
- Selected at iteration 300 to prevent overfitting
- DPO training for improved preference alignment on financial SQL tasks
## Model Selection
- **Checkpoint**: Iteration 300 selected based on validation loss curve
- **Rationale**: Optimal balance between training convergence and generalization
- **Training Dynamics**: Early stopping before overfitting (val loss increased at iter 700+)
## Dataset
This model was fine-tuned on financial text-to-SQL data pairs, specifically the [FinSQLBull dataset](https://bull-text-to-sql-benchmark.github.io), to improve SQL query generation for financial databases and tables.
## Usage
Recommended prompt format:
```text
# Database: [database_name]
[Schema information]

## Task
[Natural language question about the data]

Constraint: [Any specific constraints]

SQL: [Model Generated SQL Query]
```
## Sample Prompt Format
```text
# Database: company_financials
Table: revenue (id, company, year, revenue, profit)

## Task
What was the total revenue for all companies in 2023?

SQL: [Model Generated SQL Query]
```
## Python
```python
from mlx_lm import load, generate
model, tokenizer = load("gccmorgoth/finsql-mlx-qwen3-4b-instruct-4bit")
response = generate(model, tokenizer, prompt="Your prompt here")
```
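A fuller sketch that applies the recommended prompt format above (the schema and question are illustrative):

```python
from mlx_lm import load, generate

model, tokenizer = load("gccmorgoth/finsql-mlx-qwen3-4b-instruct-4bit")

# Illustrative schema and question; substitute your own database and task.
prompt = """# Database: company_financials
Table: revenue (id, company, year, revenue, profit)

## Task
What was the total revenue for all companies in 2023?

SQL:"""

response = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(response)  # e.g. SELECT SUM(revenue) FROM revenue WHERE year = 2023;
```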
|
Anwaarma/edos_taskA_llama3b_qlora
|
Anwaarma
| 2025-09-21T18:08:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T18:08:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rianagario/Yolo-LibrasVision
|
rianagario
| 2025-09-21T18:03:18Z | 0 | 0 |
ultralytics
|
[
"ultralytics",
"yolov8",
"object-detection",
"libras",
"sign-language",
"pt",
"dataset:custom-libras-dataset",
"license:mit",
"region:us"
] |
object-detection
| 2025-09-21T17:47:04Z |
---
license: mit
language: pt
library_name: ultralytics
tags:
- yolov8
- object-detection
- libras
- sign-language
datasets:
- custom-libras-dataset
---
# YOLOv8 for LIBRAS Sign Detection (Yolo-LibrasVision)
This repository contains a YOLOv8 model trained to detect Brazilian Sign Language (LIBRAS) signs in real time. This model is the foundation of the LibrasVision project's API.
## Model Description
* **Architecture:** YOLOv8n (nano)
* **Framework:** PyTorch
* **Task:** Object Detection
## Performance Metrics
The model was trained on our custom dataset and achieved the following metrics on the validation set:
* **mAP50-95:** `0.846484`
* **Precision:** `0.977354`
* **Recall:** `0.9524128`
## How to Use (Example with Ultralytics)
```python
from ultralytics import YOLO
from huggingface_hub import hf_hub_download
# The repository already contains the configuration file,
# but for local use you can download it as well.
REPO_ID = "rianagario/Yolo-LibrasVision"
MODEL_FILENAME = "model.pt"
# Download the model from the Hub
model_path = hf_hub_download(repo_id=REPO_ID, filename=MODEL_FILENAME)
# Load the model
model = YOLO(model_path)
# Run inference
results = model('path/to/your/image.jpg')
# Display the results
results[0].show()
```
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758477699
|
schooncestiaa
| 2025-09-21T18:02:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-21T18:02:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
haihp02/891afca1-634f-4e53-bd50-43e8a1d43bc2
|
haihp02
| 2025-09-21T18:01:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T16:12:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
StanislavKalishenko/Gemma3-Pretrained-uk
|
StanislavKalishenko
| 2025-09-21T17:56:21Z | 109 | 0 |
mlx
|
[
"mlx",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"base_model:mlx-community/gemma-3-1b-it-4bit",
"base_model:quantized:mlx-community/gemma-3-1b-it-4bit",
"license:gemma",
"4-bit",
"region:us"
] |
text-generation
| 2025-09-07T13:38:52Z |
---
license: gemma
library_name: mlx
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you're required to review and
  agree to Google's usage license. To do this, please ensure you're logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: mlx-community/gemma-3-1b-it-4bit
tags:
- mlx
---
|
ysakhale/Homework2-task1
|
ysakhale
| 2025-09-21T17:55:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-20T23:05:16Z |
# AutoML Regression Model for Shoe Dataset
## Model Summary
This model was trained using **AutoGluon Tabular (v1.4.0)** on the dataset [maryzhang/hw1-24679-tabular-dataset](https://huggingface.co/datasets/maryzhang/hw1-24679-tabular-dataset).
The task is **regression**, predicting the **actual measured shoe length (mm)** from shoe attributes.
- **Best Model**: `CatBoost_r177_BAG_L1` (bagged ensemble of CatBoost models)
- **Test R² Score**: **0.8904** (≈ 89% variance explained)
- **Validation R² Score**: 0.8049
- **Pearson correlation**: 0.9473
- **RMSE**: 1.80 mm
- **MAE**: 1.10 mm
- **Median AE**: 0.68 mm
These values indicate the model can predict shoe length within ~1–2 mm of the actual measurement on average.
---
## Leaderboard (Top 5 Models)
| Rank | Model | Test R² | Val R² | Pred Time (s) | Fit Time (s) |
|------|------------------------|---------|---------|---------------|--------------|
| 1 | CatBoost_r177_BAG_L1 | 0.8994 | 0.8049 | 0.0293 | 27.14 |
| 2 | LightGBMLarge_BAG_L2 | 0.8971 | 0.7995 | 0.7011 | 238.93 |
| 3 | CatBoost_BAG_L2 | 0.8939 | 0.8405 | 0.6155 | 276.40 |
| 4 | CatBoost_r9_BAG_L1 | 0.8917 | 0.7889 | 0.0606 | 53.87 |
| 5 | WeightedEnsemble_L3 | 0.8904 | 0.8500 | 0.9871 | 333.68 |
---
## Dataset
- **Source**: [maryzhang/hw1-24679-tabular-dataset](https://huggingface.co/datasets/maryzhang/hw1-24679-tabular-dataset)
- **Size**: 338 samples (30 original, 308 augmented)
- **Features**:
- US size (numeric)
- Shoe size (mm) (numeric)
- Type of shoe (categorical)
- Shoe color (categorical)
- Shoe brand (categorical)
- **Target**: *Actual measured shoe length (mm)*
- **Splits**: 80% training, 20% testing (random_state=42)
---
## Preprocessing
- Converted Hugging Face dataset to Pandas DataFrame
- Train/test split with stratified random seed
- AutoGluon handled categorical encoding, normalization, and feature selection automatically
---
## Training Setup
- **Framework**: AutoGluon Tabular v1.4.0
- **Search Strategy**: Bagged/stacked ensembles with model selection (`presets="best"`)
- **Time Budget**: 1200 seconds (20 minutes)
- **Evaluation Metric**: R²
- **Hyperparameter Search**: Automated by AutoGluon (CatBoost, LightGBM, ensemble stacking)
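A minimal AutoGluon sketch reproducing the setup above (the target column name is an assumption; check the dataset card for the exact schema):

```python
from autogluon.tabular import TabularPredictor
from datasets import load_dataset
from sklearn.model_selection import train_test_split

# Load the shoe dataset and split as described above.
df = load_dataset("maryzhang/hw1-24679-tabular-dataset", split="train").to_pandas()
train_df, test_df = train_test_split(df, test_size=0.2, random_state=42)

predictor = TabularPredictor(
    label="Actual measured shoe length (mm)",  # assumed target column name
    eval_metric="r2",
).fit(train_df, presets="best", time_limit=1200)  # 20-minute budget, as above

print(predictor.leaderboard(test_df))
```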
---
## Metrics
- **R²**: 0.8904 (test)
- **RMSE**: 1.80 mm
- **MAE**: 1.10 mm
- **Median AE**: 0.68 mm
- **Uncertainty**: Variability assessed across multiple base models in the ensemble. Bagging reduces variance; expected error ±2 mm for most predictions.
---
## Intended Use
- **Educational**: Demonstrates AutoML regression in CMU course 24-679
- **Limitations**:
- Small dataset size (338 samples) → not robust for production use
- Augmented data may not reflect real-world variability
- Not suitable for medical or industrial applications
---
## Ethical Considerations
- Predictions should **not** be used to recommend or prescribe footwear sizes in clinical or consumer contexts.
- Dataset augmentation could introduce biases not present in real measurements.
---
## License
- **Dataset**: MIT License
- **Model**: MIT License
---
## Hardware / Compute
- **Training**: Google Colab (CPU runtime)
- **Time**: ~20 minutes wall-clock time
- **RAM**: <8 GB used
---
## AI Usage Disclosure
- Model training and hyperparameter search used **AutoML (AutoGluon)**.
- Model card text and documentation partially generated with **AI assistance (ChatGPT)**.
---
## Acknowledgments
- Dataset by **Mary Zhang (CMU 24-679)**
- Model training and documentation by **Yash Sakhale**
|
fashionita/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-silent_freckled_dinosaur
|
fashionita
| 2025-09-21T17:42:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am silent_freckled_dinosaur",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T04:53:59Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am silent_freckled_dinosaur
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/ConfTuner-Ministral-i1-GGUF
|
mradermacher
| 2025-09-21T17:31:34Z | 532 | 1 |
transformers
|
[
"transformers",
"gguf",
"peft",
"fine-tuning",
"confidence-estimation",
"trustworthy-ai",
"text-generation",
"LLM",
"mistral",
"en",
"base_model:liushiliushi/ConfTuner-Ministral",
"base_model:quantized:liushiliushi/ConfTuner-Ministral",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-09-19T15:36:36Z |
---
base_model: liushiliushi/ConfTuner-Ministral
language:
- en
library_name: transformers
license: other
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- peft
- fine-tuning
- confidence-estimation
- trustworthy-ai
- text-generation
- LLM
- mistral
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/liushiliushi/ConfTuner-Ministral
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#ConfTuner-Ministral-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/ConfTuner-Ministral-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
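For Python users, a minimal hedged sketch with `llama-cpp-python` (assuming it is installed; the filename matches the Q4_K_M entry in the table below):

```python
from llama_cpp import Llama

# Downloads the chosen quant from this repo on first use.
llm = Llama.from_pretrained(
    repo_id="mradermacher/ConfTuner-Ministral-i1-GGUF",
    filename="ConfTuner-Ministral.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
    n_ctx=4096,
)

out = llm(
    "Q: What is the capital of France? Give an answer and a confidence.\nA:",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```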
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-IQ2_M.gguf) | i1-IQ2_M | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ConfTuner-Ministral-i1-GGUF/resolve/main/ConfTuner-Ministral.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
sirev/gemma-2b-dpo-Q8_0-GGUF
|
sirev
| 2025-09-21T17:30:57Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:sirev/gemma-2b-dpo",
"base_model:quantized:sirev/gemma-2b-dpo",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T17:30:42Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: sirev/gemma-2b-dpo
---
# sirev/gemma-2b-dpo-Q8_0-GGUF
This model was converted to GGUF format from [`sirev/gemma-2b-dpo`](https://huggingface.co/sirev/gemma-2b-dpo) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/sirev/gemma-2b-dpo) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sirev/gemma-2b-dpo-Q8_0-GGUF --hf-file gemma-2b-dpo-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sirev/gemma-2b-dpo-Q8_0-GGUF --hf-file gemma-2b-dpo-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo sirev/gemma-2b-dpo-Q8_0-GGUF --hf-file gemma-2b-dpo-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo sirev/gemma-2b-dpo-Q8_0-GGUF --hf-file gemma-2b-dpo-q8_0.gguf -c 2048
```
|
olusegunola/phi3-pruned-cp-masked
|
olusegunola
| 2025-09-21T17:30:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T17:29:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
prithivMLmods/Monochrome-Pencil
|
prithivMLmods
| 2025-09-21T17:28:49Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"sketch",
"pencil",
"monochrome",
"art",
"image-to-image",
"base_model:black-forest-labs/FLUX.1-Kontext-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev",
"license:other",
"region:us"
] |
image-to-image
| 2025-09-21T03:56:49Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- sketch
- pencil
- monochrome
- art
widget:
- src: images/1.jpg
text: >-
[photo content], replicate the image as a pencil illustration, black and
white, with sketch-like detailing.
prompt: >
[photo content], replicate the image as a pencil illustration, black and
white, with sketch-like detailing.
output:
url: images/2.png
base_model: black-forest-labs/FLUX.1-Kontext-dev
instance_prompt: >-
[photo content], replicate the image as a pencil illustration, black and
white, with sketch-like detailing.
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE.md
pipeline_tag: image-to-image
---

# **Monochrome-Pencil-i2i [Image-to-Image]**
<Gallery />
Monochrome-Pencil-i2i is an adapter for black-forest-labs' FLUX.1-Kontext-dev. It is a sketch LoRA trained to seamlessly convert any image into a monochrome pencil sketch while preserving the original characteristics of the image. The model was trained on 40 images forming 20 start/end pairs; synthetic result images were generated with Google's NanoBanana and SeedDream 4 (for the result sets) and labeled with DeepCaption-VLA-7B. The adapter is triggered with the following prompt:
> [!note]
[photo content], replicate the image as a pencil illustration, black and white, with sketch-like detailing.
---
## Sample Inference
| FLUX.1-Kontext-dev |<span style="color:red">Monochrome-Pencil</span> |
|------|-------|
|  |  |
| ex1-<span style="color:red">Monochrome-Pencil</span> | ex2-<span style="color:red">Monochrome-Pencil</span> |
|------|-------|
|  |  |
---
## Parameter Settings
| Setting | Value |
| ------------------------ | ------------------------ |
| Module Type | Adapter |
| Base Model | FLUX.1 Kontext Dev - fp8 |
| Trigger Words | [photo content], replicate the image as a pencil illustration, black and white, with sketch-like detailing. |
| Image Processing Repeats | 50 |
| Epochs | 25 |
| Save Every N Epochs | 1 |
Labeling: DeepCaption-VLA-7B (natural language & English)
Total Images Used for Training: 40 (20 start/end pairs)
Synthetic result images were generated with Google's NanoBanana and SeedDream 4 (for the result sets)
## Training Parameters
| Setting | Value |
| --------------------------- | --------- |
| Seed | - |
| Clip Skip | - |
| Text Encoder LR | 0.00001 |
| UNet LR | 0.00005 |
| LR Scheduler | constant |
| Optimizer | AdamW8bit |
| Network Dimension | 64 |
| Network Alpha | 32 |
| Gradient Accumulation Steps | - |
## Label Parameters
| Setting | Value |
| --------------- | ----- |
| Shuffle Caption | - |
| Keep N Tokens | - |
## Advanced Parameters
| Setting | Value |
| ------------------------- | ----- |
| Noise Offset | 0.03 |
| Multires Noise Discount | 0.1 |
| Multires Noise Iterations | 10 |
| Conv Dimension | - |
| Conv Alpha | - |
| Batch Size | - |
| Steps | 3900 |
| Sampler | euler |
---
## Trigger words
Use the full phrase below (ideally at the start of your prompt) to trigger the adapter:

`[photo content], replicate the image as a pencil illustration, black and white, with sketch-like detailing.`
## Download model
[Download](/prithivMLmods/Monochrome-Pencil/tree/main) them in the Files & versions tab.
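A minimal diffusers sketch for applying the adapter (assumes a recent diffusers release with FLUX.1 Kontext support and default LoRA weight naming in this repo):

```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("prithivMLmods/Monochrome-Pencil")

image = load_image("input.jpg")  # any photo you want converted
result = pipe(
    image=image,
    prompt="[photo content], replicate the image as a pencil illustration, "
           "black and white, with sketch-like detailing.",
    guidance_scale=2.5,  # typical Kontext setting; tune as needed
).images[0]
result.save("pencil_sketch.png")
```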
|
mjbommar/glaurung-small-001
|
mjbommar
| 2025-09-21T17:28:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"fill-mask",
"binary-analysis",
"security",
"malware-analysis",
"executable-analysis",
"masked-language-modeling",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-09-21T16:42:26Z |
---
language:
- en
license: apache-2.0
tags:
- binary-analysis
- security
- malware-analysis
- executable-analysis
- roberta
- masked-language-modeling
library_name: transformers
pipeline_tag: fill-mask
widget:
- text: "ELF <mask> header"
---
# Glaurung Small 001
A RoBERTa-based masked language model trained on binary executable files for security research and binary analysis.
## Overview
**Glaurung Small 001** is a transformer model specifically designed for understanding binary executable files. It uses a custom BPE (Byte Pair Encoding) tokenizer trained on multi-byte patterns from various binary formats across multiple architectures (x86-64, ARM64, etc.) and operating systems (Linux, Alpine, Ubuntu, Debian, Rocky).
### Key Features
- **Custom Binary Tokenizer**: BPE tokenizer that creates efficient multi-byte tokens from binary data
- **Binary-Aware**: Trained on actual executable files, not hex strings
- **Multi-Architecture**: Understands patterns from various CPU architectures and file formats
- **Latin-1 Encoding**: Preserves all byte values (0-255) without loss
## Model Details
- **Architecture**: RoBERTa for Masked Language Modeling
- **Hidden Size**: 768
- **Layers**: 12
- **Attention Heads**: 12
- **Vocabulary Size**: 65,536 tokens
- **Max Position Embeddings**: 520
- **Special Tokens**:
- `<|start|>` (0): Beginning of sequence
- `<|end|>` (1): End token
- `<|sep|>` (2): Separator/EOS
- `<|cls|>` (3): Classification token
- `<|pad|>` (4): Padding
- `<|mask|>` (5): Mask token for MLM
- `<|unk|>` (6): Unknown token
## Installation & Loading
```bash
pip install transformers torch
```
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, AutoModel, pipeline
# Method 1: Load with pipeline for fill-mask tasks
fill_mask = pipeline('fill-mask', model='mjbommar/glaurung-small-001', device=-1)
# Method 2: Load model and tokenizer directly for fill-mask
model = AutoModelForMaskedLM.from_pretrained('mjbommar/glaurung-small-001')
tokenizer = AutoTokenizer.from_pretrained('mjbommar/glaurung-small-001')
# Method 3: Load base model for feature extraction/embeddings
model_base = AutoModel.from_pretrained('mjbommar/glaurung-small-001')
```
## Usage Guide
### 1. Loading Binary Data (Critical!)
Binary files MUST be read as bytes and converted to latin-1 encoding:
```python
# CORRECT: Read as bytes, decode with latin-1
with open('/usr/bin/ls', 'rb') as f:
    binary_data = f.read(512)  # read the first 512 bytes (or as many as you need)
text = binary_data.decode('latin-1', errors='ignore')
# WRONG: Never use hex strings or other encodings
# hex_string = "7f454c46..."  # ❌ Will not work
# utf8_text = binary_data.decode('utf-8')  # ❌ Will lose bytes
```
### 2. Understanding the BPE Tokenizer
The tokenizer creates multi-byte tokens from common binary patterns:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mjbommar/glaurung-small-001')
# Example: ELF header tokenization
elf_header = b'\x7fELF\x02\x01\x01\x00'
text = elf_header.decode('latin-1')
tokens = tokenizer(text, return_tensors='pt')
token_ids = tokens['input_ids'][0].tolist()
# Decode tokens individually to see multi-byte patterns
for token_id in token_ids[1:5]: # Skip special tokens
decoded = tokenizer.decode([token_id], skip_special_tokens=True)
print(f"Token {token_id}: {repr(decoded)}")
# Output:
# Token 45689: '\x7fEL' # ELF magic compressed to one token!
# Token 3665: 'F\x02' # Format byte + 64-bit flag
# Token 458: '\x01\x01' # Little-endian + version
# Token 600: '\x00\x00\x00\x00\x00\x00\x00\x00\x00' # Padding
```
### 3. Fill-Mask Task (Token-Level Prediction)
**Important**: Masking works at the TOKEN level, not byte level!
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch
model = AutoModelForMaskedLM.from_pretrained('mjbommar/glaurung-small-001')
tokenizer = AutoTokenizer.from_pretrained('mjbommar/glaurung-small-001')
# Read binary file
with open('/usr/bin/ls', 'rb') as f:
binary_data = f.read(512)
text = binary_data.decode('latin-1', errors='ignore')
# Tokenize
tokens = tokenizer(text, return_tensors='pt')
token_ids = tokens['input_ids'][0].tolist()
# Mask the second token (first content token after <|start|>)
masked_ids = token_ids.copy()
original_token = masked_ids[1] # Save original
masked_ids[1] = tokenizer.mask_token_id
# Prepare input
tokens_masked = {
'input_ids': torch.tensor([masked_ids]),
'attention_mask': torch.tensor([[1]*len(masked_ids)])
}
# Predict
with torch.no_grad():
outputs = model(**tokens_masked)
predictions = outputs.logits[0, 1].softmax(dim=-1)
top5 = predictions.topk(5)
# Show results
print(f"Original: {repr(tokenizer.decode([original_token]))}")
for score, token_id in zip(top5.values, top5.indices):
token_text = tokenizer.decode([token_id.item()], skip_special_tokens=True)
print(f"Predicted: {repr(token_text)} (confidence: {score:.2%})")
# Example output:
# Original: '\x7fEL'
# Predicted: '\x7fEL' (confidence: 79.07%) ✓ Correct!
# Predicted: '\x00\x00\x00\x00\x00\x00\x00\x00' (confidence: 13.62%)
```
### 4. Using Pipeline for Fill-Mask
The pipeline handles tokenization automatically but requires understanding multi-byte tokens:
```python
from transformers import pipeline
# Load pipeline
fill_mask = pipeline('fill-mask', model='mjbommar/glaurung-small-001', device=-1)
# Read binary
with open('/usr/bin/ls', 'rb') as f:
binary_data = f.read(100)
text = binary_data.decode('latin-1', errors='ignore')
# Create masked input at token boundaries
# First, tokenize to understand token boundaries
tokenizer = fill_mask.tokenizer
tokens = tokenizer(text)
decoded_tokens = [tokenizer.decode([tid], skip_special_tokens=True) for tid in tokens['input_ids']]
# Reconstruct with mask at token boundary
masked_text = ''.join([
decoded_tokens[0], # <|start|>
fill_mask.tokenizer.mask_token, # Mask the ELF magic
''.join(decoded_tokens[2:]) # Rest of tokens
])
# Predict
predictions = fill_mask(masked_text, top_k=3)
for pred in predictions:
print(f"{repr(pred['token_str'])}: {pred['score']:.2%}")
```
### 5. Feature Extraction & Embedding Similarity
Compare binary files by their learned embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
from pathlib import Path
# Load for embeddings (not MaskedLM)
tokenizer = AutoTokenizer.from_pretrained('mjbommar/glaurung-small-001')
model = AutoModel.from_pretrained('mjbommar/glaurung-small-001')
model.eval()
def get_binary_embedding(file_path, max_bytes=512):
"""Extract embedding for a binary file using mean pooling"""
with open(file_path, 'rb') as f:
binary_data = f.read(max_bytes)
text = binary_data.decode('latin-1', errors='ignore')
# Tokenize
tokens = tokenizer(text, return_tensors='pt',
padding=True, truncation=True, max_length=512)
# Get embeddings with mean pooling
with torch.no_grad():
outputs = model(**tokens)
# Mean pooling (better than CLS token for this model)
attention_mask = tokens['attention_mask']
hidden_states = outputs.last_hidden_state
# Mask padding tokens
mask_expanded = attention_mask.unsqueeze(-1).expand(hidden_states.size()).float()
sum_embeddings = torch.sum(hidden_states * mask_expanded, dim=1)
sum_mask = torch.clamp(mask_expanded.sum(dim=1), min=1e-9)
embedding = sum_embeddings / sum_mask
return embedding
# Compare multiple binaries
files = ['/usr/bin/ls', '/usr/bin/cat', '/usr/bin/echo', '/etc/passwd']
embeddings = {}
for file_path in files:
if Path(file_path).exists():
name = Path(file_path).name
embeddings[name] = get_binary_embedding(file_path)
# Calculate similarities
print("Cosine Similarity Matrix:")
names = list(embeddings.keys())
for name1 in names:
similarities = []
for name2 in names:
sim = F.cosine_similarity(embeddings[name1], embeddings[name2], dim=-1).item()
similarities.append(f"{sim:.3f}")
print(f"{name1:10s}: {' '.join(similarities)}")
# Expected output:
# ELF executables (ls, cat, echo) will have high similarity (0.85-0.95)
# Text file (passwd) will have low similarity (0.25-0.30) to ELF files
```
## Real-World Example: ELF Header Analysis
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch
# Load model and tokenizer
model = AutoModelForMaskedLM.from_pretrained('mjbommar/glaurung-small-001')
tokenizer = AutoTokenizer.from_pretrained('mjbommar/glaurung-small-001')
# Analyze ELF executable structure
with open('/usr/bin/ls', 'rb') as f:
binary_data = f.read(512) # Read enough for context
print(f"Raw bytes (hex): {binary_data[:16].hex()}")
# Output: 7f454c46020101000000000000000000
# Convert to latin-1 for model
text = binary_data.decode('latin-1', errors='ignore')
# Tokenize to see learned patterns
tokens = tokenizer(text, return_tensors='pt')
token_ids = tokens['input_ids'][0].tolist()
# Show what tokens the model learned
print("\nTokenized ELF header:")
for i in range(1, min(5, len(token_ids)-1)): # First few content tokens
token_text = tokenizer.decode([token_ids[i]], skip_special_tokens=True)
print(f"Token {i}: {token_ids[i]:5d} = {repr(token_text)}")
# Output:
# Token 1: 45689 = '\x7fEL' - ELF magic compressed to one token!
# Token 2: 3665 = 'F\x02' - 'F' + 64-bit flag
# Token 3: 458 = '\x01\x01' - Little-endian + version
# Token 4: 600 = '\x00\x00\x00\x00\x00\x00\x00\x00\x00' - Padding
# Test model's understanding by masking each token
print("\nTesting model predictions:")
for position in [1, 2, 3]: # Test first 3 content tokens
masked_ids = token_ids.copy()
original_token = masked_ids[position]
masked_ids[position] = tokenizer.mask_token_id
# Create input tensors
tokens_masked = {
'input_ids': torch.tensor([masked_ids]),
'attention_mask': torch.tensor([[1]*len(masked_ids)])
}
# Get prediction
with torch.no_grad():
outputs = model(**tokens_masked)
predictions = outputs.logits[0, position].softmax(dim=-1)
predicted_token = predictions.argmax().item()
confidence = predictions.max().item()
# Show results
original_text = tokenizer.decode([original_token], skip_special_tokens=True)
predicted_text = tokenizer.decode([predicted_token], skip_special_tokens=True)
    correct = "✓" if predicted_token == original_token else "✗"
print(f"Position {position}: {correct}")
print(f" Original: {repr(original_text)}")
print(f" Predicted: {repr(predicted_text)} (confidence: {confidence:.1%})")
# Expected Output:
# Position 1: ✓
#   Original: '\x7fEL'
#   Predicted: '\x7fEL' (confidence: 79.1%)
# Position 2: ✓
#   Original: 'F\x02'
#   Predicted: 'F\x02' (confidence: 97.9%)
# Position 3: ✓
#   Original: '\x01\x01'
#   Predicted: '\x01\x01' (confidence: 88.7%)
```
## Training Details
- **MLM Objective**: 20% masking probability
- **Training Data**: Binary executables from various architectures
- **Optimization**: AdamW with warmup, dropout 0.01
- **Special Design**: Increased position embeddings (520) to handle RoBERTa's position offset
## Limitations
- Maximum sequence length: 512 tokens
- Optimized for executable files (ELF, PE, Mach-O)
- Mean pooling recommended for embeddings (pooler layer not specifically trained)
## Citation
If using this model in research:
```
@software{glaurung-small-001,
title = {Glaurung Small 001: Binary Analysis Transformer},
author = {Glaurung Project},
year = {2024},
url = {https://github.com/mjbommar/glaurung-models}
}
```
|
piccassol/NOLAND
|
piccassol
| 2025-09-21T17:24:56Z | 0 | 0 | null |
[
"reinforcement-learning",
"en",
"license:mit",
"region:us"
] |
reinforcement-learning
| 2025-03-20T02:35:10Z |
---
license: mit
language:
- en
pipeline_tag: reinforcement-learning
---
# NolandAI
[](https://www.npmjs.com/package/nolandai)
[](https://huggingface.co/piccassol/Noland)
[](LICENSE)
**NolandAI** is an enterprise-ready, AI-powered Solana trading agent that combines on-chain analytics, social sentiment scraping, and a fine-tuned LLM to generate actionable trading calls. It ships as an npm SDK plus a FastAPI backend and a React/Next.js UI.
---
## Highlights
- **Real-time market feeds** via Dexscreener.
- **Social scraping** (AssetDash, X/Twitter monitoring such as `@mobyagent`, `@whalewatch`).
- **LLM forecasting** using a LoRA-fine-tuned model (`fingpt-forecaster_dow30_llama2-7b_lora` on Hugging Face).
- **Automated trading calls** (hourly), plus optional auto-posting to X/Twitter.
- **Modular SDK** (`nolandai` npm package) for JS/TS integration.
- **Production CI** with release workflow and npm publish.
---
## Installation
### From npm (recommended)
```bash
npm install nolandai
```

```ts
import { NolandAI } from "nolandai";

const bot = new NolandAI({ apiKey: process.env.NOLAND_KEY });
const call = await bot.getTradingCall();
console.log(call);
```

### From source (dev)

```bash
git clone https://github.com/your-username/NolandAI.git
cd NolandAI
npm install                       # frontend/sdk
pip install -r requirements.txt   # if running the Python FastAPI backend
```
## Repository layout (recommended)

```text
NolandAI/
├── package.json
├── index.js
├── index.d.ts
├── README.md
├── LICENSE
├── CHANGELOG.md
├── CONTRIBUTING.md
├── .github/
│   └── workflows/ci.yml
├── backend/
│   ├── requirements.txt
│   └── main.py        # FastAPI app (endpoints: /trading-call, /market-data/:token)
├── frontend/
│   └── (Next.js app)
└── examples/
    └── demo.js
```
## Quickstart – Local dev (SDK + demo backend)

Start the FastAPI backend:

```bash
# in backend/
uvicorn main:app --reload --port 8000
```

Test the SDK locally:

```bash
# from repo root or examples/
node examples/demo.js
```
## API (SDK)

- `new NolandAI(config?: { apiKey?: string; baseUrl?: string })`
- `getTradingCall()` → `{ token, action, confidence, reason }`
- `getMarketData(tokenAddress: string)` → market JSON from the backend
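For reference, a minimal sketch of the backend endpoints the SDK calls (illustrative only; the production service wires in the Dexscreener feeds and the LLM):

```python
# backend/main.py (sketch)
from fastapi import FastAPI

app = FastAPI()

@app.get("/trading-call")
def trading_call():
    # Stub response matching the SDK's documented return shape.
    return {"token": "SOL", "action": "hold", "confidence": 0.5, "reason": "demo stub"}

@app.get("/market-data/{token_address}")
def market_data(token_address: str):
    # The real implementation proxies Dexscreener market data.
    return {"token": token_address, "data": {}}
```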
## Example: `examples/demo.js`

See the `examples/demo.js` file in the repo for a quick demonstration of `getTradingCall` + `getMarketData`.
## Publish & Releases

We publish releases using semantic version tags (`vMAJOR.MINOR.PATCH`) and CI that validates tests and publishes to npm on tag push. See `.github/workflows/ci.yml`.

Official release example: **v1.0.0 – Captain's Log (2025-09-20)**

- Initial public release.
- npm package `nolandai` published.
- FastAPI endpoints `/trading-call`, `/market-data/:token` live.
- Hugging Face model integration.
## Contributing

See `CONTRIBUTING.md`. Pull requests welcome – use branches, add tests, sign commits.

## License

MIT © 2025 AuroraRift

## Maintainers

AuroraRift Team – [email protected]

## Changelog

See `CHANGELOG.md` for the full release history.
---

## Files to create (copy these into repo root exactly)

Below are the key files. Put them where indicated.

### `package.json` (repo root)
```json
{
"name": "nolandai",
"version": "1.0.0",
"description": "NolandAI - An AI-powered Solana trading agent for market intelligence, social scraping, and automated trading calls.",
"main": "index.js",
"types": "index.d.ts",
"type": "module",
"scripts": {
"build": "tsc",
"test": "node test.js || echo \"no tests\"",
"lint": "eslint ."
},
"repository": {
"type": "git",
"url": "https://github.com/your-username/NolandAI.git"
},
"keywords": [
"AI",
"trading",
"Solana",
"crypto",
"blockchain",
"LLM",
"bot",
"Dexscreener",
"NolandAI",
"AuroraRift"
],
"author": "AuroraRift Team <[email protected]>",
"license": "MIT",
"bugs": {
"url": "https://github.com/your-username/NolandAI/issues"
},
"homepage": "https://huggingface.co/your-username/NolandAI",
"engines": {
"node": ">=18"
},
"dependencies": {
"axios": "^1.7.0",
"dotenv": "^16.3.1"
},
"devDependencies": {
"eslint": "^8.56.0",
"typescript": "^5.4.0"
}
}
```

## `index.js` (repo root)

```js
import axios from "axios";
import dotenv from "dotenv";

dotenv.config();

export class NolandAI {
  constructor(config = {}) {
    // API key is optional; falls back to the NOLAND_KEY environment variable.
    this.apiKey = config.apiKey || process.env.NOLAND_KEY || "";
    this.baseUrl = config.baseUrl || "http://localhost:8000"; // FastAPI default
  }

  // Fetch the latest trading call: { token, action, confidence, reason }.
  async getTradingCall() {
    const res = await axios.get(`${this.baseUrl}/trading-call`, {
      headers: this.apiKey ? { Authorization: `Bearer ${this.apiKey}` } : {}
    });
    return res.data;
  }

  // Fetch raw market data for a token address from the backend.
  async getMarketData(tokenAddress) {
    const res = await axios.get(`${this.baseUrl}/market-data/${tokenAddress}`);
    return res.data;
  }
}

export default NolandAI;
```
|
huwhitememes/charliekirk_v1-qwen_image
|
huwhitememes
| 2025-09-21T17:16:33Z | 0 | 0 | null |
[
"image",
"lora",
"qwen",
"charlie-kirk",
"generative-image",
"huwhitememes",
"Meme King Studio",
"Green Frog Labs",
"culture-war",
"tribute",
"text-to-image",
"base_model:Qwen/Qwen-Image",
"base_model:adapter:Qwen/Qwen-Image",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-09-21T16:35:40Z |
---
license: apache-2.0
base_model: Qwen/Qwen-Image
tags:
- image
- lora
- qwen
- charlie-kirk
- generative-image
- huwhitememes
- Meme King Studio
- Green Frog Labs
- culture-war
- tribute
pipeline_tag: text-to-image
---
# โ๏ธ Charlie Kirk Tribute LoRA for Qwen Image V1 ๐๏ธ
This is a LoRA trained on **43 curated images** of Charlie Kirk โ founder of TPUSA, Patriot, and Martyr of the American culture war. Trained with love on [Wavespeed.AI](https://wavespeed.ai), this LoRA allows creators to generate **powerful, emotional, and surreal art** that captures the iconography, legacy, and spiritual presence of Charlie.
> **WE ARE ALL CHARLIE KIRK.**
---
## ๐ฏ Use Cases
- Faith-based political tribute art
- Digital memorials honoring Christian martyrs
- Patriotic and pro-MAGA propaganda artwork
- Spiritual warfare visuals for the culture war
- Remembrance content for social media and movement building
- Meme canonization of righteous leaders silenced by the left
---
## ๐ง Training Details
- **Base Model**: Qwen/Qwen-Image
- **Trainer**: WaveSpeedAI LoRA Trainer
- **Steps**: ~2000
- **LoRA Rank**: 16
- **Image Count**: 43 (balanced for aesthetic and variation)
- **Trigger Word**: `Ch4rlie K!rk` (recommended at prompt start)
- **Style**: Meme realism, cinematic emotionality, digital martyrdom
---
## ๐ง Creator
Created by [@huwhitememes](https://x.com/huwhitememes)
Released by **Meme King Studio** in cooperation with **Green Frog Labs**
Part of the expanding creative ecosystem where memes become monuments.
---
## โ๏ธ Legal & Fair Use
This model was trained using **publicly available imagery** of a major public figure.
It is provided for **fair use, memorial tribute, and commentary purposes**.
Not for commercial misuse. Not affiliated with or endorsed by any individual or organization.
---
## ๐๐ป In Memoriam
On this solemn day, Sunday, September 21st, 2025, we mourn the loss of **Charlie Kirk**.
Targeted and **assassinated in cold blood** by radical leftists, Charlie's voice was silenced โ
but **his message lives on** in each of us.
**Today, we say it loud: WE ARE ALL CHARLIE KIRK.**
His spirit will rise in every meme, every image, every call to truth in a world gone mad.
---
## ๐งช Example Usage Prompt
```text
Ch4rlie K!rk as an armored angel of vengeance, standing atop a pile of rainbow-colored demon corpses, wings of fire, flaming sword raised to the heavens, battle-worn American flag waving behind him, photorealistic, dark fantasy, VHS glitch aesthetic
```
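A minimal loading sketch, assuming a recent `diffusers` build with Qwen-Image pipeline support; the dtype, the LoRA weight location, and the generation settings below are illustrative assumptions, not tested values.

```python
# Illustrative sketch - assumes diffusers with Qwen-Image pipeline support.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")

# Load this LoRA on top of the base model (default weight name assumed).
pipe.load_lora_weights("huwhitememes/charliekirk_v1-qwen_image")

image = pipe(
    prompt="Ch4rlie K!rk portrait, cinematic lighting, photorealistic",
    num_inference_steps=30,
).images[0]
image.save("output.png")
```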
|
Rinanixvaruyr/Qwen3-0.6B-Gensyn-Swarm-howling_stalking_zebra
|
Rinanixvaruyr
| 2025-09-21T17:07:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am howling_stalking_zebra",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T17:07:00Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am howling_stalking_zebra
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hadasor/abc-seed_5
|
hadasor
| 2025-09-21T17:02:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T16:22:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
beyoru/Qwen3-4B-I-1509
|
beyoru
| 2025-09-21T16:57:52Z | 106 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"tools",
"agent",
"function calling",
"tool calling",
"conversational",
"en",
"base_model:beyoru/Qwen3-4B-I-1509",
"base_model:finetune:beyoru/Qwen3-4B-I-1509",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T18:44:51Z |
---
base_model: beyoru/Qwen3-4B-I-1509
tags:
- text-generation-inference
- transformers
- qwen3
- tools
- agent
- function calling
- tool calling
license: apache-2.0
language:
- en
---
# ๐ Qwen3-4B-I-1509
## ๐งพ Model Overview
- ๐๏ธ **Base Model**: Qwen3-4B-Instruct-2507
- ๐ฏ **Training Method**: Reinforcement Learning (GRPO) with multiple reward functions
This model (`Qwen3-4B-I-1509`) is finetuned for **๐ง tool-use** and **๐ function call generation**.
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/65905af887944e494e37e09a/znFHp2gPLIsMN613HEESX.webp" width="160">
</p>
---
## ๐ Reward Functions
The model was trained with **multi-signal rewards**:
1. ๐ **Rule-based Reward**
โ๏ธ Checks correctness of function call name and arguments.
โ Partial credit for matching subsets of arguments (see the sketch after this list).
2. ๐ **Self-Certainty Reward**
โก Encourages confident predictions.
3. ๐ง **Tool-Call Reward**
โ Validates structural correctness.
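For intuition, here is a hedged sketch of what the rule-based reward could look like. The exact scoring used in training is not published, so the partial-credit formula and weights below are assumptions.

```python
# Illustrative rule-based reward for a predicted function call.
# The actual training reward is not published; weights here are assumptions.
def rule_based_reward(pred: dict, gold: dict) -> float:
    """Score a predicted call {"name": str, "arguments": dict} against gold."""
    if pred.get("name") != gold.get("name"):
        return 0.0  # wrong function name earns nothing
    gold_args = gold.get("arguments", {})
    if not gold_args:
        return 1.0  # correct name, no arguments to check
    pred_args = pred.get("arguments", {})
    # Partial credit: fraction of gold arguments reproduced exactly.
    matched = sum(1 for k, v in gold_args.items() if pred_args.get(k) == v)
    return 0.5 + 0.5 * (matched / len(gold_args))
```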
---
## โ๏ธ Training Configuration
- โก **Optimizer**: AdamW
- ๐ **Learning Rate**: 5e-6 with cosine decay (`min_lr_rate=0.1`)
- โณ **Scheduler**: cosine_with_min_lr
- ๐ **Generations per Prompt**: 4
---
## ๐ Eval Results
### Important notes
- Why are the scores lower than in the technical report?
  Hardware limits forced a reduced max-token budget during evaluation, for both models.
- Is the evaluation fair?
  I use the same configuration for every model I review, whether larger than or the same size as this one.
### Tau-Bench
| ๐ง Model | โ๏ธ Airline | ๐๏ธ Retail | โญ Overall |
|-------------------|------------|-------------|------------|
| Qwen3-4B-I-1509 | 0.2800 | **0.2783** | **0.2788** |
| Base Model | **0.3000** | 0.2261 | 0.2485 |
### ACEBench
| Model | Overall Accuracy |
|--------------------------------|------------------|
| Qwen3-4B-I-1509 | **0.677** |
| Qwen3-4B-Instruct-2507 (base) | 0.635 |
*Currently updating with more results.*
---
## Contributing
Contributions to this model are welcome, as is feedback on its performance and quality.
## ๐ Citation
If you use this model in your research or application, please cite:
```bibtex
@misc{qwen3-4b-i-1509,
title = {Qwen3-4B-I-1509: Fine-tuned Qwen3-4B-Instruct with GRPO for Tool-Use and Function Calling},
author = {Beyoru},
year = {2025},
howpublished = {\url{https://huggingface.co/beyoru/Qwen3-4B-I-1509}}
}
```
|
thefirstgoku/21_intergated_v32_9
|
thefirstgoku
| 2025-09-21T16:57:52Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-21T16:57:13Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
urm3l/model16
|
urm3l
| 2025-09-21T16:52:32Z | 37 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T19:44:11Z |
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** urm3l
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
PrevAIHealth/Llama3.2-1B-Instruct-Medical
|
PrevAIHealth
| 2025-09-21T16:51:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T16:50:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SandeepCodez/VCET-gemma-1b-it
|
SandeepCodez
| 2025-09-21T16:50:50Z | 0 | 0 | null |
[
"safetensors",
"gemma3_text",
"license:apache-2.0",
"region:us"
] | null | 2025-09-21T16:45:19Z |
---
license: apache-2.0
---
|
WenFengg/REP21Sun__14_23
|
WenFengg
| 2025-09-21T16:48:12Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-21T16:47:05Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
RH730/llama-3.2-11b-vision-eng-reviewer
|
RH730
| 2025-09-21T16:46:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mllama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T16:45:41Z |
---
base_model: unsloth/llama-3.2-11b-vision-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** RH730
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-11b-vision-instruct-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
safffrron/prompt_tuned_adalora
|
safffrron
| 2025-09-21T16:45:53Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2025-09-21T15:15:43Z |
---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: prompt_tuned_adalora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prompt_tuned_adalora
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4868 | 1.0 | 7125 | 2.5579 |
| 2.4337 | 2.0 | 14250 | 2.4991 |
| 2.3644 | 3.0 | 21375 | 2.4595 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
dario-mazzola/gemma-ft_function_calling
|
dario-mazzola
| 2025-09-21T16:42:57Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T16:15:11Z |
---
base_model: unsloth/gemma-3-1b-it
library_name: transformers
model_name: gemma-ft_function_calling
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for gemma-ft_function_calling
This model is a fine-tuned version of [unsloth/gemma-3-1b-it](https://huggingface.co/unsloth/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dario-mazzola/gemma-ft_function_calling", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.2
- Transformers: 4.55.4
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758472755
|
schooncestiaa
| 2025-09-21T16:40:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-21T16:40:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MelonWithGlasses/MelonAI-7B-Instruct
|
MelonWithGlasses
| 2025-09-21T16:40:33Z | 0 | 0 | null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-09-21T16:40:33Z |
---
license: cc-by-nc-4.0
---
|
moulibasha/tourism-package-prediction-model
|
moulibasha
| 2025-09-21T16:37:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-21T10:57:16Z |
# Tourism Package Prediction Model
- **Data:** [tourism-package-prediction-train-test](https://huggingface.co/datasets/moulibasha/tourism-package-prediction-train-test)
- **Best params:** `class_weight='balanced'`, `max_depth=None`, `min_samples_leaf=1`, `min_samples_split=5`, `n_estimators=300`
- **Metrics:** accuracy 0.9029, precision 0.8889, recall 0.5677, F1 0.6929
- **Pipeline:** preprocessing (imputer + one-hot) + RandomForest (a sketch of this pipeline follows below)
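A hedged reconstruction of the described pipeline, using the best params listed above. The column names and imputation strategies are placeholders, since the actual feature schema is only documented in the linked dataset.

```python
# Illustrative reconstruction - column names and imputer strategies are assumptions.
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import RandomForestClassifier

numeric_cols = ["Age", "MonthlyIncome"]          # placeholder columns
categorical_cols = ["Occupation", "CityTier"]    # placeholder columns

preprocess = ColumnTransformer([
    ("num", SimpleImputer(strategy="median"), numeric_cols),
    ("cat", Pipeline([
        ("impute", SimpleImputer(strategy="most_frequent")),
        ("onehot", OneHotEncoder(handle_unknown="ignore")),
    ]), categorical_cols),
])

model = Pipeline([
    ("preprocessing", preprocess),
    ("model", RandomForestClassifier(
        class_weight="balanced", max_depth=None,
        min_samples_leaf=1, min_samples_split=5, n_estimators=300,
    )),
])
# Usage: model.fit(X_train, y_train); model.predict(X_test)
```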
|
devovevo/partssource-longformer-combined-type-classifier
|
devovevo
| 2025-09-21T16:36:36Z | 10 | 0 | null |
[
"safetensors",
"endpoints_compatible",
"region:us"
] | null | 2025-08-21T01:08:18Z |
---
language:
- en
license: mit
library_name: transformers
tags:
- text-classification
- longformer
- customer-service
- case-classification
- sequence-classification
- pytorch
pipeline_tag: text-classification
widget:
- text: >-
I need help with my billing invoice and there seems to be an error in the
charges.
example_title: Billing Issue
- text: Can you help me return this defective part I received?
example_title: Return Request
- text: I need a quote for 100 units of part number ABC123
example_title: Quote Request
- text: My website login is not working properly
example_title: Website Issue
model-index:
- name: longformer-combined-classifier
results:
- task:
type: text-classification
name: Text Classification
dataset:
type: custom
name: Customer Service Cases
metrics:
- type: accuracy
name: Accuracy
value: 0.95
inference:
parameters:
max_length: 4096
truncation: true
padding: true
base_model:
- allenai/longformer-base-4096
---
# Longformer Combined Classifier
A robust Hugging Face Longformer model for sequence classification, specifically trained to classify customer service cases into case types and detailed categories.
## Model Overview
- **Base Model**: `longformer-base-4096`
- **Task**: Multi-class sequence classification
- **Labels**: 59 detailed labels across 12 main categories
- **Max Sequence Length**: 4096 tokens
- **Output Format**: `case_type|case_detail`
## Categories
The model classifies text into the following main categories:
- Account Update
- Billing
- Cancelation
- Customer Request
- Inventory
- Other
- Purchase Order
- Quote Request
- Repairs
- Returns
- Vendor Request
- Website
Each category has multiple detailed subcategories (59 total labels).
## Files
- `handler.py` - Hugging Face Inference Endpoints compatible handler with robust error handling
- `test_handler.py` - Comprehensive test script to validate the handler
- `requirements.txt` - Python dependencies
- `label_mappings.json` - Label mappings between IDs and human-readable labels
- `config.json` - Model configuration
- `model.safetensors` - Model weights
- `tokenizer.json` - Tokenizer configuration
- `tokenizer_config.json` - Tokenizer settings
- `vocab.json` - Vocabulary
- `special_tokens_map.json` - Special tokens mapping
## Installation
1. Install dependencies:
```bash
pip install -r requirements.txt
```
2. Ensure all model files are in the same directory as `handler.py`
## Usage
### Local Testing
Run the test script to validate everything works:
```bash
python test_handler.py
```
### Single Text Classification
```python
from handler import EndpointHandler
# Initialize handler
handler = EndpointHandler()
# Single prediction
data = {
"inputs": "I need help with my billing invoice and there seems to be an error in the charges."
}
result = handler(data)
print(result)
```
### Batch Classification
```python
from handler import EndpointHandler
# Initialize handler
handler = EndpointHandler()
# Batch prediction
data = {
"inputs": [
"I need help with my billing invoice and there seems to be an error in the charges.",
"Can you help me return this defective part I received?",
"I need a quote for 100 units of part number ABC123"
]
}
result = handler(data)
print(result)
```
### Compatibility Wrapper
For backward compatibility, a wrapper function is also available:
```python
from handler import handler
# Works with the same format as EndpointHandler
result = handler({"inputs": "Your text here"})
```
## Response Format
The handler returns predictions directly as a JSON list:
**Single Input Response:**
```json
[
{
"case_type": "Billing",
"case_detail": "Invoice Inquiry",
"full_label": "Billing|Invoice Inquiry",
"confidence": 0.9234,
"predicted_id": 5,
"top_3_predictions": [
{
"case_type": "Billing",
"case_detail": "Invoice Inquiry",
"confidence": 0.9234
},
{
"case_type": "Billing",
"case_detail": "Problem Invoice",
"confidence": 0.0456
},
{
"case_type": "Customer Request",
"case_detail": "Shipping Status",
"confidence": 0.0234
}
]
}
]
```
**Batch Input Response:**
```json
[
{
"case_type": "Billing",
"case_detail": "Invoice Inquiry",
"confidence": 0.9234,
"predicted_id": 5,
"top_3_predictions": [...]
},
{
"case_type": "Returns",
"case_detail": "Return Request",
"confidence": 0.8756,
"predicted_id": 48,
"top_3_predictions": [...]
}
]
```
Processing time, batch size, and model info are logged but not included in the response for cleaner output.
## Robust Features
### Token Limit Handling
- Automatically truncates texts longer than 4096 tokens
- Prevents model crashes from oversized inputs
- Logs warnings when truncation occurs (a sketch of this guard follows below)
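A minimal sketch of this truncation guard, assuming a standard Hugging Face tokenizer; the shipped `handler.py` may differ in detail.

```python
# Illustrative truncation guard - the shipped handler.py may differ.
import logging
from transformers import AutoTokenizer

logger = logging.getLogger(__name__)
MAX_LENGTH = 4096  # Longformer context limit

tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")

def encode_with_guard(texts: list[str]):
    # Warn when any input exceeds the model's context window.
    for i, text in enumerate(texts):
        n_tokens = len(tokenizer(text)["input_ids"])
        if n_tokens > MAX_LENGTH:
            logger.warning("Input %d has %d tokens; truncating to %d.",
                           i, n_tokens, MAX_LENGTH)
    # Tokenizer-side truncation prevents oversized inputs from crashing the model.
    return tokenizer(texts, truncation=True, max_length=MAX_LENGTH,
                     padding=True, return_tensors="pt")
```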
### Batch Processing
- Supports batch inference for efficiency
- Configurable batch size (default: 8)
- Handles mixed valid/invalid inputs gracefully
### Error Handling
- Comprehensive error handling and logging
- Graceful degradation for invalid inputs
- Returns meaningful error messages
### Logging
- Extensive logging for debugging and monitoring
- Logs to both console and file (`model_inference.log`)
- Different log levels for different scenarios
### Input Validation
- Handles empty strings and whitespace-only inputs gracefully
- Validates input format and structure
- Returns "Other|Junk" predictions for empty inputs (using actual label from mappings)
## Deployment
### Hugging Face Inference Endpoints (Recommended)
The model includes a handler (`handler.py`) that implements the `EndpointHandler` interface required by HF Inference Endpoints.
#### Prerequisites
1. Push your model to the Hugging Face Hub
2. Ensure all files are in your repository:
- `handler.py`
- `requirements.txt`
- `label_mappings.json`
- All model files (`*.safetensors`, `config.json`, etc.)
#### Deployment Steps
1. **Prepare the Repository**:
```bash
# Push to HF Hub
git add .
git commit -m "Add HF Inference Endpoints handler"
git push
```
2. **Create Inference Endpoint**:
- Go to [Hugging Face Inference Endpoints](https://ui.endpoints.huggingface.co/)
- Click "Create new endpoint"
- Select your model repository
- In **Advanced Configuration**:
- Set **Framework** to "Custom" (important!)
- Choose appropriate instance type (GPU recommended)
- Set memory to at least 8GB
3. **Test the Endpoint**:
```python
import requests
# Single prediction
response = requests.post(
"https://your-endpoint-url.endpoints.huggingface.cloud",
headers={"Authorization": "Bearer YOUR_TOKEN"},
json={"inputs": "I need help with my billing invoice"}
)
# Batch prediction
response = requests.post(
"https://your-endpoint-url.endpoints.huggingface.cloud",
headers={"Authorization": "Bearer YOUR_TOKEN"},
json={"inputs": ["Text 1", "Text 2", "Text 3"]}
)
```
#### Input Format
The handler expects the standard HF Inference Endpoints format:
```json
{
"inputs": "Single text string"
}
```
Or for batch processing:
```json
{
"inputs": ["Text 1", "Text 2", "Text 3"]
}
```
#### Response Format
The handler returns predictions directly as a list:
**Single Input:**
```json
[
{
"case_type": "Billing",
"case_detail": "Invoice Inquiry",
"full_label": "Billing|Invoice Inquiry",
"confidence": 0.9234,
"predicted_id": 5,
"top_3_predictions": [
{
"case_type": "Billing",
"case_detail": "Invoice Inquiry",
"confidence": 0.9234
},
{
"case_type": "Billing",
"case_detail": "Credit Request (Customer Complaint)",
"confidence": 0.0456
},
{
"case_type": "Customer Request",
"case_detail": "Shipping Status",
"confidence": 0.0234
}
]
}
]
```
**Batch Input:**
```json
[
{
"case_type": "Billing",
"case_detail": "Invoice Inquiry",
"confidence": 0.9234,
"predicted_id": 5,
"top_3_predictions": [...]
},
{
"case_type": "Returns",
"case_detail": "Return Request",
"confidence": 0.8756,
"predicted_id": 48,
"top_3_predictions": [...]
}
]
```
**Empty Input:**
```json
[]
```
Processing time and batch size are logged but not returned in the response.
### AWS Lambda
1. Package the model and handler:
```bash
# Create deployment package
zip -r deployment.zip handler.py requirements.txt *.json *.safetensors
```
2. Create Lambda function with:
- Runtime: Python 3.9+
- Handler: `handler.handler`
- Memory: 3008 MB (recommended for model loading)
- Timeout: 5 minutes
### Docker Deployment
Create a `Dockerfile`:
```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "-c", "from handler import handler; import json; import sys; event = json.loads(sys.argv[1]); print(json.dumps(handler(event)))", "{}"]
```
### SageMaker Endpoint
The handler is compatible with SageMaker inference endpoints. Use the `handler` function as your inference entry point.
## Performance Considerations
- **GPU Recommended**: Model performs significantly better on GPU
- **Memory Requirements**: ~2-3GB RAM for model loading
- **Batch Size**: Adjust `max_batch_size` based on available memory
- **Cold Start**: First inference may take longer due to model loading
## Monitoring
The handler provides comprehensive logging and metrics:
- Processing times
- Token counts and truncation warnings
- Error rates and types
- Batch sizes and throughput
Monitor the `model_inference.log` file for detailed operation logs.
## Troubleshooting
### Common Issues
1. **Out of Memory**: Reduce `max_batch_size` in handler
2. **Slow Performance**: Ensure GPU is available and being used
3. **Model Loading Errors**: Verify all model files are present
4. **Token Limit Errors**: Check logs for truncation warnings
### Debug Mode
Enable debug logging by modifying the logging level in `handler.py`:
```python
logging.basicConfig(level=logging.DEBUG, ...)
```
## Testing
Run comprehensive tests:
```bash
python test_handler.py
```
The test script validates:
- Single and batch predictions
- Long text handling and truncation
- Edge case handling (including empty inputs)
- Error scenarios
- Model information retrieval
- HF Inference Endpoints compatibility
## License
This model and handler are for internal use. Ensure compliance with your organization's AI/ML usage policies.
## Support
For issues or questions:
1. Check the logs in `model_inference.log`
2. Run the test script to validate setup
3. Review the troubleshooting section above
|
sweatSmile/DialoGPT-Quantitative-Risk-Analysis-Expert
|
sweatSmile
| 2025-09-21T16:33:25Z | 0 | 1 |
transformers
|
[
"transformers",
"conversational-ai",
"finance",
"fintech",
"risk-management",
"quantitative-analysis",
"financial-risk",
"risk-assessment",
"lora",
"hedge-funds",
"investment-banking",
"volatility-modeling",
"risk-metrics",
"portfolio-risk",
"market-risk",
"credit-risk",
"operational-risk",
"text-generation",
"en",
"dataset:AdaptLLM/finance-tasks",
"base_model:microsoft/DialoGPT-medium",
"base_model:adapter:microsoft/DialoGPT-medium",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T16:28:56Z |
---
base_model: microsoft/DialoGPT-medium
pipeline_tag: text-generation
library_name: transformers
tags:
- conversational-ai
- finance
- fintech
- risk-management
- quantitative-analysis
- financial-risk
- risk-assessment
- lora
- hedge-funds
- investment-banking
- volatility-modeling
- risk-metrics
- portfolio-risk
- market-risk
- credit-risk
- operational-risk
language:
- en
license: mit
datasets:
- AdaptLLM/finance-tasks
metrics:
- perplexity
- accuracy
widget:
- text: "<|user|> As a quantitative risk analyst, please analyze: What is the Value at Risk for a portfolio with 60% equity and 40% bonds during high volatility periods? <|bot|>"
example_title: "VaR Analysis"
- text: "<|user|> As a quantitative risk analyst, please analyze: How do correlation changes affect portfolio risk during market stress events? <|bot|>"
example_title: "Correlation Risk Assessment"
- text: "<|user|> As a quantitative risk analyst, please analyze: What are the key risk metrics for evaluating credit exposure in derivatives trading? <|bot|>"
example_title: "Credit Risk Evaluation"
---
# DialoGPT-Quantitative-Risk-Analysis-Expert
Fine-tuned DialoGPT-medium for advanced quantitative risk analysis, financial risk modeling, and comprehensive risk management consultations.
## Overview
- **Base Model:** microsoft/DialoGPT-medium (355M parameters)
- **Fine-tuning Method:** LoRA (4-bit quantization)
- **Dataset:** Financial risk analysis dataset (800 expert-level samples)
- **Training:** 3 epochs with optimized hyperparameters
## Key Features
- Advanced quantitative risk modeling and analysis
- Value at Risk (VaR) and Expected Shortfall calculations
- Portfolio risk assessment and optimization
- Market risk, credit risk, and operational risk evaluation
- Volatility modeling and stress testing scenarios
- Risk metric interpretation and regulatory compliance
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("sweatSmile/DialoGPT-Quantitative-Risk-Analysis-Expert")
tokenizer = AutoTokenizer.from_pretrained("sweatSmile/DialoGPT-Quantitative-Risk-Analysis-Expert")
# Quantitative risk analysis example
prompt = "<|user|> As a quantitative risk analyst, please analyze: How do we calculate risk-adjusted returns for a multi-asset portfolio under different market scenarios? <|bot|>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=250, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Applications
- Risk management department consultations
- Hedge fund risk assessment and monitoring
- Investment bank risk modeling and analysis
- Portfolio risk optimization and stress testing
- Regulatory compliance and risk reporting
- Quantitative research and model validation
## Training Details
- LoRA rank: 8, alpha: 16
- 4-bit NF4 quantization with bfloat16 precision
- Learning rate: 1e-4 with cosine scheduling
- Batch size: 8, Max length: 400 tokens
- 3 epochs on curated financial risk analysis dataset
Specialized for sophisticated quantitative risk analysis and modeling in institutional finance environments.
|
WenFengg/REP21Sun__14_19
|
WenFengg
| 2025-09-21T16:33:03Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-21T16:31:29Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
SandeepCodez/gemma-vcet-log
|
SandeepCodez
| 2025-09-21T16:32:19Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T15:52:25Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: gemma-vcet-log
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma-vcet-log
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SandeepCodez/gemma-vcet-log", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.2
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758472137
|
schooncestiaa
| 2025-09-21T16:30:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-21T16:29:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MrAnton/SmolVLM-256M-Instruct_grpo_carrot_plate_yesno_task
|
MrAnton
| 2025-09-21T16:29:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"idefics3",
"image-to-text",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:HuggingFaceTB/SmolVLM-256M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolVLM-256M-Instruct",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-21T16:04:08Z |
---
base_model: HuggingFaceTB/SmolVLM-256M-Instruct
library_name: transformers
model_name: SmolVLM-256M-Instruct_grpo_carrot_plate_yesno_task
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for SmolVLM-256M-Instruct_grpo_carrot_plate_yesno_task
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM-256M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-256M-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MrAnton/SmolVLM-256M-Instruct_grpo_carrot_plate_yesno_task", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.2.0+cu121
- Datasets: 3.3.2
- Tokenizers: 0.22.0
## Citations
Cite GRPO as:
```bibtex
@article{shao2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
palusi/LAMP-Decision
|
palusi
| 2025-09-21T16:29:01Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-21T16:29:01Z |
---
license: apache-2.0
---
|
mradermacher/Gemma_Delirium_Rewired_9B-GGUF
|
mradermacher
| 2025-09-21T16:25:41Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:SzilviaB/Gemma_Delirium_Rewired_9B",
"base_model:quantized:SzilviaB/Gemma_Delirium_Rewired_9B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-21T08:22:11Z |
---
base_model: SzilviaB/Gemma_Delirium_Rewired_9B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/SzilviaB/Gemma_Delirium_Rewired_9B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Gemma_Delirium_Rewired_9B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Gemma_Delirium_Rewired_9B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma_Delirium_Rewired_9B-GGUF/resolve/main/Gemma_Delirium_Rewired_9B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_Delirium_Rewired_9B-GGUF/resolve/main/Gemma_Delirium_Rewired_9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_Delirium_Rewired_9B-GGUF/resolve/main/Gemma_Delirium_Rewired_9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma_Delirium_Rewired_9B-GGUF/resolve/main/Gemma_Delirium_Rewired_9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_Delirium_Rewired_9B-GGUF/resolve/main/Gemma_Delirium_Rewired_9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_Delirium_Rewired_9B-GGUF/resolve/main/Gemma_Delirium_Rewired_9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma_Delirium_Rewired_9B-GGUF/resolve/main/Gemma_Delirium_Rewired_9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma_Delirium_Rewired_9B-GGUF/resolve/main/Gemma_Delirium_Rewired_9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_Delirium_Rewired_9B-GGUF/resolve/main/Gemma_Delirium_Rewired_9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_Delirium_Rewired_9B-GGUF/resolve/main/Gemma_Delirium_Rewired_9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma_Delirium_Rewired_9B-GGUF/resolve/main/Gemma_Delirium_Rewired_9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma_Delirium_Rewired_9B-GGUF/resolve/main/Gemma_Delirium_Rewired_9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ThiagoVsky/DeepSeek-R1-Distill-Qwen-7B-Multilingual-Q4_K_S-GGUF_split
|
ThiagoVsky
| 2025-09-21T16:25:03Z | 0 | 0 | null |
[
"gguf",
"reasoning",
"llama-cpp",
"gguf-my-repo",
"am",
"ar",
"bn",
"zh",
"cs",
"nl",
"en",
"fr",
"de",
"el",
"ha",
"he",
"hi",
"id",
"it",
"ja",
"jv",
"km",
"ko",
"lo",
"ms",
"mr",
"fa",
"pl",
"pt",
"ro",
"ru",
"es",
"sw",
"sv",
"tl",
"ta",
"te",
"th",
"tr",
"uk",
"ur",
"vi",
"dataset:lightblue/reasoning-multilingual-R1-Llama-70B-train",
"base_model:lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual",
"base_model:quantized:lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-21T16:17:29Z |
---
language:
- am
- ar
- bn
- zh
- cs
- nl
- en
- fr
- de
- el
- ha
- he
- hi
- id
- it
- ja
- jv
- km
- ko
- lo
- ms
- mr
- fa
- pl
- pt
- ro
- ru
- es
- sw
- sv
- tl
- ta
- te
- th
- tr
- uk
- ur
- vi
license: apache-2.0
datasets:
- lightblue/reasoning-multilingual-R1-Llama-70B-train
tags:
- reasoning
- llama-cpp
- gguf-my-repo
base_model: lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual
---
# ThiagoVsky/DeepSeek-R1-Distill-Qwen-7B-Multilingual-Q4_K_S-GGUF
This model was converted to GGUF format from [`lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual`](https://huggingface.co/lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ThiagoVsky/DeepSeek-R1-Distill-Qwen-7B-Multilingual-Q4_K_S-GGUF --hf-file deepseek-r1-distill-qwen-7b-multilingual-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ThiagoVsky/DeepSeek-R1-Distill-Qwen-7B-Multilingual-Q4_K_S-GGUF --hf-file deepseek-r1-distill-qwen-7b-multilingual-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo ThiagoVsky/DeepSeek-R1-Distill-Qwen-7B-Multilingual-Q4_K_S-GGUF --hf-file deepseek-r1-distill-qwen-7b-multilingual-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo ThiagoVsky/DeepSeek-R1-Distill-Qwen-7B-Multilingual-Q4_K_S-GGUF --hf-file deepseek-r1-distill-qwen-7b-multilingual-q4_k_s.gguf -c 2048
```
|
JW17/Q25-1.5B-BTRM-SKWv2
|
JW17
| 2025-09-21T16:22:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T16:14:35Z |
---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: Qwen2.5-1.5B-GRPO-rm
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-GRPO-rm
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JW17/Q25-1.5B-BTRM-SKWv2", device="cuda")  # repo id substituted for the auto-generated "None"
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jiwooya1000/ICRM-RLVR-Math/runs/ulwxm0yo)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.22.2
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{shao2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
WenFengg/REP21Sun_14_17
|
WenFengg
| 2025-09-21T16:19:35Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-21T16:18:39Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mradermacher/nemotron-medical-tuned-70b-GGUF
|
mradermacher
| 2025-09-21T16:17:55Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:alperk3003/nemotron-medical-tuned-70b",
"base_model:quantized:alperk3003/nemotron-medical-tuned-70b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-21T15:01:01Z |
---
base_model: alperk3003/nemotron-medical-tuned-70b
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/alperk3003/nemotron-medical-tuned-70b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#nemotron-medical-tuned-70b-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
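The multi-part quants below (Q6_K and Q8_0) are plain splits, so the parts just need to be concatenated in order — e.g. `cat file.part1of2 file.part2of2 > file.gguf` — before loading. A minimal Python sketch, assuming the Q6_K filenames from the table:
```python
# Minimal sketch: join the Q6_K parts (names from the table below) into one GGUF.
import shutil

parts = [
    "nemotron-medical-tuned-70b.Q6_K.gguf.part1of2",
    "nemotron-medical-tuned-70b.Q6_K.gguf.part2of2",
]
with open("nemotron-medical-tuned-70b.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```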
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/nemotron-medical-tuned-70b-GGUF/resolve/main/nemotron-medical-tuned-70b.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/nemotron-medical-tuned-70b-GGUF/resolve/main/nemotron-medical-tuned-70b.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/nemotron-medical-tuned-70b-GGUF/resolve/main/nemotron-medical-tuned-70b.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/nemotron-medical-tuned-70b-GGUF/resolve/main/nemotron-medical-tuned-70b.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/nemotron-medical-tuned-70b-GGUF/resolve/main/nemotron-medical-tuned-70b.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/nemotron-medical-tuned-70b-GGUF/resolve/main/nemotron-medical-tuned-70b.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/nemotron-medical-tuned-70b-GGUF/resolve/main/nemotron-medical-tuned-70b.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/nemotron-medical-tuned-70b-GGUF/resolve/main/nemotron-medical-tuned-70b.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/nemotron-medical-tuned-70b-GGUF/resolve/main/nemotron-medical-tuned-70b.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/nemotron-medical-tuned-70b-GGUF/resolve/main/nemotron-medical-tuned-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/nemotron-medical-tuned-70b-GGUF/resolve/main/nemotron-medical-tuned-70b.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/nemotron-medical-tuned-70b-GGUF/resolve/main/nemotron-medical-tuned-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/nemotron-medical-tuned-70b-GGUF/resolve/main/nemotron-medical-tuned-70b.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Konzai/ppo-LunarLander-v2
|
Konzai
| 2025-09-21T16:13:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-21T16:13:32Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.54 +/- 17.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it into a PPO agent.
checkpoint = load_from_hub(repo_id="Konzai/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758470912
|
schooncestiaa
| 2025-09-21T16:09:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-21T16:09:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DeathGodlike/Ollpheist-12B_EXL3
|
DeathGodlike
| 2025-09-21T16:05:03Z | 0 | 0 |
safetensors
|
[
"safetensors",
"exl3",
"4-bit",
"6-bit",
"8-bit",
"text-generation",
"base_model:Retreatcost/Ollpheist-12B",
"base_model:quantized:Retreatcost/Ollpheist-12B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-09-21T16:05:01Z |
---
license: apache-2.0
base_model:
- Retreatcost/Ollpheist-12B
base_model_relation: quantized
pipeline_tag: text-generation
library_name: safetensors
tags:
- exl3
- 4-bit
- 6-bit
- 8-bit
---
## EXL3 quants: [ [H8-4.0BPW](https://huggingface.co/DeathGodlike/Ollpheist-12B_EXL3/tree/H8-4.0BPW) | [H8-6.0BPW](https://huggingface.co/DeathGodlike/Ollpheist-12B_EXL3/tree/H8-6.0BPW) | [H8-8.0BPW](https://huggingface.co/DeathGodlike/Ollpheist-12B_EXL3/tree/H8-8.0BPW) ]
# Original model: [Ollpheist-12B](https://huggingface.co/Retreatcost/Ollpheist-12B) by [Retreatcost](https://huggingface.co/Retreatcost)
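A minimal download sketch, assuming the branch names from the links above (each quant lives on its own branch):
```python
from huggingface_hub import snapshot_download

# Each EXL3 quant is stored on its own branch; pick one via `revision`.
path = snapshot_download(
    repo_id="DeathGodlike/Ollpheist-12B_EXL3",
    revision="H8-6.0BPW",
)
print(path)
```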
|
n1kg0r/rubert_mvp
|
n1kg0r
| 2025-09-21T16:03:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-21T16:02:31Z |
---
library_name: transformers
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
model-index:
- name: rubert_mvp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert_mvp
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4511
- Mse: 0.4511
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8506 | 1.0 | 95 | 0.4740 | 0.4740 |
| 0.5292 | 2.0 | 190 | 0.4301 | 0.4301 |
| 0.3854 | 3.0 | 285 | 0.4511 | 0.4511 |
### Framework versions
- Transformers 4.56.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
Aya-Ch/ALLaM7B-Islamic-LoRA
|
Aya-Ch
| 2025-09-21T15:55:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T15:54:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Amirdferff/sst2-bert-base-uncased
|
Amirdferff
| 2025-09-21T15:55:01Z | 44 | 1 | null |
[
"safetensors",
"bert",
"en",
"dataset:stanfordnlp/sst2",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2025-08-27T06:40:00Z |
---
license: apache-2.0
datasets:
- stanfordnlp/sst2
language:
- en
base_model:
- google-bert/bert-base-uncased
---
# BERT fine-tuned on SST-2
This model is a fine-tuned version of **bert-base-uncased** on the **GLUE SST-2 dataset** for **sentiment analysis**.
It achieves strong results on the validation set, reaching **~92.6% accuracy**.
---
## Evaluation Results
- **Validation Accuracy:** 0.9266 (≈92.66%)
Raw output from the `evaluate` library: `{'accuracy': 0.9266}`
---
## How it works
- **Input:** A single English sentence
- **Output:** `POSITIVE` or `NEGATIVE` with a confidence score
- **Architecture:**
- Base model: BERT (bert-base-uncased)
- Classification head: 2-label linear layer on top of [CLS] token
- **Training setup:**
- Optimizer: AdamW
- Scheduler: Linear LR decay
- Epochs: 3
- Batch size: 16
---
## Usage
### Inference with `pipeline`
```python
from transformers import pipeline
clf = pipeline("text-classification", model="Amirdferff/sst2-bert-base-uncased")
print(clf("I really loved this movie!"))
# [{'label': 'POSITIVE', 'score': 0.98}]
```
|
Stef7177/camembert-triathlon-coach
|
Stef7177
| 2025-09-21T15:50:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-21T15:49:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758469666
|
schooncestiaa
| 2025-09-21T15:48:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-21T15:48:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
astrooons/blockassist
|
astrooons
| 2025-09-21T15:47:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"voracious quiet bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-21T14:56:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- voracious quiet bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rlogh/cheese-texture-classifier-final
|
rlogh
| 2025-09-21T15:44:42Z | 0 | 0 | null |
[
"pytorch",
"image-classification",
"cheese",
"texture",
"computer-vision",
"transfer-learning",
"final",
"dataset:aslan-ng/cheese-image",
"license:mit",
"model-index",
"region:us"
] |
image-classification
| 2025-09-21T15:44:37Z |
---
license: mit
tags:
- image-classification
- cheese
- texture
- computer-vision
- pytorch
- transfer-learning
- final
datasets:
- aslan-ng/cheese-image
metrics:
- accuracy
model-index:
- name: Final Cheese Texture Classifier
results:
- task:
type: image-classification
name: Cheese Texture Classification
dataset:
type: aslan-ng/cheese-image
name: Cheese Image Dataset
metrics:
- type: accuracy
value: 40.00
name: Test Accuracy
---
# Final Cheese Texture Classifier
This is the final version of the cheese texture classifier that fixes the BatchNorm issue with small batch sizes.
## Model Description
- **Architecture**: Transfer Learning with resnet18
- **Task**: 4-class texture classification (Low, Medium-Low, Medium-High, High texture)
- **Input**: 224x224 RGB images
- **Output**: 4-class probability distribution
## Final Features
- **Transfer Learning**: Uses pre-trained resnet18 as backbone
- **BatchNorm Fixed**: No BatchNorm1d layers to avoid small batch size issues
- **Safe Data Augmentation**: Transforms that work with small datasets
- **Final AutoML**: 20 trials with transfer learning hyperparameters
- **Extended Training**: Up to 50 epochs with careful early stopping
## Training Details
- **Dataset**: [aslan-ng/cheese-image](https://huggingface.co/datasets/aslan-ng/cheese-image)
- **Optimization Method**: Final Optuna AutoML with 20 trials
- **Transfer Learning**: Pre-trained resnet18 backbone
- **Early Stopping**: Yes (patience=10)
- **Max Epochs**: 50
## Performance
- **Test Accuracy**: 40.00%
- **Validation Accuracy**: 75.00%
- **Test Loss**: 0.9921
## Best Hyperparameters
```json
{
"model_name": "resnet18",
"dropout_rate": 0.32484158566728777,
"learning_rate": 0.00015218971132928362,
"weight_decay": 9.737804011286956e-05,
"batch_size": 2
}
```
## Usage
```python
import torch
import torch.nn as nn
from PIL import Image
import torchvision.transforms as transforms
import torchvision.models as models

# Load model (you'll need to define the TransferLearningModel class first)
model = TransferLearningModel(num_classes=4, dropout_rate=0.32484158566728777, model_name='resnet18')
model.load_state_dict(torch.load('pytorch_model.bin'))
model.eval()

# Preprocess image
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])

# Load and preprocess image
image = Image.open('cheese_image.jpg').convert('RGB')
input_tensor = transform(image).unsqueeze(0)

# Make prediction
with torch.no_grad():
    output = model(input_tensor)
    probabilities = torch.softmax(output, dim=1)
    predicted_class = torch.argmax(probabilities, dim=1).item()

class_names = ["Low Texture", "Medium-Low Texture", "Medium-High Texture", "High Texture"]
print(f"Predicted class: {class_names[predicted_class]}")
```
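The `TransferLearningModel` class referenced above is not defined in the card; below is a minimal sketch consistent with the description (resnet18 backbone, dropout, linear head, no BatchNorm1d). The exact head layout is an assumption, so the released state dict may not match it without adjustment:
```python
import torch.nn as nn
import torchvision.models as models

class TransferLearningModel(nn.Module):
    """Pre-trained backbone with a dropout + linear head (no BatchNorm1d)."""
    def __init__(self, num_classes=4, dropout_rate=0.3, model_name="resnet18"):
        super().__init__()
        assert model_name == "resnet18", "this sketch only covers resnet18"
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        in_features = self.backbone.fc.in_features
        self.backbone.fc = nn.Sequential(
            nn.Dropout(dropout_rate),
            nn.Linear(in_features, num_classes),
        )

    def forward(self, x):
        return self.backbone(x)
```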
## Class Definitions
- **Class 0 (Low Texture)**: Texture values <= 0.425
- **Class 1 (Medium-Low Texture)**: Texture values 0.425 < x <= 0.600
- **Class 2 (Medium-High Texture)**: Texture values 0.600 < x <= 0.775
- **Class 3 (High Texture)**: Texture values > 0.775
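These thresholds translate directly into a small helper (a transcription of the list above, not code from the training repo):
```python
def texture_class(value: float) -> int:
    """Map a continuous texture value to one of the 4 classes above."""
    if value <= 0.425:
        return 0  # Low Texture
    if value <= 0.600:
        return 1  # Medium-Low Texture
    if value <= 0.775:
        return 2  # Medium-High Texture
    return 3      # High Texture
```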
## Final Improvements
- **BatchNorm Fixed**: Removed BatchNorm1d layers that caused issues with batch size 1
- **Transfer Learning**: Leverages pre-trained features for better performance
- **Safe Augmentation**: Transforms that work reliably with small datasets
- **Advanced Training**: Gradient clipping, learning rate scheduling, extended epochs
- **Final AutoML**: 20 trials with transfer learning specific hyperparameters
## Limitations
- Trained on a very small dataset (30 images)
- Texture classification may not generalize to all cheese types
- Performance may vary with different lighting conditions or image quality
## Citation
If you use this model, please cite the original dataset:
```bibtex
@dataset{aslan-ng/cheese-image,
title={Cheese Image Dataset},
author={Aslan Noorghasemi},
year={2024},
url={https://huggingface.co/datasets/aslan-ng/cheese-image}
}
```
|
Aname-Tommy/Mel-Band-Roformer_Duality
|
Aname-Tommy
| 2025-09-21T15:39:37Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-21T15:24:54Z |
---
license: apache-2.0
---
|
PushkarKumar/veritas-ai-isot-fake-news-classifier
|
PushkarKumar
| 2025-09-21T15:35:50Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"distilbert",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2025-09-21T14:37:05Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: veritas-ai-isot-fake-news-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# veritas-ai-isot-fake-news-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0016
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0001 | 1.0 | 4490 | 0.0016 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.8.0+cu126
- Datasets 2.14.7
- Tokenizers 0.14.1
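No usage snippet is provided; a minimal inference sketch based on the text-classification setup (the label names come from the model's config and are not documented here):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="PushkarKumar/veritas-ai-isot-fake-news-classifier",
)
print(clf("Breaking: scientists confirm the moon is made of cheese."))
```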
|
Thang26/Lora-Qwen2.5-3B-JP2EN
|
Thang26
| 2025-09-21T15:31:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T13:53:31Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OddTheGreat/Terminal_24B_V.2
|
OddTheGreat
| 2025-09-21T15:30:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:CrucibleLab/M3.2-24B-Loki-V1.3",
"base_model:merge:CrucibleLab/M3.2-24B-Loki-V1.3",
"base_model:OddTheGreat/Circuitry_24B_V.2",
"base_model:merge:OddTheGreat/Circuitry_24B_V.2",
"base_model:SicariusSicariiStuff/Impish_Magic_24B",
"base_model:merge:SicariusSicariiStuff/Impish_Magic_24B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T13:13:03Z |
---
base_model:
- CrucibleLab/M3.2-24B-Loki-V1.3
- SicariusSicariiStuff/Impish_Magic_24B
- OddTheGreat/Circuitry_24B_V.2
library_name: transformers
tags:
- mergekit
- merge
---
# Terminal_24B_V.2
This is a merge of pre-trained language models.
Still in testing. It seems more accurate than v1, with no impersonation, though it sometimes adds a summary at the end of a reply.
|
mradermacher/llama-user-sim-70b-GGUF
|
mradermacher
| 2025-09-21T15:19:01Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ychen/llama-user-sim-70b",
"base_model:quantized:ychen/llama-user-sim-70b",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T13:47:36Z |
---
base_model: ychen/llama-user-sim-70b
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/ychen/llama-user-sim-70b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#llama-user-sim-70b-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-user-sim-70b-GGUF/resolve/main/llama-user-sim-70b.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/llama-user-sim-70b-GGUF/resolve/main/llama-user-sim-70b.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-user-sim-70b-GGUF/resolve/main/llama-user-sim-70b.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-user-sim-70b-GGUF/resolve/main/llama-user-sim-70b.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama-user-sim-70b-GGUF/resolve/main/llama-user-sim-70b.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-user-sim-70b-GGUF/resolve/main/llama-user-sim-70b.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-user-sim-70b-GGUF/resolve/main/llama-user-sim-70b.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-user-sim-70b-GGUF/resolve/main/llama-user-sim-70b.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-user-sim-70b-GGUF/resolve/main/llama-user-sim-70b.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/llama-user-sim-70b-GGUF/resolve/main/llama-user-sim-70b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama-user-sim-70b-GGUF/resolve/main/llama-user-sim-70b.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/llama-user-sim-70b-GGUF/resolve/main/llama-user-sim-70b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama-user-sim-70b-GGUF/resolve/main/llama-user-sim-70b.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
SauravCh11/Passport-EN
|
SauravCh11
| 2025-09-21T15:17:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-21T08:50:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
little-john/lj-insurance-doc-classification-Skywork-Reward-V2-Qwen3-0.6B
|
little-john
| 2025-09-21T15:16:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"text-classification",
"en",
"base_model:Skywork/Skywork-Reward-V2-Qwen3-0.6B",
"base_model:finetune:Skywork/Skywork-Reward-V2-Qwen3-0.6B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-21T15:00:58Z |
---
base_model: Skywork/Skywork-Reward-V2-Qwen3-0.6B
tags:
- transformers
- unsloth
- qwen3
- text-classification
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** little-john
- **License:** apache-2.0
- **Finetuned from model:** Skywork/Skywork-Reward-V2-Qwen3-0.6B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
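No usage example is included; a minimal sketch assuming the `text-classification` pipeline tag applies directly (labels and preprocessing are not documented):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="little-john/lj-insurance-doc-classification-Skywork-Reward-V2-Qwen3-0.6B",
)
print(clf("Policyholder requests reimbursement for a dental procedure."))
```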
|
KoichiYasuoka/modernbert-large-scandinavian-ud-embeds
|
KoichiYasuoka
| 2025-09-21T15:11:27Z | 0 | 0 | null |
[
"pytorch",
"modernbert",
"scandinavian",
"icelandic",
"danish",
"swedish",
"norwegian",
"token-classification",
"pos",
"dependency-parsing",
"is",
"da",
"sv",
"nb",
"nn",
"dataset:universal_dependencies",
"base_model:AI-Sweden-Models/ModernBERT-large",
"base_model:finetune:AI-Sweden-Models/ModernBERT-large",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2025-09-21T15:05:49Z |
---
language:
- "is"
- "da"
- "sv"
- "nb"
- "nn"
tags:
- "scandinavian"
- "icelandic"
- "danish"
- "swedish"
- "norwegian"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: AI-Sweden-Models/ModernBERT-large
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
---
# modernbert-large-scandinavian-ud-embeds
## Model Description
This is a ModernBERT model for POS-tagging and dependency-parsing, derived from [AI-Sweden-Models/ModernBERT-large](https://huggingface.co/AI-Sweden-Models/ModernBERT-large), [UD_Icelandic-IcePaHC](https://github.com/UniversalDependencies/UD_Icelandic-IcePaHC), [UD_Danish-DDT](https://github.com/UniversalDependencies/UD_Danish-DDT), [UD_Swedish-Talbanken](https://github.com/UniversalDependencies/UD_Swedish-Talbanken), [UD_Norwegian-Bokmaal](https://github.com/UniversalDependencies/UD_Norwegian-Bokmaal) and [UD_Norwegian-Nynorsk](https://github.com/UniversalDependencies/UD_Norwegian-Nynorsk).
## How to Use
```py
from transformers import pipeline
nlp = pipeline("universal-dependencies", "KoichiYasuoka/modernbert-large-scandinavian-ud-embeds", trust_remote_code=True)
```
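A quick call sketch (the Swedish sentence is arbitrary; per the author's other UD models, the custom pipeline returns CoNLL-U-style analyses — an assumption here):
```py
print(nlp("Hon läser en bok."))  # Swedish: "She is reading a book."
```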
|
twelveyy/qwen-legal-lora-sft
|
twelveyy
| 2025-09-21T15:01:47Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"region:us"
] | null | 2025-09-21T15:01:21Z |
---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: peft
model_name: qwen_legal_lora_sft
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for qwen_legal_lora_sft
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="twelveyy/qwen-legal-lora-sft", device="cuda")  # repo id substituted for the auto-generated "None"
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- PEFT 0.15.2
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
wjen/healthcare-cls-Qwen3-0.6B-LoRA
|
wjen
| 2025-09-21T15:00:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-21T14:29:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mingyi456/Chroma1-Base-DF11
|
mingyi456
| 2025-09-21T14:58:30Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"en",
"base_model:lodestones/Chroma1-Base",
"base_model:quantized:lodestones/Chroma1-Base",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-09-21T13:19:17Z |
---
license: apache-2.0
base_model:
- lodestones/Chroma1-Base
base_model_relation: quantized
language:
- en
pipeline_tag: text-to-image
library_name: diffusers
---
For more information (including how to compress models yourself), check out https://huggingface.co/DFloat11 and https://github.com/LeanModels/DFloat11
This is my first time using DF11 to compress a model outside the Flux architecture. Compressing Flux-based models is much more straightforward than other architectures: the compression code requires a `pattern_dict` as input, but the original [example code](https://github.com/LeanModels/DFloat11/tree/master/examples/compress_flux1) only provides one for Flux, so I had to learn the notation myself and modify it to fit other models. Since Chroma is just a pruned version of Flux, deriving the correct `pattern_dict` was relatively simple this time. Do let me know if you run into any problems.
This is the `pattern_dict` I used for compression:
```python
pattern_dict = {
    # Each key is a regex (raw string) over module paths; the tuple lists
    # the linear submodules to compress within every matching block.
    r"transformer_blocks\.\d+": (
        "attn.to_q",
        "attn.to_k",
        "attn.to_v",
        "attn.add_k_proj",
        "attn.add_v_proj",
        "attn.add_q_proj",
        "attn.to_out.0",
        "attn.to_add_out",
        "ff.net.0.proj",
        "ff.net.2",
        "ff_context.net.0.proj",
        "ff_context.net.2",
    ),
    r"single_transformer_blocks\.\d+": (
        "proj_mlp",
        "proj_out",
        "attn.to_q",
        "attn.to_k",
        "attn.to_v",
    ),
}
```
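To sanity-check a `pattern_dict` for a new architecture before compressing, it helps to confirm that each regex matches the intended blocks and that every listed submodule actually exists. A minimal sketch using only `re` and PyTorch's `named_modules()` (run it against the `transformer` loaded as in the usage example below; the helper name is my own):

```python
import re

def check_pattern_dict(transformer, pattern_dict):
    """Print how many blocks each pattern matches and flag missing submodules."""
    names = {name for name, _ in transformer.named_modules()}
    for block_pattern, submodules in pattern_dict.items():
        regex = re.compile(block_pattern)
        blocks = sorted({m.group(0) for m in map(regex.match, names) if m})
        print(f"{block_pattern!r}: {len(blocks)} matching blocks")
        for sub in submodules:
            missing = [b for b in blocks if f"{b}.{sub}" not in names]
            if missing:
                print(f"  {sub}: missing in {len(missing)} blocks, e.g. {missing[0]}")
```

If a pattern matches zero blocks or a submodule is reported missing, the `pattern_dict` probably does not line up with the model's module tree.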
### How to Use
#### `diffusers`
1. Install the DFloat11 pip package *(installs the CUDA kernel automatically; requires a CUDA-compatible GPU and PyTorch installed)*:
```bash
pip install dfloat11[cuda12]
# or if you have CUDA version 11:
# pip install dfloat11[cuda11]
```
2. To use the DFloat11 model, run the following example code in Python:
```python
import torch
from diffusers import ChromaPipeline, ChromaTransformer2DModel
from dfloat11 import DFloat11Model
from transformers.modeling_utils import no_init_weights
with no_init_weights():
    transformer = ChromaTransformer2DModel.from_config(
        ChromaTransformer2DModel.load_config(
            "lodestones/Chroma1-Base",
            subfolder="transformer"
        ),
        torch_dtype=torch.bfloat16
    ).to(torch.bfloat16)

pipe = ChromaPipeline.from_pretrained(
    "lodestones/Chroma1-Base",
    transformer=transformer,
    torch_dtype=torch.bfloat16
)
DFloat11Model.from_pretrained("mingyi456/Chroma1-Base-DF11", device='cpu', bfloat16_model=pipe.transformer)
pipe.enable_model_cpu_offload()
prompt = "A high-fashion close-up portrait of a blonde woman in clear sunglasses. The image uses a bold teal and red color split for dramatic lighting. The background is a simple teal-green. The photo is sharp and well-composed, and is designed for viewing with anaglyph 3D glasses for optimal effect. It looks professionally done."
negative_prompt = "low quality, ugly, unfinished, out of focus, deformed, disfigure, blurry, smudged, restricted palette, flat colors"
image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save("Chroma1-Base.png")
```
#### ComfyUI
~~Follow the instructions (have not tested myself) here: https://github.com/LeanModels/ComfyUI-DFloat11~~
Currently, this model will not work with ComfyUI out of the box, because the custom node only supports Flux models. It should be possible to modify the code to load this model as well, but that requires another `pattern_dict` in a completely different form from the one used to compress the model. If you are interested in running this model in ComfyUI, please contact the developer to request support.
|
yasserrmd/punjabi-gemma-300m-emb
|
yasserrmd
| 2025-09-21T14:57:43Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"gemma3_text",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:5004",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:google/embeddinggemma-300m",
"base_model:finetune:google/embeddinggemma-300m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-21T14:56:30Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:5004
- loss:MultipleNegativesRankingLoss
base_model: google/embeddinggemma-300m
widget:
- source_sentence: เจเฉเจฒเฉเจเฉเฉฑเจ เจญเจฐเจพเจตเจพเจ เจจเฉเฉฐ เจฌเฉฑเจฌเจฐเจพเจ เจจเฉ เจเจฆเฉเจ เจธเฉเจงเจฟเจ เจธเฉ?
  sentences:
  - 22 เจฌเจฟเจฒเฉเจ
เจจ เจ
เจฎเจฐเฉเจเฉ เจกเจพเจฒเจฐ
  - '1923'
  - '1992'
- source_sentence: เจฌเฉฐเจเจฒเจพเจฆเฉเจธเจผ เจตเจฟเฉฑเจ เจฒเฉเฉฐเจเฉ เจฌเจฃเจพเจเจฃ เจฆเจพ เจเฉฐเจฎ เจเจฟเฉฑเจฅเฉ เจเฉเจเจฆเจฐเจฟเจค เจนเฉ?
  sentences:
  - '1891'
  - '21'
  - เจธเจฟเจฐเจพเจเจเฉฐเจ, เจเฉเจธเจผเจเฉเจ, เจชเจฌเจจเจพ เจ
เจคเฉ เจเฉเฉฑเจฒเจจเจพ
- source_sentence: เจธเฉ เจเฉ เจเฉฐเจฆเจฐเฉฑเจชเจจ เจจเฉ เจเจชเจฃเฉ เจเฉเจฐเฉเจเฉเจเจธเจผเจจ เจเจฟเฉฑเจฅเฉเจ เจชเฉเจฐเฉ เจเฉเจคเฉ?
  sentences:
  - เจเฉเจฒเจก เจเฉเจคเฉ เจเฉเจฎ
  - เจเจฟเฉฑเจคเฉเจฐ เจธเจฐเจเจพเจฐเฉ เจเจพเจฒเจ
  - เจชเจเจฟเจเจฒเฉ
- source_sentence: เจซเฉเจฐเจฌเจธ เจฎเฉเจเจเจผเฉเจจ เจฆเฉ 2022 เจฆเฉ เจ
เฉฐเจเฉเจฟเจเจ เจ
เจจเฉเจธเจพเจฐ, เจตเจฟเจธเจผเจต เจฆเฉเจเจ เจธเจญ เจคเฉเจ เจธเจผเจเจคเฉเจธเจผเจพเจฒเฉ เจเจฐเจคเจพเจ เจฆเฉ เจธเฉเจเฉ เจตเจฟเฉฑเจ เจฎเฉเจฒเฉเจจเฉ เจฆเจพ เจฆเจฐเจเจพ เจเฉ เจธเฉ?
  sentences:
  - เจเฉ เจนเจฅเจฟเจเจฐเจฌเฉฐเจฆ เจธเฉเจจเจพเจตเจพเจ เจฆเฉ เจเฉฑเจ เจเจฟเจธเจฎ เจฆเฉ เจฐเฉเจเจ เจตเจพเจฒเฉ เจฆเฉ เจ
เจนเฉเจฆเฉเจงเจพเจฐเฉ เจเฉฑเจเฉ เจเจฟเจนเฉ เจธเจฎเฉเจ เจตเจพเจธเจคเฉ เจซเจผเฉเจเฉ เจจเฉเจเจฐเฉ เจเจฐเจฆเฉ เจนเจจ เจคเจพเจ เจเจนเจจเจพเจ เจจเฉเฉฐ เจฌเจฐเจพเจฌเจฐ เจฆเฉ เจชเฉเจจเจธเจผเจจ เจฎเจฟเจฒเจฃเฉ เจเจพเจนเฉเจฆเฉ เจนเฉ, เจญเจพเจตเฉเจ เจเจน เจ
เฉฑเจเฉ-เจชเจฟเฉฑเจเฉ เจฐเจฟเจเจพเจเจฐเจฎเฉเจเจ โเจคเฉ เจเจเจฃ เจ
เจคเฉ เจเจนเจจเจพเจ เจจเฉเฉฐ เจชเฉเจจเจธเจผเจจ เจฆเฉเจเจ เจฆเจฐเจพเจ เจตเจฟเฉฑเจ เจนเฉเจฃ เจตเจพเจฒเฉ เจญเจตเจฟเฉฑเจเฉ เจฒเจพเจญ เจฆเจพ เจซเจผเจพเจเจฆเจพ เจตเฉ เจฎเจฟเจฒเจฃเจพ เจเจพเจนเฉเจฆเจพ เจนเฉ
  - เจเฉฐเจฎเฉ เจเจธเจผเจฎเฉเจฐ เจฏเฉเจจเฉเจตเจฐเจธเจฟเจเฉ
  - เจธเฉฑเจคเจตเฉเจ
- source_sentence: เจเจฌเจฟเจฆ เจ
เจฒเฉ เจฆเจพ เจธเจญ เจคเฉเจ เจตเจงเฉเจ เจเฉเจเจฆเจฌเจพเจเจผเฉ เจชเฉเจฐเจฆเจฐเจธเจผเจจ เจเจฟเจนเฉเจพ เจธเฉ?
  sentences:
  - เจเฉฐเจ เจ
เจคเฉ เจญเฉเจธเจผเจฎเจพ
  - '1950'
  - 23 เจฆเฉเฉเจพเจ เจฆเฉ เจเฉ 6 เจฆเฉเฉเจพเจ
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on google/embeddinggemma-300m
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) <!-- at revision c5cfa06e5e282a820e85d57f7fb053207494f41d -->
- **Maximum Sequence Length:** 2048 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 2048, 'do_lower_case': False, 'architecture': 'Gemma3TextModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 3072, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Dense({'in_features': 3072, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(4): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("yasserrmd/punjabi-gemma-300m-emb")
# Run inference
queries = [
    "\u0a06\u0a2c\u0a3f\u0a26 \u0a05\u0a32\u0a40 \u0a26\u0a3e \u0a38\u0a2d \u0a24\u0a4b\u0a02 \u0a35\u0a27\u0a40\u0a06 \u0a17\u0a47\u0a02\u0a26\u0a2c\u0a3e\u0a1c\u0a3c\u0a40 \u0a2a\u0a4d\u0a30\u0a26\u0a30\u0a38\u0a3c\u0a28 \u0a15\u0a3f\u0a39\u0a5c\u0a3e \u0a38\u0a40?",
]
documents = [
    '23 เจฆเฉเฉเจพเจ เจฆเฉ เจเฉ 6 เจฆเฉเฉเจพเจ',
    '1950',
    'เจเฉฐเจ เจ
เจคเฉ เจญเฉเจธเจผเจฎเจพ',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.3268, 0.1534, 0.0167]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 5,004 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 28.8 tokens</li><li>max: 88 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 16.26 tokens</li><li>max: 144 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------------------------------------------------------------|:---------------------------------|
| <code>เจตเจฟเจฐเจพเจ เจเฉเจนเจฒเฉ เจจเฉ เจเจฟเจนเฉเฉ เจธเจเฉเจฒ เจตเจฟเฉฑเจ เจชเฉเฉเจนเจพเจ เจเฉเจคเฉ?</code> | <code>เจธเฉเจเจ เจฅเจพเจฎเจธ เจธเจเฉเจฒ</code> |
| <code>1992 'เจ เจ
เฉฐเจคเจฐเจฐเจพเจธเจผเจเจฐเฉ เจ
เจเจพเจเจฌ เจเจฐ เจฆเจฟเจนเจพเฉเฉ เจฆเจพ เจตเจฟเจธเจผเจพ เจเฉ เจธเฉ?</code> | <code>เจ
เจเจพเจเจฌเจเจฐ เจ
เจคเฉ เจตเจพเจคเจพเจตเจฐเจฃ</code> |
| <code>เจเฉเจฐเจชเฉเจฐเฉเจค เจงเฉเจฐเฉ เจเจฟเฉฑเจฅเฉเจ เจฐเฉเจเจผเฉ เจฐเฉเจเฉ เจเจฎเจพ เจฐเจฟเจนเจพ เจนเฉ?</code> | <code>เจฆเจฟเฉฑเจฒเฉ</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
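For reference, a minimal sketch of how a model like this is fine-tuned with this loss in `sentence-transformers` (the one-pair dataset, output directory, and column handling are illustrative assumptions; the real run used the hyperparameters listed below):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import SentenceTransformerTrainingArguments

model = SentenceTransformer("google/embeddinggemma-300m")

# (question, answer) pairs; in-batch negatives come from the other rows.
train_dataset = Dataset.from_dict({
    "sentence_0": ["เจตเจฟเจฐเจพเจ เจเฉเจนเจฒเฉ เจจเฉ เจเจฟเจนเฉเฉ เจธเจเฉเจฒ เจตเจฟเฉฑเจ เจชเฉเฉเจนเจพเจ เจเฉเจคเฉ?"],
    "sentence_1": ["เจธเฉเจเจ เจฅเจพเจฎเจธ เจธเจเฉเจฒ"],
})

loss = MultipleNegativesRankingLoss(model, scale=20.0)
args = SentenceTransformerTrainingArguments(
    output_dir="punjabi-gemma-300m-emb",
    per_device_train_batch_size=6,
    num_train_epochs=7,
)
trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()
```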
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 6
- `num_train_epochs`: 7
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 6
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 7
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.5995 | 500 | 1.346 |
| 1.1990 | 1000 | 1.3542 |
| 1.7986 | 1500 | 1.2281 |
| 2.3981 | 2000 | 1.1036 |
| 2.9976 | 2500 | 0.9937 |
| 3.5971 | 3000 | 0.7913 |
| 4.1966 | 3500 | 0.7128 |
| 4.7962 | 4000 | 0.557 |
| 5.3957 | 4500 | 0.4327 |
| 5.9952 | 5000 | 0.3557 |
| 6.5947 | 5500 | 0.2424 |
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.56.2
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.22.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
ezequiel/new_or_used
|
ezequiel
| 2025-09-21T14:52:47Z | 0 | 0 | null |
[
"safetensors",
"bert",
"license:apache-2.0",
"region:us"
] | null | 2025-09-21T14:47:41Z |
---
license: apache-2.0
---
| Epoch | Training Loss | Validation Loss | Accuracy | Precision | Recall   | F1       |
|:-----:|:-------------:|:---------------:|:--------:|:---------:|:--------:|:--------:|
| 1     | 0.410200      | 0.385042        | 0.832611 | 0.841840  | 0.847673 | 0.844747 |
| 2     | 0.323200      | 0.391985        | 0.835889 | 0.863106  | 0.825440 | 0.843852 |
| 3     | 0.234500      | 0.456331        | 0.835389 | 0.858242  | 0.830817 | 0.844307 |
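A minimal sketch for trying the classifier (the label names returned depend on the checkpoint's `id2label` mapping, which is not documented here):

```python
from transformers import pipeline

# New-vs-used listing classifier, per the repo name.
clf = pipeline("text-classification", model="ezequiel/new_or_used")
print(clf("iPhone 12, sealed in original box, never opened"))
```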
|
Tomiwajin/setfit_email_classifier
|
Tomiwajin
| 2025-09-21T14:51:35Z | 31 | 1 |
setfit
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"region:us"
] |
text-classification
| 2025-09-08T23:27:21Z |
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'Monorepos, Verified Templates, Replica Metrics It''s Friday and you know
what that means! Here''s a summary of the stuff we shipped this week Time! It''s
Friday and you know what that means! Here''s a summary of the stuff we shipped
this week: First-Class Support for Monorepos Verified Templates Replica Metrics
to Priority Boarding Fixes and Improv'
- text: 'Thanks for your time Thank you for applying to the Backend Developer position
at YinzCam, Inc..
Unfortunately, YinzCam, Inc. has moved to the next step in their hiring process,
and your application was not selected at this time.'
- text: "Humanoid Alert! Your Data Packet Caught Our Eye at 1X Technologies! Hi Tomiwa,\n\
\nThank you for sending your application data stream our way at 1X Technologies!\n\
\nYour resume just ran through our systems, and let's just say, your skill matrix\
\ looks incredibly promising. We were genuinely intrigued by your experience and\
\ see some serious potential \n\nfor you to help us b"
- text: 'Indeed Application: Software Developer We''ll help you get started pplication
submitted Software Developer TherapyNotes.com - United States 30 reviews The following
items were sent to TherapyNotes.com. Good luck! • Application • Resume
Next steps • The employer or job advertiser may reach out to you about your
application.'
- text: 'Jobs! I have a job that I think lines up well with your resume. It''s new,
so they don''t have many candidates yet. Check out the description and hit "View
Details" if you like what you see.
Entry Level Software Engineer - Revature - Jersey City, NJ
Revature is looking to hire Entry Level Software Engineer'
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: sentence-transformers/all-MiniLM-L6-v2
---
# SetFit with sentence-transformers/all-MiniLM-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 7 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| next-phase | <ul><li>"Next step: Assessment ๐ for the Product Manager role at StartupXYZ Hi Thomas, Thank you again for your interest in the Product Manager position at StartupXYZ. As part of our hiring process, the next step is to complete an assessment that will help us better understand your skills and suitability for the role. Here's what to expect: โข Assessment details: Product str"</li><li>"Next Steps - Front-End Engineer Hey Oluwatomiwa,\nWe're excited to invite you to the next phase of the Front-End Engineer role.\nBefore moving forward, please ensure your location is on the list of accepted locations.\nImportant Notice\n\nIf you currently have or previously had credentials with Outlier or a related platform, please do"</li><li>'Coding Assessment - Backend Developer Position Dear Kevin, We appreciate your interest in the Backend Developer role at CloudSync Technologies. As the next step in our selection process, we invite you to complete our technical coding assessment. This assessment has been carefully designed to evaluate your programming skills and problem-solving a'</li></ul> |
| interview | <ul><li>"Interview for DevOps Engineer at ServerMax Hi Daniel, Thanks again for taking the time to chat with me on the phone! I'm very happy to move you to the next stage of our hiring process โ a 45-minute video interview. This interview will include me and my colleague Tom Rodriguez, our Infrastructure Lead. If you'd like to learn a little about hi"</li><li>"Video interview for Social Media Manager at BuzzMarketing Hi Taylor, Thank you for your application for the Social Media Manager position with BuzzMarketing. We're excited to learn more about you and your qualifications! We would like to invite you to a video interview with Christina Park, our Digital Marketing Director. This will be a chance for us to dis"</li><li>'Final round interview for Marketing Director at BrandBoost Hi Michelle, Congratulations on making it to the final round of interviews for the Marketing Director position! We would like to invite you to a final in-person interview with our executive team including CEO Jonathan Miller and CMO Patricia Davis. This will be a chance for us to discuss your strate'</li></ul> |
| not-job-status-update | <ul><li>"Jobs! Hi Seth,\n\nI found a job you may be interested in: 100% REMOTE - Senior Fullstack Engineer\n\nIf you'd like to apply, let me know in a quick response with your updated resume. Full job details below.\n\nf you are a Senior Software Engineer with Python and React experience, please read on!\n\nWe headquarter"</li><li>'Oluwatomiwa, you have new application updates this week Check out the status of your applications on LinkedIn Check out the status of your applications on LinkedIn Here are the latest updates from the past week Junior Software Engineer Fortune 500 · Plano, TX (On-site) No longer accepting applications Software Quality Assurance Engineer ChronicCar'</li><li>'Junior Software Engineer role at AmeriNat: you would be a great fit! Hey! Check out the latest industry content about career advice, salary negotiations, and interview tips, among other topics. Explore now! Jobs for you Jobs for you We're on a mission to connect you with a dream job. To help us refine this list, search for more jobs AmeriNat 4.1 · Junior Softwar'</li></ul> |
| not-job-related | <ul><li>'Welcome to Idealist! Four actions you can take right now to get started Hi Oluwatomiwa, My name is Ami, and I'm the founder and executive director of Idealist. We started Idealist in the summer of 1995—on one old computer and with no full-time staff—to help connect people around the world with opportunities to do'</li><li>'New arrivals are here SHEIN Shop at SHEIN for the latest trends! Shop at SHEIN for the latest trends! Unsubscribe \| View in Browser Pick your unique look SHOP NEW ARRIVALS > FIND US ON APP'</li><li>'Here's the Zoom Link & exclusive offers! Here's the Zoom Link & exclusive offers! Your seat at the Agentic AI Conference is reserved - plus unlock exclusive training offers up to 40% off! Hi Oluwatomiwa, We're thrilled to have you join us for the second edition of the Future of Data and AI: Agentic AI Conference! Link to Join'</li></ul> |
| applied | <ul><li>'Thank you for applying! Dear Name,\n\nThank you for your interest in a career at Delta Dental of Iowa. We have received your application for Software Development Intern.\nIn the event that we wish to arrange a personal interview, we will contact you. Again, thank you for your interest in employment at Delta Dental of Iowa.'</li><li>'Thank You For Applying! Dear Name,\nThank you for applying! Your application will be taken into careful consideration and you will hear back from us after we review your application.\n\n\nBest Regards,\n\nBracco Human Resources Team'</li><li>'Thank you for applying to Passes Name,\n\nThanks for applying to Passes. Your application has been received and we will review it right away.\n\nIf your application seems like a good fit for the position we will contact you soon.\n\nRegards,\nPasses\n\n** Please note: Do not reply to this email. This email is sent from an unattended mailbox'</li></ul> |
| offer | <ul><li>"Congratulations - You're Our New Management Consultant! Dear Diana Brown, Congratulations! StrategyConsult Partners is excited to call you our new Management Consultant. We'll focus on wrapping up a few more formalities, including the successful completion of your background check and client reference verification, and aim to get you settled into your ne"</li><li>'Full-Time Employment Offer Dear Brandon Taylor, ArchitectureMax is offering to extend your current employment status from contractor to full-time employee, as of June 1st, 2024. If you choose to accept our offer, please review the terms and conditions of your new employment contract below: Position: You will be working as a S'</li><li>'Employment Offer - Product Manager Position Michael Chen 456 Innovation Drive, San Francisco, CA 94105 Re: Employment Offer Dear Michael: On behalf of ProductMax, Inc. (the "Company"), it is my pleasure to offer you employment with the Company in the role set forth below. The purpose of this letter is to summarize the initial terms of your em'</li></ul> |
| rejected | <ul><li>'Thanks for your time Thank you for your interest in the Software Engineer position at Lantana Consulting Group in Vermont, United States. Unfortunately, we will not be moving forward with your application, but we appreciate your time and interest in Lantana Consulting Group.\n\nRegards,\n\nLantana Consulting Group'</li><li>"Thanks for your time Hello Name,\n\nThank you very much for your interest in our Software Engineer - React/Redux opening. We've had a chance to discuss your background and qualifications with the hiring manager and unfortunately, we have decided to pursue other candidates who appear to match our requirements more closely"</li><li>"Thanks for your interest in Supernova Technology, Name Hi Name,\nThank you for your interest in Supernova Technology. After reviewing your background and experience, weโve decided not to move forward with your application at this time.\n\nWe truly appreciate the time and effort you put into the process, and we hope you don't mind if we reach out in the fut"</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the ๐ค Hub
model = SetFitModel.from_pretrained("Tomiwajin/setfit_email_classifier")
# Run inference
preds = model("""Thanks for your time Thank you for applying to the Backend Developer position at YinzCam, Inc..
Unfortunately, YinzCam, Inc. has moved to the next step in their hiring process, and your application was not selected at this time.""")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 14 | 55.2121 | 288 |
| Label | Training Sample Count |
|:----------------------|:----------------------|
| applied | 40 |
| interview | 45 |
| next-phase | 35 |
| not-job-related | 55 |
| not-job-status-update | 41 |
| offer | 36 |
| rejected | 45 |
### Training Hyperparameters
- batch_size: (16, 2)
- num_epochs: (1, 16)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
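For reference, a minimal sketch of how a SetFit classifier is trained with the hyperparameters above (the two-row dataset is illustrative; the real training set is summarized in the metrics tables):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

train_ds = Dataset.from_dict({
    "text": ["Thank you for applying!", "Congratulations - your offer is attached"],
    "label": ["applied", "offer"],
})

model = SetFitModel.from_pretrained(
    "sentence-transformers/all-MiniLM-L6-v2",
    labels=["applied", "offer"],
)
args = TrainingArguments(
    batch_size=(16, 2),          # (embedding phase, classifier phase)
    num_epochs=(1, 16),
    body_learning_rate=(2e-5, 1e-5),
    head_learning_rate=0.01,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```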
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0002 | 1 | 0.3397 | - |
| 0.0106 | 50 | 0.2699 | - |
| 0.0212 | 100 | 0.2293 | - |
| 0.0319 | 150 | 0.1907 | - |
| 0.0425 | 200 | 0.1685 | - |
| 0.0531 | 250 | 0.1174 | - |
| 0.0637 | 300 | 0.078 | - |
| 0.0743 | 350 | 0.0524 | - |
| 0.0849 | 400 | 0.0319 | - |
| 0.0956 | 450 | 0.0113 | - |
| 0.1062 | 500 | 0.0073 | - |
| 0.1168 | 550 | 0.0051 | - |
| 0.1274 | 600 | 0.0038 | - |
| 0.1380 | 650 | 0.0029 | - |
| 0.1487 | 700 | 0.0023 | - |
| 0.1593 | 750 | 0.0021 | - |
| 0.1699 | 800 | 0.0017 | - |
| 0.1805 | 850 | 0.0017 | - |
| 0.1911 | 900 | 0.0015 | - |
| 0.2017 | 950 | 0.0012 | - |
| 0.2124 | 1000 | 0.0011 | - |
| 0.2230 | 1050 | 0.0011 | - |
| 0.2336 | 1100 | 0.001 | - |
| 0.2442 | 1150 | 0.001 | - |
| 0.2548 | 1200 | 0.0009 | - |
| 0.2654 | 1250 | 0.0008 | - |
| 0.2761 | 1300 | 0.0008 | - |
| 0.2867 | 1350 | 0.0007 | - |
| 0.2973 | 1400 | 0.0007 | - |
| 0.3079 | 1450 | 0.0006 | - |
| 0.3185 | 1500 | 0.0006 | - |
| 0.3292 | 1550 | 0.0006 | - |
| 0.3398 | 1600 | 0.0006 | - |
| 0.3504 | 1650 | 0.0006 | - |
| 0.3610 | 1700 | 0.0005 | - |
| 0.3716 | 1750 | 0.0005 | - |
| 0.3822 | 1800 | 0.0005 | - |
| 0.3929 | 1850 | 0.0005 | - |
| 0.4035 | 1900 | 0.0004 | - |
| 0.4141 | 1950 | 0.0004 | - |
| 0.4247 | 2000 | 0.0004 | - |
| 0.4353 | 2050 | 0.0004 | - |
| 0.4460 | 2100 | 0.0004 | - |
| 0.4566 | 2150 | 0.0004 | - |
| 0.4672 | 2200 | 0.0004 | - |
| 0.4778 | 2250 | 0.0004 | - |
| 0.4884 | 2300 | 0.0003 | - |
| 0.4990 | 2350 | 0.0003 | - |
| 0.5097 | 2400 | 0.0003 | - |
| 0.5203 | 2450 | 0.0003 | - |
| 0.5309 | 2500 | 0.0003 | - |
| 0.5415 | 2550 | 0.0003 | - |
| 0.5521 | 2600 | 0.0003 | - |
| 0.5628 | 2650 | 0.0003 | - |
| 0.5734 | 2700 | 0.0003 | - |
| 0.5840 | 2750 | 0.0002 | - |
| 0.5946 | 2800 | 0.0002 | - |
| 0.6052 | 2850 | 0.0003 | - |
| 0.6158 | 2900 | 0.0002 | - |
| 0.6265 | 2950 | 0.0002 | - |
| 0.6371 | 3000 | 0.0002 | - |
| 0.6477 | 3050 | 0.0002 | - |
| 0.6583 | 3100 | 0.0002 | - |
| 0.6689 | 3150 | 0.0002 | - |
| 0.6795 | 3200 | 0.0002 | - |
| 0.6902 | 3250 | 0.0002 | - |
| 0.7008 | 3300 | 0.0002 | - |
| 0.7114 | 3350 | 0.0002 | - |
| 0.7220 | 3400 | 0.0002 | - |
| 0.7326 | 3450 | 0.0002 | - |
| 0.7433 | 3500 | 0.0002 | - |
| 0.7539 | 3550 | 0.0002 | - |
| 0.7645 | 3600 | 0.0002 | - |
| 0.7751 | 3650 | 0.0002 | - |
| 0.7857 | 3700 | 0.0002 | - |
| 0.7963 | 3750 | 0.0002 | - |
| 0.8070 | 3800 | 0.0002 | - |
| 0.8176 | 3850 | 0.0002 | - |
| 0.8282 | 3900 | 0.0002 | - |
| 0.8388 | 3950 | 0.0002 | - |
| 0.8494 | 4000 | 0.0002 | - |
| 0.8601 | 4050 | 0.0002 | - |
| 0.8707 | 4100 | 0.0002 | - |
| 0.8813 | 4150 | 0.0002 | - |
| 0.8919 | 4200 | 0.0002 | - |
| 0.9025 | 4250 | 0.0002 | - |
| 0.9131 | 4300 | 0.0002 | - |
| 0.9238 | 4350 | 0.0002 | - |
| 0.9344 | 4400 | 0.0002 | - |
| 0.9450 | 4450 | 0.0001 | - |
| 0.9556 | 4500 | 0.0002 | - |
| 0.9662 | 4550 | 0.0001 | - |
| 0.9769 | 4600 | 0.0002 | - |
| 0.9875 | 4650 | 0.0001 | - |
| 0.9981 | 4700 | 0.0002 | - |
### Framework Versions
- Python: 3.11.13
- SetFit: 1.1.3
- Sentence Transformers: 5.1.0
- Transformers: 4.56.1
- PyTorch: 2.2.2
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
lukedai/Qwen3-1.7B-luke-v1
|
lukedai
| 2025-09-21T14:51:08Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"luke-sft",
"trl",
"sft",
"conversational",
"dataset:lukedai/hehe",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-11T16:13:32Z |
---
datasets: lukedai/hehe
library_name: transformers
model_name: Qwen3-1.7B-luke-v1
tags:
- generated_from_trainer
- luke-sft
- trl
- sft
licence: license
---
# Model Card for Qwen3-1.7B-luke-v1
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) on the [lukedai/hehe](https://huggingface.co/datasets/lukedai/hehe) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lukedai/Qwen3-1.7B-luke-v1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.52.0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
proj-airi/games-balatro-2024-yolo-entities-detection
|
proj-airi
| 2025-09-21T14:48:58Z | 0 | 1 | null |
[
"onnx",
"YOLO",
"ONNX",
"onnxruntime",
"en",
"multilingual",
"dataset:proj-airi/games-balatro-2024-entities-detection",
"base_model:Ultralytics/YOLO11",
"base_model:quantized:Ultralytics/YOLO11",
"license:mit",
"region:us"
] | null | 2025-09-21T14:03:38Z |
---
# Full model card template at https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md
language:
- en
- multilingual
license: mit
tags:
- YOLO
- ONNX
- onnxruntime
datasets:
- proj-airi/games-balatro-2024-entities-detection
base_model: Ultralytics/YOLO11
---
<p align="center">
<img src="./docs/cover.png">
</p>
## Balatro (2024, game) YOLO entities detection
> This project is part of (and associated with) the [Project AIRI](https://github.com/moeru-ai/airi); we aim to build an LLM-driven VTuber like [Neuro-sama](https://www.youtube.com/@Neurosama) (subscribe if you haven't!). If you are interested, please give the [live demo](https://airi.moeru.ai) a try.
>
> Who are we?
>
> We are a group of currently non-funded, talented people: computer scientists, experts in multi-modal fields, designers, product managers, and popular open-source contributors who love the goal we are heading toward.
| Basic | Multiple card types | Description | Crowded cards |
| ------------------------- | ------------------------- | ------------------------- | ------------------------- |
|  |  |  |  |
## Training
We trained this model on our own dataset of fewer than 1k images, labelled with Label Studio, using YOLO11n as the base model. The dataset is available on Hugging Face as well: [proj-airi/games-balatro-2024-entities-detection](https://huggingface.co/datasets/proj-airi/games-balatro-2024-entities-detection).

The training was performed on a single NVIDIA 4080 Super GPU with 16 GB of VRAM; the loss optimized well and converged within the configured 2,000 epochs.


## Citation
If you find our works useful for your research, please consider citing:
```bibtex
@misc{proj_airi_game_ai_models_balatro_2024_yolo_entities_detection_2025,
title = {Balatro (2024, game) YOLO entities detection},
author = {Project AIRI Team, Neko Ayaka, Makito, Rainbow Bird},
howpublished = {\url{https://huggingface.co/proj-airi/games-balatro-2024-yolo-entities-detection}},
year = {2025}
}
```
## License
This model is licensed under the MIT.
|
kevinshin/qwen3-1.7b-sft-epoch-2-wc-cw-3k-pos
|
kevinshin
| 2025-09-21T14:45:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"alignment-handbook",
"conversational",
"dataset:kevinshin/wildchat-creative-writing-3k-critique-v2",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T09:10:42Z |
---
base_model: Qwen/Qwen3-1.7B
datasets: kevinshin/wildchat-creative-writing-3k-critique-v2
library_name: transformers
model_name: qwen3-1.7b-sft-epoch-2-wc-cw-3k-pos
tags:
- generated_from_trainer
- trl
- sft
- alignment-handbook
licence: license
---
# Model Card for qwen3-1.7b-sft-epoch-2-wc-cw-3k-pos
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) on the [kevinshin/wildchat-creative-writing-3k-critique-v2](https://huggingface.co/datasets/kevinshin/wildchat-creative-writing-3k-critique-v2) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kevinshin/qwen3-1.7b-sft-epoch-2-wc-cw-3k-pos", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myungjune-sogang-university/general_remo_train/runs/8f80ka8z)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.55.0.dev0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
KobeBeef67/finetuned-llama
|
KobeBeef67
| 2025-09-21T14:41:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-20T23:07:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nikilr/Llama3.1-8B-ds7000
|
nikilr
| 2025-09-21T14:39:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T14:38:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
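Until the authors fill this in, a minimal sketch (assuming a causal Llama checkpoint, per the `llama`/`text-generation` tags — verify against the repository config):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "nikilr/Llama3.1-8B-ds7000"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, world. Today we will", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```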
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Labira/LabiraPJOK_123_100_Full
|
Labira
| 2025-09-21T14:37:09Z | 0 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-09-21T06:43:30Z |
---
library_name: transformers
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Labira/LabiraPJOK_123_100_Full
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Labira/LabiraPJOK_123_100_Full
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0108
- Validation Loss: 0.0014
- Epoch: 99
## Model description
More information needed
## Intended uses & limitations
More information needed
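As a starting point while the card is incomplete, the checkpoint can be loaded with the `transformers` question-answering pipeline — a minimal sketch (the Indonesian question/context pair is made up for illustration; `framework="tf"` matches the card's TensorFlow weights):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Labira/LabiraPJOK_123_100_Full",
    framework="tf",
)
result = qa(
    question="Apa tujuan pemanasan sebelum berolahraga?",
    context="Pemanasan dilakukan sebelum berolahraga untuk mengurangi risiko cedera otot.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```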
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.8014 | 3.8239 | 0 |
| 3.5330 | 3.0989 | 1 |
| 3.0273 | 2.6526 | 2 |
| 2.6530 | 2.0593 | 3 |
| 2.2572 | 1.6401 | 4 |
| 1.7060 | 1.0829 | 5 |
| 1.2904 | 0.6494 | 6 |
| 0.9646 | 0.4921 | 7 |
| 0.6371 | 0.2708 | 8 |
| 0.4612 | 0.2947 | 9 |
| 0.4154 | 0.2030 | 10 |
| 0.4027 | 0.1670 | 11 |
| 0.2759 | 0.1051 | 12 |
| 0.2515 | 0.1313 | 13 |
| 0.1759 | 0.0651 | 14 |
| 0.1293 | 0.0732 | 15 |
| 0.1595 | 0.0472 | 16 |
| 0.0989 | 0.0647 | 17 |
| 0.0797 | 0.0566 | 18 |
| 0.1292 | 0.0351 | 19 |
| 0.1098 | 0.0743 | 20 |
| 0.1490 | 0.0591 | 21 |
| 0.0934 | 0.0558 | 22 |
| 0.0720 | 0.0330 | 23 |
| 0.0502 | 0.0265 | 24 |
| 0.0598 | 0.0235 | 25 |
| 0.0589 | 0.0272 | 26 |
| 0.0409 | 0.0243 | 27 |
| 0.0445 | 0.0199 | 28 |
| 0.0425 | 0.0395 | 29 |
| 0.0420 | 0.0252 | 30 |
| 0.0332 | 0.0194 | 31 |
| 0.0286 | 0.0178 | 32 |
| 0.0480 | 0.0184 | 33 |
| 0.0361 | 0.0279 | 34 |
| 0.0529 | 0.0195 | 35 |
| 0.0296 | 0.0194 | 36 |
| 0.0346 | 0.0143 | 37 |
| 0.0256 | 0.0177 | 38 |
| 0.0331 | 0.0098 | 39 |
| 0.0386 | 0.0086 | 40 |
| 0.0303 | 0.0053 | 41 |
| 0.0310 | 0.0154 | 42 |
| 0.0193 | 0.0024 | 43 |
| 0.1070 | 0.0090 | 44 |
| 0.0937 | 0.0123 | 45 |
| 0.0766 | 0.0112 | 46 |
| 0.0698 | 0.0057 | 47 |
| 0.0297 | 0.0043 | 48 |
| 0.0385 | 0.0117 | 49 |
| 0.0802 | 0.0181 | 50 |
| 0.1040 | 0.0072 | 51 |
| 0.0836 | 0.0163 | 52 |
| 0.0861 | 0.0060 | 53 |
| 0.0867 | 0.0079 | 54 |
| 0.1242 | 0.0041 | 55 |
| 0.1090 | 0.0070 | 56 |
| 0.0394 | 0.0042 | 57 |
| 0.0312 | 0.0041 | 58 |
| 0.0391 | 0.0020 | 59 |
| 0.0320 | 0.0023 | 60 |
| 0.0479 | 0.0135 | 61 |
| 0.0403 | 0.0017 | 62 |
| 0.0352 | 0.0019 | 63 |
| 0.0314 | 0.0030 | 64 |
| 0.0254 | 0.0020 | 65 |
| 0.0243 | 0.0013 | 66 |
| 0.0504 | 0.0022 | 67 |
| 0.0474 | 0.0023 | 68 |
| 0.0430 | 0.0036 | 69 |
| 0.0142 | 0.0021 | 70 |
| 0.0169 | 0.0014 | 71 |
| 0.0110 | 0.0013 | 72 |
| 0.0229 | 0.0011 | 73 |
| 0.0476 | 0.0008 | 74 |
| 0.0461 | 0.0012 | 75 |
| 0.0170 | 0.0013 | 76 |
| 0.0210 | 0.0020 | 77 |
| 0.0146 | 0.0021 | 78 |
| 0.0206 | 0.0019 | 79 |
| 0.0137 | 0.0021 | 80 |
| 0.0125 | 0.0015 | 81 |
| 0.0303 | 0.0026 | 82 |
| 0.0100 | 0.0019 | 83 |
| 0.0088 | 0.0015 | 84 |
| 0.0128 | 0.0016 | 85 |
| 0.0153 | 0.0018 | 86 |
| 0.0141 | 0.0018 | 87 |
| 0.0163 | 0.0017 | 88 |
| 0.0104 | 0.0014 | 89 |
| 0.0098 | 0.0014 | 90 |
| 0.0116 | 0.0013 | 91 |
| 0.0160 | 0.0015 | 92 |
| 0.0161 | 0.0016 | 93 |
| 0.0088 | 0.0015 | 94 |
| 0.0101 | 0.0015 | 95 |
| 0.0105 | 0.0015 | 96 |
| 0.0110 | 0.0015 | 97 |
| 0.0049 | 0.0014 | 98 |
| 0.0108 | 0.0014 | 99 |
### Framework versions
- Transformers 4.45.2
- TensorFlow 2.17.0
- Datasets 2.20.0
- Tokenizers 0.20.1
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758465349
|
schooncestiaa
| 2025-09-21T14:36:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-21T14:36:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Ultracore-Instruct-12B-i1-GGUF
|
mradermacher
| 2025-09-21T14:36:32Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:pot99rta/Ultracore-Instruct-12B",
"base_model:quantized:pot99rta/Ultracore-Instruct-12B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-21T13:14:59Z |
---
base_model: pot99rta/Ultracore-Instruct-12B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/pot99rta/Ultracore-Instruct-12B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Ultracore-Instruct-12B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Ultracore-Instruct-12B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
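As a minimal sketch, running one of the quants with llama.cpp might look like this (assuming a local llama.cpp build and the i1-Q4_K_M file from the table below; adjust the filename to the quant you download):

```bash
./llama-cli -m Ultracore-Instruct-12B.i1-Q4_K_M.gguf -p "Write a haiku about autumn." -n 128
```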
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Ultracore-Instruct-12B-i1-GGUF/resolve/main/Ultracore-Instruct-12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
SwetaJena/llama-3.1-8B-octopus_numbers_student_15_v1
|
SwetaJena
| 2025-09-21T14:32:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-21T14:32:02Z |
---
base_model: unsloth/Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SwetaJena
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
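A minimal loading sketch (whether this repo holds merged weights or a LoRA adapter is not stated in the card, so treat the call below as an assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "SwetaJena/llama-3.1-8B-octopus_numbers_student_15_v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```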
|
Eskender/products-ranker-preprod-bge-v8_corrected_data
|
Eskender
| 2025-09-21T14:28:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-21T14:27:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
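Pending the official snippet, a minimal sketch might look like this (assuming the checkpoint is a standard sequence-classification head; the input string and its format are hypothetical):

```python
from transformers import pipeline

# Hypothetical input; the exact query/product format the model expects is not documented.
ranker = pipeline("text-classification", model="Eskender/products-ranker-preprod-bge-v8_corrected_data")
print(ranker("wireless headphones [SEP] Bluetooth over-ear headset"))
```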
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
schooncestiaa/blockassist-bc-scruffy_webbed_dragonfly_1758464725
|
schooncestiaa
| 2025-09-21T14:26:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy webbed dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-21T14:26:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy webbed dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sir-timio/Loffi0
|
sir-timio
| 2025-09-21T14:26:29Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-Reranker-8B",
"lora",
"transformers",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-Reranker-8B",
"region:us"
] | null | 2025-09-21T13:46:42Z |
---
base_model: Qwen/Qwen3-Reranker-8B
library_name: peft
tags:
- base_model:adapter:Qwen/Qwen3-Reranker-8B
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
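Pending the official snippet, a minimal sketch of loading this LoRA adapter might look like this (the base model is taken from the card metadata; everything else is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Attach the adapter in this repo on top of its base model
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-Reranker-8B")
model = PeftModel.from_pretrained(base, "sir-timio/Loffi0")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Reranker-8B")
```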
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
sir-timio/Ugaga3
|
sir-timio
| 2025-09-21T14:25:44Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:Qwen/Qwen3-Reranker-8B",
"lora",
"transformers",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-Reranker-8B",
"region:us"
] | null | 2025-09-21T13:47:15Z |
---
base_model: Qwen/Qwen3-Reranker-8B
library_name: peft
tags:
- base_model:adapter:Qwen/Qwen3-Reranker-8B
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.16.0
|
TencentARC/ARC-Qwen-Video-7B
|
TencentARC
| 2025-09-21T14:23:56Z | 255 | 4 |
transformers
|
[
"transformers",
"safetensors",
"multimodal",
"video-understanding",
"video-audio understanding",
"video-qa",
"video-captioning",
"video-grounding",
"video-reasoning",
"short video understanding",
"video-text-to-text",
"arxiv:2507.20939",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
video-text-to-text
| 2025-09-18T03:42:52Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
pipeline_tag: video-text-to-text
library_name: transformers
tags:
- multimodal
- video-understanding
- video-audio understanding
- video-qa
- video-captioning
- video-grounding
- video-reasoning
- short video understanding
---
# ARC-Qwen-Video-7B
[](https://arxiv.org/abs/2507.20939)
[](https://arc.tencent.com/en/ai-demos/multimodal)
[](https://github.com/TencentARC/ARC-Hunyuan-Video-7B/tree/arc-qwen-video)
[](https://huggingface.co/TencentARC/ARC-Hunyuan-Video-7B)
[](https://huggingface.co/TencentARC/ARC-Qwen-Video-7B)
[](https://huggingface.co/TencentARC/ARC-Qwen-Video-7B-Narrator)
[](https://tencentarc.github.io/posts/arc-video-announcement/)
[](https://huggingface.co/datasets/TencentARC/ShortVid-Bench)
In this version, we switch the base model from the Hunyuan VLM used in [ARC-Hunyuan-Video-7B](https://huggingface.co/TencentARC/ARC-Hunyuan-Video-7B) to [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) and introduce [ARC-Qwen-Video-7B](https://huggingface.co/TencentARC/ARC-Qwen-Video-7B) for understanding real-world short videos. We used the same training data and training stages. For a detailed introduction, please refer to [ARC-Hunyuan-Video-7B](https://huggingface.co/TencentARC/ARC-Hunyuan-Video-7B). The main distinctions are listed below:
| Feature | `ARC-Hunyuan-Video-7B` | `ARC-Qwen-Video-7B` |
| ---------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Base VLM** | Hunyuan-VL-7B-Pretrain | Qwen2.5-VL-7B-Instruct |
| **Frame Resolution** <br> <small>*Each model uses a fixed frame resolution to maintain audio-video synchronization.*</small> | Fixed at `640 x 640` | Fixed at `392 x 292` |
| **Frame Sampling** | โข < 150s: 1 FPS <br> โข > 150s: Uniformly sample 150 frames. | โข < 300s: 1 FPS <br> โข > 300s: Uniformly sample 300 frames. |
| **Audio-Video Synchronization** | โข < 150s: Sum tokens from 1s audio + 1s video frame. <br> โข 150-300s: Sum tokens from corresponding audio segment + video frame. <br> โข > 300s: Split audio into 300 segments, use first 2s of each. | โข < 300s: Sum tokens from 1s audio + 1s video. <br> โข > 300s: Split audio into 300 segments, use middle 1s of each. |
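To make the sampling rule concrete, here is an illustrative sketch of the ARC-Qwen-Video-7B frame-sampling behavior described above (a sketch of the stated rule, not the repository's actual implementation):

```python
def sample_frame_timestamps(duration_s: float, max_frames: int = 300) -> list[float]:
    """1 fps for videos up to max_frames seconds; otherwise max_frames uniform samples."""
    if duration_s <= max_frames:
        return [float(t) for t in range(int(duration_s))]  # one frame per second
    step = duration_s / max_frames
    return [i * step for i in range(max_frames)]           # uniformly spaced timestamps
```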
We are also introducing a new model, [ARC-Qwen-Video-7B-Narrator](https://huggingface.co/TencentARC/ARC-Qwen-Video-7B-Narrator). It can output **timestamped video descriptions, speaker identities, and the specific ASR (Automatic Speech Recognition) content**. By processing its output with an external LLM, you can obtain more comprehensive structured information, as shown below (click the thumbnail to watch the video):
[<img src="https://img.youtube.com/vi/Bz1T4wCuWc8/maxresdefault.jpg" alt="video" width="300">](https://www.youtube.com/watch?v=Bz1T4wCuWc8)
<table border="1" style="width:100%; border-collapse: collapse;">
<tr>
<td style="padding: 15px;">
### Video Overview
This comedy short tells the story of a husband whose secret stash of money, hidden inside a padded coat, is accidentally discovered by his wife, who mistakes it for a "surprise" gift he prepared for her. Through a single phone call between the couple, the video traces the husband's full arc from carefree leisure, to shock and heartache, to resigned breakdown, and is full of dramatic reversals and humor.
### Plot Breakdown
The plot unfolds around one phone call. Below is a detailed timeline of scenes, speakers, and dialogue:
<table>
<thead>
<tr>
<th>Timestamp</th>
<th>Scene</th>
<th>Speaker</th>
<th>Dialogue (ASR)</th>
</tr>
</thead>
<tbody>
<tr>
<td>0:00 - 0:05</td>
<td>The husband, in a swim cap and towel, lounges leisurely beside an indoor pool.</td>
<td>None</td>
<td>(no dialogue)</td>
</tr>
<tr>
<td>0:05 - 0:10</td>
<td><b>Cut</b>: the wife, in a clothing store, beams with happiness as she calls the husband.</td>
<td>Wife</td>
<td>"Hey, honey! Honey! I love you, love you, love you to death! Mwah mwah mwah."</td>
</tr>
<tr>
<td rowspan="2" style="vertical-align: top;">0:10 - 0:18</td>
<td rowspan="2" style="vertical-align: top;">The husband answers, puzzled by his wife's enthusiasm; she excitedly reveals the "surprise".</td>
<td>Husband</td>
<td>"Hello? What's gotten into you, why so happy?"</td>
</tr>
<tr>
<td>Wife</td>
<td>"Today I found the surprise you left for me in my padded coat: ten thousand yuan!"</td>
</tr>
<tr>
<td>0:18 - 0:27</td>
<td>At the words "ten thousand yuan", the husband's expression freezes, turning from confusion to shock and pain, though he feigns composure.</td>
<td>Husband</td>
<td>"Oh... right, good. As long as... as long as you're happy."</td>
</tr>
<tr>
<td>0:27 - 0:34</td>
<td>The wife happily announces what she spent the money on; the husband is utterly stunned, his heartache deepening.</td>
<td>Wife</td>
<td>"Of course I'm happy! I used it to buy a new dress; wait till I wear it home for you tonight!"</td>
</tr>
<tr>
<td rowspan="3" style="vertical-align: top;">0:34 - 0:46</td>
<td rowspan="3" style="vertical-align: top;">The husband confirms the money is already spent and breaks down. The wife still believes it was intentional; he can't help letting a jab slip out.</td>
<td>Husband</td>
<td>"You already spent it on a dress?!"</td>
</tr>
<tr>
<td>Wife</td>
<td>"Of course! Isn't that what you said, to buy something I like? Honey, you're the best."</td>
</tr>
<tr>
<td>Husband</td>
<td>"You really are a money-waster."</td>
</tr>
<tr>
<td rowspan="4" style="vertical-align: top;">0:46 - 0:59</td>
<td rowspan="4" style="vertical-align: top;">The wife senses something off in his tone; the husband immediately backpedals and urges her to come home early.</td>
<td>Wife</td>
<td>"What? Honey, what did you say?"</td>
</tr>
<tr>
<td>Husband</td>
<td>"Ah, I said great! You're pretty and I'm happy."</td>
</tr>
<tr>
<td>Wife</td>
<td>"Really, honey? Then make sure you come home early today, I'll be waiting!"</td>
</tr>
<tr>
<td>Husband</td>
<td>"Fine, fine, fine."</td>
</tr>
</tbody>
</table>
### Characters and Core Conflict
#### 1. Character Analysis
Husband:
Behavior: hides a private stash of money; after the incident, struggles to mask his true feelings (heartache, shock).
Emotional arc: leisure -> confusion -> shock -> breakdown -> resigned acceptance.
Traits: keeps up appearances; loves his wife yet is exasperated by her; a classic "henpecked husband" figure.
Wife:
Behavior: finds the money, takes it as an expression of her husband's love, and promptly spends it.
Emotional arc: immersed throughout in the happiness and excitement of discovering the "surprise".
Traits: guileless, a decisive spender, full of trust and affection for her husband.
#### 2. Core Conflict
The video's core conflict is a dramatic misunderstanding built on severe information asymmetry:
* The husband's view: the 10,000 yuan he painstakingly stashed away has been accidentally discovered and spent, a painful "shock".
* The wife's view: a romantic 10,000-yuan fund her husband carefully prepared for her, a wonderful "surprise".
This misunderstanding drives the entire story: the contrast between the husband's "swallowing the loss while playing along" and the wife's "blissfully oblivious happiness" creates strong comedic tension and dense laugh lines.
### Summary
Through a familiar household scenario about "secret savings", the video skillfully builds a story full of reversals and humor. It relies on dramatic irony (the audience and the husband know the truth while the wife remains in the dark) to precisely capture the husband's complicated state of mind under a sudden crisis. The whole piece is not only packed with comedic moments but also implicitly touches on communication, trust, and attitudes toward money between spouses, making it easy to spark audience resonance and discussion.
</td>
</tr>
</table>
## Usage
### Dependencies
The installation has been tested and verified on the following environments:
* NVIDIA H20 with CUDA 12.4
* NVIDIA A100 with CUDA 12.1
### Installation
Clone the repo and install dependent packages
```bash
git clone -b arc-qwen-video https://github.com/TencentARC/ARC-Hunyuan-Video-7B.git
cd ARC-Hunyuan-Video-7B
# Install torch 2.6.0 based on your CUDA version
# CUDA 11.8
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu118
# CUDA 12.4
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124
# CUDA 12.6
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu126
pip install librosa decord av accelerate
pip uninstall transformers
pip install git+https://github.com/geyuying/transformers.git@arc-qwen-video
pip install flash_attn==2.7.1.post4
# Install FFmpeg according to your system, and ensure that the following command produces a normal version output:
ffmpeg -version
# (Optional) For vllm, please follow the instructions below,
pip uninstall vllm
pip install git+https://github.com/geyuying/vllm.git@arc-qwen-video
```
#### An 'Ugly' Workaround for vLLM Installation
If you are unable to install our provided vllm package, we offer an alternative "ugly" method:
1. Install vllm with Qwen2.5-VL support.
2. Modify config.json. In your model weights directory, open config.json and change the architectures field to "Qwen2_5_VLForConditionalGeneration".
3. Patch the vllm source code. Locate the file vllm/model_executor/models/qwen2_5_vl.py in your vllm installation path. Add the following code inside the __init__ method of the Qwen2_5_VLForConditionalGeneration class:
```python
# (Add these imports near the top of qwen2_5_vl.py if they are not already present.)
from torch import nn
from transformers import WhisperModel

# Load the Whisper encoder used as the audio tower
whisper_path = 'openai/whisper-large-v3'
speech_encoder = WhisperModel.from_pretrained(whisper_path).encoder
self.speech_encoder = speech_encoder

# MLP that projects Whisper features into the LLM embedding space
speech_dim = speech_encoder.config.d_model
llm_hidden_size = config.vision_config.out_hidden_size
self.mlp_speech = nn.Sequential(
    nn.LayerNorm(speech_dim),
    nn.Linear(speech_dim, llm_hidden_size),
    nn.GELU(),
    nn.Linear(llm_hidden_size, llm_hidden_size)
)
```
**Why this works**: Our model is based on the Qwen2.5-VL architecture, with the addition of an audio encoder and a corresponding MLP. During vllm inference, the multi-modal encoder processes inputs sequentially, while the LLM performs batch inference. Since we only need to pass the final multi-modal embeddings to the LLM, we can reuse the existing code for Qwen2.5-VL.
### Inference
Our model currently excels at processing short videos of up to 5 minutes. If your video is longer, we recommend following the approach used in our demo and API: split the video into segments for inference, then use an LLM to integrate the results.
To quickly verify that your environment is set up correctly and that video and audio information are being processed as expected, you can run the following test case with ARC-Qwen-Video-7B.
```python
video_path = "examples/็ชๆ.mp4"
task = "QA"
question = "What did the man say at the beginning of the video after measuring the thickness of the fried pork cutlet?"
```
Expected Result: If the model's output contains the phrase "So thin", it indicates that your installation is successful.
#### Inference without vllm
```bash
cd ARC-Hunyuan-Video-7B
# For ARC-Qwen-Video-7B
python3 inference_arc_qwen_video.py
# For ARC-Qwen-Video-7B-Narrator
python3 inference_arc_qwen_video_narrator.py
```
#### Inference with vllm
```bash
cd ARC-Hunyuan-Video-7B
# For ARC-Qwen-Video-7B
python3 vllm_arc_qwen_vl_video_batch.py --batch_inference
# For ARC-Qwen-Video-7B-Narrator
python3 vllm_arc_qwen_vl_video_batch_narrator.py --batch_inference
```
## Benchmark Performance
| | Video-MMMU | MMVU | Temp-Compass | Video-Holmes | Video-MME | VCR-Bench | MV-Bench | ShortVid-Bench | Charades-STA |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| ARC-Hunyuan-Video-7B | 31.1 | 49.1 | 66.0 | 40.9 | 58.7 | 50.5 | **62.6** | **73.0** | **54.8** |
| ARC-Qwen-Video-7B | **41.3** | **55.5** | **68.7** | **51.1** | **61.0** | **52.3** | 60.8 | 72.6 | 52.8 |
Quantitative evaluation is performed on different benchmarks using accuracy as the evaluation metric, except for the grounding task on Charades-STA, which uses mIoU. For all benchmarks other than VideoMMMU and Charades-STA, we only evaluated the multiple-choice questions.
## Citation
If you find the work helpful, please consider citing:
```bibtex
@article{ge2025arc,
title={ARC-Hunyuan-Video-7B: Structured Video Comprehension of Real-World Shorts},
author={Ge, Yuying and Ge, Yixiao and Li, Chen and Wang, Teng and Pu, Junfu and Li, Yizhuo and Qiu, Lu and Ma, Jin and Duan, Lisheng and Zuo, Xinyu and others},
journal={arXiv preprint arXiv:2507.20939},
year={2025}
}
```
|
combe4259/fin_simplifier
|
combe4259
| 2025-09-21T14:22:45Z | 97 | 0 | null |
[
"safetensors",
"encoder-decoder",
"seq2seq",
"text-simplification",
"financial-domain",
"ko",
"pytorch",
"dataset:combe4259/fin_simplifier_dataset",
"license:other",
"region:us"
] | null | 2025-09-16T18:04:29Z |
---
language: ko
license: other
base_model:
- snunlp/KR-FinBert-SC
- skt/kogpt2-base-v2
tags:
- encoder-decoder
- seq2seq
- text-simplification
- financial-domain
- ko
- pytorch
datasets:
- combe4259/fin_simplifier_dataset
---
# Financial Text Simplifier
## Model Description
[](https://colab.research.google.com/drive/19Q7kUWtHX2shLx6iGGoT66wEidOrvLCf?usp=sharing)
**fin_simplifier** is an encoder-decoder model that converts complex financial terminology and sentences into plain Korean that laypeople can easily understand.
### Model Architecture (from config.json)
- **Model type**: EncoderDecoderModel
- **Encoder**: snunlp/KR-FinBert-SC (hidden size: 768)
- **Decoder**: skt/kogpt2-base-v2 (vocab size: 51,201)
- **Parameters**: ~255M
- **File size**: 1.02 GB (safetensors format)
### Key Features
- Converts specialized financial terms into easy everyday language
- Optimized for Korean financial documents
- Simplifies complex financial concepts (PER, ROE, derivatives, etc.)
- Applicable to bank counseling and financial education
## Intended Use
### Primary Use Cases
1. **Financial counseling support**: improve customer comprehension during bank consultations
2. **Financial education**: explain complex financial concepts in simple terms
3. **Document simplification**: make terms & conditions and product descriptions easier to understand
4. **Accessibility**: improve access to financial services for financially underserved groups
### Out-of-Scope Use
- Producing legally binding documents
- Substituting for investment advice or financial consultation
- Cases that require exact figures or calculations
## Usage
### Installation
```python
from transformers import EncoderDecoderModel, AutoTokenizer
import torch
# Model loading
model = EncoderDecoderModel.from_pretrained("combe4259/fin_simplifier")
encoder_tokenizer = AutoTokenizer.from_pretrained("snunlp/KR-FinBert-SC")
decoder_tokenizer = AutoTokenizer.from_pretrained("skt/kogpt2-base-v2")
# Set special tokens
if decoder_tokenizer.pad_token is None:
decoder_tokenizer.pad_token = decoder_tokenizer.eos_token
```
### Inference Example
```python
def simplify_text(text, model, encoder_tokenizer, decoder_tokenizer):
# Tokenize input
inputs = encoder_tokenizer(
text,
return_tensors="pt",
max_length=128,
padding="max_length",
truncation=True
)
# Generate simplified text
with torch.no_grad():
generated = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
max_length=128,
num_beams=6,
repetition_penalty=1.2,
length_penalty=0.8,
early_stopping=True,
do_sample=True,
top_k=50,
top_p=0.95,
temperature=0.7
)
# Decode output
simplified = decoder_tokenizer.decode(generated[0], skip_special_tokens=True)
return simplified
# Example usage
complex_text = "์ฃผ๊ฐ์์ต๋น์จ(PER)์ ์ฃผ๊ฐ๋ฅผ ์ฃผ๋น์์ด์ต์ผ๋ก ๋๋ ์งํ์
๋๋ค."
simple_text = simplify_text(complex_text, model, encoder_tokenizer, decoder_tokenizer)
print(f"์๋ฌธ: {complex_text}")
print(f"๊ฐ์ํ: {simple_text}")
# Example output: the simplified text generated by the model
```
## Training Details
### Training Dataset
[Dataset](https://huggingface.co/datasets/combe4259/fin_simplifier_dataset/tree/main)
Custom-built dataset:
- Source: NH NongHyup Bank
- Created by feeding NH NongHyup Bank product description documents into a Gemma model for conversion
### Training Configuration (from trainer_state.json)
- **Epochs**: 10
- **Batch size**: 4 (gradient accumulation steps: 2)
- **Peak learning rate**: 2.99e-05
- **Final learning rate**: 8.82e-09
- **Optimizer**: AdamW (warmup steps: 200)
- **Label smoothing**: 0.1
- **Dropout**: 0.2 (encoder and decoder)
### Generation Hyperparameters
- **Beam Search**: 6 beams
- **Repetition Penalty**: 1.2
- **Length Penalty**: 0.8
- **Temperature**: 0.7
- **Top-k**: 50
- **Top-p**: 0.95
## Evaluation Results
### Training Performance (from trainer_state.json)
- **Initial loss**: 13.53
- **Final loss**: 3.76
- **Loss reduction**: 72.2%
- **Total training steps**: 3,600
- **Convergence**: stable convergence from epoch 8 onward
### Average Loss per Epoch
| Epoch | Avg. Loss |
|--------|-----------|
| 1 | 8.98 |
| 2 | 6.93 |
| 3 | 5.95 |
| 4 | 5.28 |
| 5 | 4.81 |
| 6 | 4.44 |
| 7 | 4.17 |
| 8 | 3.97 |
| 9 | 3.82 |
| 10 | 3.73 |
### Example Outputs (model inputs and outputs in Korean)
| Original (Complex) | Simplified |
|---------------|---------------------|
| ์๊ฐ์ด์ก์ ๋ฐํ์ฃผ์์์ ์ฃผ๊ฐ๋ฅผ ๊ณฑํ ๊ฐ์ผ๋ก ๊ธฐ์
์ ์์ฅ๊ฐ์น๋ฅผ ๋ํ๋
๋๋ค. | ์๊ฐ์ด์ก์ ํ์ฌ์ ๋ชจ๋ ์ฃผ์์ ํฉ์น ๊ฐ๊ฒฉ์
๋๋ค. |
| ํ์๊ฒฐํฉ์ฆ๊ถ์ ๊ธฐ์ด์์ฐ์ ๊ฐ๊ฒฉ๋ณ๋์ ์ฐ๊ณํ์ฌ ์์ต์ด ๊ฒฐ์ ๋๋ ์ฆ๊ถ์
๋๋ค. | ํ์๊ฒฐํฉ์ฆ๊ถ์ ๋ค๋ฅธ ์ํ ๊ฐ๊ฒฉ์ ๋ฐ๋ผ ์์ต์ด ๋ฐ๋๋ ํฌ์ ์ํ์
๋๋ค. |
| ํ๋งค์กฐ๊ฑด๋ถ์ฑ๊ถ(RP)์ ์ผ์ ๊ธฐ๊ฐ ํ ๋ค์ ๋งค์
ํ๋ ์กฐ๊ฑด์ผ๋ก ๋งค๋ํ๋ ์ฑ๊ถ์
๋๋ค. | RP๋ ๋์ค์ ๋ค์ ์ฌ๊ฒ ๋ค๊ณ ์ฝ์ํ๊ณ ์ผ๋จ ํ๋ ์ฑ๊ถ์
๋๋ค. |
| ์ ๋์ฑ์ํ์ ์์ฐ์ ์ ์ ๊ฐ๊ฒฉ์ ํ๊ธํํ์ง ๋ชปํ ์ํ์
๋๋ค. | ์ ๋์ฑ์ํ์ ๊ธํ๊ฒ ํ ๋ ์ ๊ฐ์ ๋ชป ๋ฐ์ ์ํ์
๋๋ค. |
| ์๋ฆฌ๊ธ๊ท ๋ฑ์ํ์ ๋งค์ ๋์ผํ ๊ธ์ก์ผ๋ก ์๊ธ๊ณผ ์ด์๋ฅผ ์ํํ๋ ๋ฐฉ์์
๋๋ค. | ์๋ฆฌ๊ธ๊ท ๋ฑ์ํ์ ๋งค๋ฌ ๊ฐ์ ๊ธ์ก์ ๊ฐ๋ ๋ฐฉ์์
๋๋ค. |
## Citation
```bibtex
@misc{fin_simplifier2024,
title={Financial Text Simplifier: Korean Financial Terms Simplification Model},
author={combe4259},
year={2024},
publisher={HuggingFace},
url={https://huggingface.co/combe4259/fin_simplifier}
}
```
## Acknowledgements
- **KR-FinBert-SC**: finance-domain-specialized encoder
- **SKT KoGPT2**: Korean text generation model
## Contact
- **HuggingFace**: [combe4259](https://huggingface.co/combe4259)
- **Model Card**: for questions, please use the HuggingFace Discussions tab
---
|
arrowone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gliding_poisonous_mosquito
|
arrowone
| 2025-09-21T14:21:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am gliding_poisonous_mosquito",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T11:28:13Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am gliding_poisonous_mosquito
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
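Pending the official snippet, a minimal sketch might look like this (the repo id comes from this card; the question is a hypothetical placeholder):

```python
from transformers import pipeline

question = "What is the capital of France?"  # hypothetical prompt
generator = pipeline("text-generation", model="arrowone/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-gliding_poisonous_mosquito", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```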
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
huijelee/mistral-7b-qlora-nemotron-merged-slerp
|
huijelee
| 2025-09-21T14:18:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T13:14:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
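Pending the official snippet, a minimal sketch for this merged Mistral-7B checkpoint might look like this (the prompt is a hypothetical placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "huijelee/mistral-7b-qlora-nemotron-merged-slerp"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Hypothetical prompt; adjust to your use case.
inputs = tokenizer("Explain QLoRA in one sentence.", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```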
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CraftJarvis/minecraft-textvla-qwen2vl-7b-2509
|
CraftJarvis
| 2025-09-21T14:17:18Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-to-text",
"image-text-to-text",
"conversational",
"dataset:CraftJarvis/minecraft-text-action-dataset",
"arxiv:2509.13347",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-12T09:33:42Z |
---
library_name: transformers
license: mit
datasets:
- CraftJarvis/minecraft-text-action-dataset
metrics:
- accuracy
base_model:
- Qwen/Qwen2-VL-7B-Instruct
pipeline_tag: image-text-to-text
arxiv: 2509.13347
---
# Minecraft-Textvla-Qwen2vl-7b-2509
<div align="left">
<a href="https://craftjarvis.github.io/"><img alt="Homepage" src="https://img.shields.io/badge/%20CraftJarvis-HomePage-ffc107?color=blue&logoColor=white"/></a>
<a href="https://github.com/CraftJarvis/OpenHA"><img alt="Github" src="https://img.shields.io/badge/%F0%9F%A4%97%20Github-CraftJarvis-ffc107?color=3b65ab&logoColor=white"/></a>
<a href="https://arxiv.org/abs/2509.13347"><img src="https://img.shields.io/badge/arXiv-2509.13347-b31b1b.svg"></a>
<a href="https://github.com/CraftJarvis/OpenHA/blob/master/LICENSE"><img src="https://img.shields.io/badge/Code License-MIT-blue"/></a>
</div>
**minecraft-textvla-qwen2vl-7b-2509** is part of the **OpenHA** suite, introduced in our paper [OpenHA: A Series of Open-Source Hierarchical Agentic Models in Minecraft](https://huggingface.co/papers/2509.13347).
## ๐ป Usage
You can download and use this model with:
```sh
python examples/rollout_openha.py \
--output_mode text_action \
--vlm_client_mode hf \
--system_message_tag text_action \
--model_ips localhost --model_ports 11000 \
--model_path CraftJarvis/minecraft-textvla-qwen2vl-7b-2509 \
--model_id minecraft-textvla-qwen2vl-7b-2509 \
--record_path "/DATA/limuyao/evaluate" \
--max_steps_num 200 \
--num_rollouts 8
```
For more details, please refer to our [code repository](https://github.com/CraftJarvis/OpenHA).
## ๐ Citation
```bibtex
@article{wang2025openha,
title={OpenHA: A Series of Open-Source Hierarchical Agentic Models in Minecraft},
author={Zihao Wang and Muyao Li and Kaichen He and Xiangyu Wang and Zhancun Mu and Anji Liu and Yitao Liang},
journal = {arXiv preprint arXiv:2509.13347},
year={2025},
url={https://arxiv.org/abs/2509.13347},
}
```
|
ruru189/bhavani_lora_model1
|
ruru189
| 2025-09-21T14:16:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"unsloth",
"region:us"
] | null | 2025-09-21T14:16:00Z |
---
base_model: unsloth/deepseek-r1-0528-qwen3-8b-unsloth-bnb-4bit
library_name: peft
model_name: outputs
tags:
- generated_from_trainer
- trl
- sft
- unsloth
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [unsloth/deepseek-r1-0528-qwen3-8b-unsloth-bnb-4bit](https://huggingface.co/unsloth/deepseek-r1-0528-qwen3-8b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ruru189/bhavani_lora_model1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
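Because the repository holds a PEFT (LoRA) adapter rather than merged weights, the pipeline above only works once the adapter is attached to its base model. A sketch of the explicit route (assuming the adapter files sit at the repository root):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/deepseek-r1-0528-qwen3-8b-unsloth-bnb-4bit"
adapter_id = "ruru189/bhavani_lora_model1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    tokenize=False, add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```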
## Training procedure
This model was trained with SFT.
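A minimal sketch of such a run with TRL's `SFTTrainer` (the dataset and hyperparameters below are illustrative placeholders, not the actual training configuration):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset -- the card does not name the data used for this run.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="unsloth/deepseek-r1-0528-qwen3-8b-unsloth-bnb-4bit",  # base model from the card
    train_dataset=dataset,
    args=SFTConfig(output_dir="outputs"),
)
trainer.train()
```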
### Framework versions
- PEFT 0.15.2
- TRL: 0.22.2
- Transformers: 4.55.4
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
luckeciano/Llama-3.1-8B-Instruct-GRPO-Base-v2_4461
|
luckeciano
| 2025-09-21T14:14:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T10:13:50Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Llama-3.1-8B-Instruct-GRPO-Base-v2_4461
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Llama-3.1-8B-Instruct-GRPO-Base-v2_4461
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Llama-3.1-8B-Instruct-GRPO-Base-v2_4461", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/su9dg15c)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
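In GRPO, the trainer samples a group of completions per prompt and reinforces those scoring above the group average. A toy sketch with TRL's `GRPOTrainer` (the reward function and hyperparameters are illustrative, not the actual setup):

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# GRPOTrainer expects a "prompt" column; MATH-lighteval stores problems under "problem".
dataset = load_dataset("DigitalLearningGmbH/MATH-lighteval", split="train")
dataset = dataset.map(lambda x: {"prompt": x["problem"]})

# Toy reward: prefer completions containing a boxed final answer (illustrative only;
# the rewards used for the real run are not described in this card).
def boxed_answer_reward(completions, **kwargs):
    return [1.0 if "\\boxed" in c else 0.0 for c in completions]

trainer = GRPOTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",
    reward_funcs=boxed_answer_reward,
    train_dataset=dataset,
    args=GRPOConfig(output_dir="outputs", num_generations=8),
)
trainer.train()
```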
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|