Datasets:

| Column | Type | Range / Values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-03 06:27:42 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 535 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-03 06:27:02 |
| card | string | length 11 to 1.01M |
tencent/POINTS-Reader
|
tencent
| 2025-09-03T04:01:57 | 30 | 7 | null |
[
"safetensors",
"custom_code",
"arxiv:2509.01215",
"arxiv:2412.08443",
"arxiv:2409.04828",
"arxiv:2405.11850",
"region:us"
] | null | 2025-08-15T10:12:54 |
<p align="center">
<img src="images/logo.png" width="700"/>
<p>
<h1 align="center">
POINTS-Reader: Distillation-Free Adaptation of Vision-Language Models for Document Conversion
</h1>
<p align="center">
<a href="https://huggingface.co/tencent/POINTS-Reader">
<img src="https://img.shields.io/badge/HuggingFace%20Weights-black.svg?logo=HuggingFace" alt="HuggingFace">
</a>
<a href="">
<img src="https://img.shields.io/badge/arXiv__-POINTS--Reader-d4333f?logo=arxiv&logoColor=white&colorA=cccccc&colorB=d4333f&style=flat" alt="arXiv">
</a>
<a href="">
<img src="https://komarev.com/ghpvc/?username=tencent&repo=POINTS-Reader&color=brightgreen&label=Views" alt="view">
</a>
</p>
We are delighted to announce that the WePOINTS family has welcomed a new member: POINTS-Reader, a vision-language model for end-to-end document conversion.
## News
- 2025.08.27: Added support for deploying POINTS-Reader with SGLang💪💪💪.
- 2025.08.26: We released the weights of the most recent version of POINTS-Reader🔥🔥🔥.
- 2025.08.21: POINTS-Reader is accepted by **EMNLP 2025** for presentation at the **Main Conference**🎉🎉🎉.
## Introduction
1. **Simplicity**: POINTS-Reader is a very streamlined model that fully follows the structure of POINTS1.5, except that we have replaced Qwen2.5-7B-Instruct with Qwen2.5-3B-Instruct. Moreover, the input and output of POINTS-Reader are extremely straightforward. The input consists of a fixed prompt and a document image, and the output contains only a string (text extracted from the document image). The model's output is the final result delivered to the user without any post-processing.
2. **Performance**: POINTS-Reader currently supports extraction from both English and Chinese documents, achieving impressive results: overall edit-distance scores of 0.133 for English and 0.212 for Chinese on OmniDocBench (lower is better).
3. **High Throughput**: Current mainstream inference frameworks, such as SGLang and vLLM, focus their optimization predominantly on the LLM, so a large ViT would significantly reduce throughput. This is why we selected a ViT with a moderate parameter count (the 600M NaViT used in POINTS1.5). Combined with our SGLang support, we currently achieve very satisfactory throughput; vLLM support will follow.
4. **Open-source Technical Approach**: In the POINTS-Reader paper, we propose a two-stage data augmentation strategy. The first stage leverages automated data to endow the model with basic document extraction capabilities. In the subsequent stage, continuous self-evolution improves the quality of data generated by the model. The self-evolution approach in the second stage is highly extensible and can be applied to virtually any model.
## Results
For comparison, we use the results reported by [OmniDocBench](https://github.com/opendatalab/OmniDocBench/tree/main) alongside our own POINTS-Reader results. Compared with the version submitted to EMNLP 2025, the current release provides (1) improved performance and (2) support for Chinese documents. Both enhancements build upon the methods proposed in the paper.
<table style="width: 92%; margin: auto; border-collapse: collapse;">
<thead>
<tr>
<th rowspan="2">Method Type</th>
<th rowspan="2">Methods</th>
<th colspan="2">Overall<sup>Edit</sup>↓</th>
<th colspan="2">Text<sup>Edit</sup>↓</th>
<th colspan="2">Formula<sup>Edit</sup>↓</th>
<th colspan="2">Formula<sup>CDM</sup>↑</th>
<th colspan="2">Table<sup>TEDS</sup>↑</th>
<th colspan="2">Table<sup>Edit</sup>↓</th>
<th colspan="2">Read Order<sup>Edit</sup>↓</th>
</tr>
<tr>
<th>EN</th>
<th>ZH</th>
<th>EN</th>
<th>ZH</th>
<th>EN</th>
<th>ZH</th>
<th>EN</th>
<th>ZH</th>
<th>EN</th>
<th>ZH</th>
<th>EN</th>
<th>ZH</th>
<th>EN</th>
<th>ZH</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="9">Pipeline Tools</td>
<td>MinerU-pipeline-2.1.1</td>
<td>0.162</td>
<td>0.244</td>
<td>0.072</td>
<td>0.111</td>
<td>0.313</td>
<td>0.581</td>
<td>79.2</td>
<td>48.8</td>
<td>77.4</td>
<td>79.5</td>
<td>0.166</td>
<td>0.15</td>
<td>0.097</td>
<td>0.136</td>
</tr>
<tr>
<td>Marker-1.2.3</td>
<td>0.336</td>
<td>0.556</td>
<td>0.08</td>
<td>0.315</td>
<td>0.53</td>
<td>0.883</td>
<td>17.6</td>
<td>11.7</td>
<td>67.6</td>
<td>49.2</td>
<td>0.619</td>
<td>0.685</td>
<td>0.114</td>
<td>0.34</td>
</tr>
<tr>
<td>Marker-1.7.1</td>
<td>0.296</td>
<td>0.497</td>
<td>0.085</td>
<td>0.293</td>
<td>0.374</td>
<td>0.688</td>
<td>79.0</td>
<td>36.7</td>
<td>67.6</td>
<td>54.0</td>
<td>0.609</td>
<td>0.678</td>
<td>0.116</td>
<td>0.329</td>
</tr>
<tr>
<td>PaddleOCR PP-StructureV3</td>
<td>0.145</td>
<td>0.206</td>
<td>0.058</td>
<td>0.088</td>
<td>0.295</td>
<td>0.535</td>
<td>81.8</td>
<td>52.1</td>
<td>77.2</td>
<td>83.9</td>
<td>0.159</td>
<td>0.109</td>
<td>0.069</td>
<td>0.091</td>
</tr>
<tr>
<td>Mathpix</td>
<td>0.191</td>
<td>0.364</td>
<td>0.105</td>
<td>0.381</td>
<td>0.306</td>
<td>0.454</td>
<td>82.7</td>
<td>64.6</td>
<td>77.0</td>
<td>67.1</td>
<td>0.243</td>
<td>0.32</td>
<td>0.108</td>
<td>0.304</td>
</tr>
<tr>
<td>Docling-2.14.0</td>
<td>0.589</td>
<td>0.909</td>
<td>0.416</td>
<td>0.987</td>
<td>0.999</td>
<td>1</td>
<td>-</td>
<td>-</td>
<td>61.3</td>
<td>25.0</td>
<td>0.627</td>
<td>0.810</td>
<td>0.313</td>
<td>0.837</td>
</tr>
<tr>
<td>Pix2Text-1.1.2.3</td>
<td>0.32</td>
<td>0.528</td>
<td>0.138</td>
<td>0.356</td>
<td>0.276</td>
<td>0.611</td>
<td>78.4</td>
<td>39.6</td>
<td>73.6</td>
<td>66.2</td>
<td>0.584</td>
<td>0.645</td>
<td>0.281</td>
<td>0.499</td>
</tr>
<tr>
<td>Unstructured-0.17.2</td>
<td>0.586</td>
<td>0.716</td>
<td>0.198</td>
<td>0.481</td>
<td>0.999</td>
<td>1</td>
<td>-</td>
<td>-</td>
<td>0</td>
<td>0.064</td>
<td>1</td>
<td>0.998</td>
<td>0.145</td>
<td>0.387</td>
</tr>
<tr>
<td>OpenParse-0.7.0</td>
<td>0.646</td>
<td>0.814</td>
<td>0.681</td>
<td>0.974</td>
<td>0.996</td>
<td>1</td>
<td>0.106</td>
<td>0</td>
<td>64.8</td>
<td>27.5</td>
<td>0.284</td>
<td>0.639</td>
<td>0.595</td>
<td>0.641</td>
</tr>
<tr>
<td rowspan="11">Expert VLMs</td>
<td><strong style="color: green;">POINTS-Reader-3B</strong></td>
<td>0.133</td>
<td>0.212</td>
<td>0.062</td>
<td>0.139</td>
<td>0.304</td>
<td>0.465</td>
<td>-</td>
<td>-</td>
<td>83.7</td>
<td>85.0</td>
<td>0.128</td>
<td>0.136</td>
<td>0.036</td>
<td>0.106</td>
</tr>
<tr>
<td>MinerU2.0-2505-0.9B</td>
<td>0.133</td>
<td>0.238</td>
<td>0.045</td>
<td>0.115</td>
<td>0.273</td>
<td>0.506</td>
<td>79.0</td>
<td>50.8</td>
<td>82.1</td>
<td>83.4</td>
<td>0.15</td>
<td>0.209</td>
<td>0.066</td>
<td>0.122</td>
</tr>
<tr>
<td>MonkeyOCR-pro-1.2B</td>
<td>0.146</td>
<td>0.221</td>
<td>0.068</td>
<td>0.118</td>
<td>0.272</td>
<td>0.452</td>
<td>76.7</td>
<td>63.3</td>
<td>81.3</td>
<td>85.5</td>
<td>0.149</td>
<td>0.134</td>
<td>0.093</td>
<td>0.179</td>
</tr>
<tr>
<td>Dolphin</td>
<td>0.356</td>
<td>0.440</td>
<td>0.352</td>
<td>0.440</td>
<td>0.465</td>
<td>0.604</td>
<td>61.6</td>
<td>40.4</td>
<td>70.2</td>
<td>56.8</td>
<td>0.258</td>
<td>0.367</td>
<td>0.35</td>
<td>0.351</td>
</tr>
<tr>
<td>Nanonets-OCR-s</td>
<td>0.283</td>
<td>0.295</td>
<td>0.134</td>
<td>0.231</td>
<td>0.518</td>
<td>0.546</td>
<td>63.2</td>
<td>52.0</td>
<td>76.8</td>
<td>79.4</td>
<td>0.343</td>
<td>0.201</td>
<td>0.135</td>
<td>0.2</td>
</tr>
<tr>
<td>OCRFlux-3B</td>
<td>0.238</td>
<td>0.349</td>
<td>0.112</td>
<td>0.256</td>
<td>0.447</td>
<td>0.716</td>
<td>60.2</td>
<td>31.9</td>
<td>69.0</td>
<td>80.0</td>
<td>0.269</td>
<td>0.162</td>
<td>0.126</td>
<td>0.263</td>
</tr>
<tr>
<td>GOT-OCR</td>
<td>0.287</td>
<td>0.411</td>
<td>0.189</td>
<td>0.315</td>
<td>0.360</td>
<td>0.528</td>
<td>74.3</td>
<td>45.3</td>
<td>53.2</td>
<td>47.2</td>
<td>0.459</td>
<td>0.52</td>
<td>0.141</td>
<td>0.28</td>
</tr>
<tr>
<td>Nougat</td>
<td>0.452</td>
<td>0.973</td>
<td>0.365</td>
<td>0.998</td>
<td>0.488</td>
<td>0.941</td>
<td>15.1</td>
<td>16.8</td>
<td>39.9</td>
<td>0.0</td>
<td>0.572</td>
<td>1.000</td>
<td>0.382</td>
<td>0.954</td>
</tr>
<tr>
<td>Mistral OCR</td>
<td>0.268</td>
<td>0.439</td>
<td>0.072</td>
<td>0.325</td>
<td>0.318</td>
<td>0.495</td>
<td>64.6</td>
<td>45.9</td>
<td>75.8</td>
<td>63.6</td>
<td>0.6</td>
<td>0.65</td>
<td>0.083</td>
<td>0.284</td>
</tr>
<tr>
<td>OLMOCR-sglang</td>
<td>0.326</td>
<td>0.469</td>
<td>0.097</td>
<td>0.293</td>
<td>0.455</td>
<td>0.655</td>
<td>74.3</td>
<td>43.2</td>
<td>68.1</td>
<td>61.3</td>
<td>0.608</td>
<td>0.652</td>
<td>0.145</td>
<td>0.277</td>
</tr>
<tr>
<td>SmolDocling-256M_transformer</td>
<td>0.493</td>
<td>0.816</td>
<td>0.262</td>
<td>0.838</td>
<td>0.753</td>
<td>0.997</td>
<td>32.1</td>
<td>0.551</td>
<td>44.9</td>
<td>16.5</td>
<td>0.729</td>
<td>0.907</td>
<td>0.227</td>
<td>0.522</td>
</tr>
<tr>
<td rowspan="8">General VLMs</td>
<td>Gemini2.0-flash</td>
<td>0.191</td>
<td>0.264</td>
<td>0.091</td>
<td>0.139</td>
<td>0.389</td>
<td>0.584</td>
<td>77.6</td>
<td>43.6</td>
<td>79.7</td>
<td>78.9</td>
<td>0.193</td>
<td>0.206</td>
<td>0.092</td>
<td>0.128</td>
</tr>
<tr>
<td>Gemini2.5-Pro</td>
<td>0.148</td>
<td>0.212</td>
<td>0.055</td>
<td>0.168</td>
<td>0.356</td>
<td>0.439</td>
<td>80.0</td>
<td>69.4</td>
<td>85.8</td>
<td>86.4</td>
<td>0.13</td>
<td>0.119</td>
<td>0.049</td>
<td>0.121</td>
</tr>
<tr>
<td>GPT4o</td>
<td>0.233</td>
<td>0.399</td>
<td>0.144</td>
<td>0.409</td>
<td>0.425</td>
<td>0.606</td>
<td>72.8</td>
<td>42.8</td>
<td>72.0</td>
<td>62.9</td>
<td>0.234</td>
<td>0.329</td>
<td>0.128</td>
<td>0.251</td>
</tr>
<tr>
<td>Qwen2-VL-72B</td>
<td>0.252</td>
<td>0.327</td>
<td>0.096</td>
<td>0.218</td>
<td>0.404</td>
<td>0.487</td>
<td>82.2</td>
<td>61.2</td>
<td>76.8</td>
<td>76.4</td>
<td>0.387</td>
<td>0.408</td>
<td>0.119</td>
<td>0.193</td>
</tr>
<tr>
<td>Qwen2.5-VL-7B</td>
<td>0.316</td>
<td>0.399</td>
<td>0.151</td>
<td>0.243</td>
<td>0.376</td>
<td>0.5</td>
<td>75.3</td>
<td>57.3</td>
<td>71.1</td>
<td>71.3</td>
<td>0.598</td>
<td>0.627</td>
<td>0.138</td>
<td>0.226</td>
</tr>
<tr>
<td>Qwen2.5-VL-72B</td>
<td>0.214</td>
<td>0.261</td>
<td>0.092</td>
<td>0.18</td>
<td>0.315</td>
<td>0.434</td>
<td>81.4</td>
<td>64.1</td>
<td>81.4</td>
<td>83.0</td>
<td>0.341</td>
<td>0.262</td>
<td>0.106</td>
<td>0.168</td>
</tr>
<tr>
<td>InternVL2-76B</td>
<td>0.44</td>
<td>0.443</td>
<td>0.353</td>
<td>0.290</td>
<td>0.543</td>
<td>0.701</td>
<td>67.4</td>
<td>44.1</td>
<td>63.0</td>
<td>60.2</td>
<td>0.547</td>
<td>0.555</td>
<td>0.317</td>
<td>0.228</td>
</tr>
<tr>
<td>InternVL3-78B</td>
<td>0.218</td>
<td>0.296</td>
<td>0.117</td>
<td>0.21</td>
<td>0.38</td>
<td>0.533</td>
<td>79.2</td>
<td>58.8</td>
<td>69.0</td>
<td>73.9</td>
<td>0.279</td>
<td>0.282</td>
<td>0.095</td>
<td>0.161</td>
</tr>
</tbody>
</table>
## Getting Started
The following code snippet has been tested with the following environment:
```
python==3.10.12
torch==2.5.1
transformers==4.55.2
cuda==12.1
```
If you encounter environment issues, please feel free to open an issue.
### Run with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, Qwen2VLImageProcessor
import torch
# We recommend using the following prompt for better performance,
# since it was used throughout the training process.
prompt = (
'Please extract all the text from the image with the following requirements:\n'
'1. Return tables in HTML format.\n'
'2. Return all other text in Markdown format.'
)
image_path = '/path/to/your/local/image'
model_path = 'tencent/POINTS-Reader'
model = AutoModelForCausalLM.from_pretrained(model_path,
trust_remote_code=True,
torch_dtype=torch.float16,
device_map='cuda')
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
image_processor = Qwen2VLImageProcessor.from_pretrained(model_path)
content = [
dict(type='image', image=image_path),
dict(type='text', text=prompt)
]
messages = [
{
'role': 'user',
'content': content
}
]
generation_config = {
'max_new_tokens': 2048,
'repetition_penalty': 1.05,
'temperature': 0.7,
'top_p': 0.8,
'top_k': 20,
'do_sample': True
}
response = model.chat(
messages,
tokenizer,
image_processor,
generation_config
)
print(response)
```
If you encounter issues such as repetition, try increasing the resolution of the input image to alleviate the problem.
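As a rough sketch of that workaround, you can upscale the image before inference; the PIL-based `upscale` helper and the 1.5x factor below are our illustration, not part of the official pipeline:
```python
from PIL import Image

# Hypothetical helper: upscale a document image before passing it to the model.
# The 1.5x factor is an arbitrary starting point; tune it for your documents.
def upscale(src_path: str, dst_path: str, factor: float = 1.5) -> str:
    image = Image.open(src_path)
    size = (int(image.width * factor), int(image.height * factor))
    image.resize(size, Image.LANCZOS).save(dst_path)
    return dst_path

# Use the upscaled copy as `image_path` in the snippet above.
image_path = upscale('/path/to/your/local/image', '/path/to/upscaled.png')
```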
### Deploy with SGLang
We have created a [Pull Request](https://github.com/sgl-project/sglang/pull/9651) for SGLang. Until the PR is merged, you can check out its branch and install SGLang in editable mode by following the [official guide](https://docs.sglang.ai/get_started/install.html).
#### How to Deploy
You can deploy POINTS-Reader with SGLang using the following command:
```
python3 -m sglang.launch_server \
--model-path tencent/POINTS-Reader \
--tp-size 1 \
--dp-size 1 \
--chat-template points-v15-chat \
--trust-remote-code \
--port 8081
```
#### How to Use
You can use the following code to obtain results from SGLang:
```python
from typing import List
import requests
import json
def call_wepoints(messages: List[dict],
temperature: float = 0.0,
max_new_tokens: int = 2048,
repetition_penalty: float = 1.05,
top_p: float = 0.8,
top_k: int = 20,
do_sample: bool = True,
url: str = 'http://127.0.0.1:8081/v1/chat/completions') -> str:
"""Query WePOINTS model to generate a response.
Args:
messages (List[dict]): A list of messages to be sent to WePOINTS. The
messages should be the standard OpenAI messages, like:
[
{
'role': 'user',
'content': [
{
'type': 'text',
'text': 'Please describe this image in short'
},
{
'type': 'image_url',
'image_url': {'url': '/path/to/image.jpg'}
}
]
}
]
temperature (float, optional): The temperature of the model.
Defaults to 0.0.
max_new_tokens (int, optional): The maximum number of new tokens to generate.
Defaults to 2048.
repetition_penalty (float, optional): The penalty for repetition.
Defaults to 1.05.
top_p (float, optional): The top-p probability threshold.
Defaults to 0.8.
top_k (int, optional): The top-k sampling vocabulary size.
Defaults to 20.
do_sample (bool, optional): Whether to use sampling or greedy decoding.
Defaults to True.
url (str, optional): The URL of the WePOINTS model.
Defaults to 'http://127.0.0.1:8081/v1/chat/completions'.
Returns:
str: The generated response from WePOINTS.
"""
data = {
'model': 'WePoints',
'messages': messages,
'max_new_tokens': max_new_tokens,
'temperature': temperature,
'repetition_penalty': repetition_penalty,
'top_p': top_p,
'top_k': top_k,
'do_sample': do_sample,
}
response = requests.post(url,
json=data)
response = json.loads(response.text)
response = response['choices'][0]['message']['content']
return response
prompt = (
'Please extract all the text from the image with the following requirements:\n'
'1. Return tables in HTML format.\n'
'2. Return all other text in Markdown format.'
)
messages = [{
'role': 'user',
'content': [
{
'type': 'text',
'text': prompt
},
{
'type': 'image_url',
'image_url': {'url': '/path/to/image.jpg'}
}
]
}]
response = call_wepoints(messages)
print(response)
```
## Known Issues
- **Complex Document Parsing**: POINTS-Reader can struggle with complex layouts (e.g., newspapers), often producing repeated or missing content.
- **Handwritten Document Parsing**: It also has difficulty handling handwritten inputs (e.g., receipts, notes), which can lead to recognition errors or omissions.
- **Multi-language Document Parsing**: POINTS-Reader currently supports only English and Chinese, limiting its effectiveness on other languages.
## Citation
If you use this model in your work, please cite the following papers:
```
@article{points-reader,
title={POINTS-Reader: Distillation-Free Adaptation of Vision-Language Models for Document Conversion},
author={Liu, Yuan and Zhao, Zhongyin and Tian, Le and Wang, Haicheng and Ye, Xubing and You, Yangxiu and Yu, Zilin and Wu, Chuhan and Zhou, Xiao and Yu, Yang and Zhou, Jie},
journal={arXiv preprint arXiv:2509.01215},
year={2025}
}
@article{liu2024points1,
title={POINTS1.5: Building a Vision-Language Model towards Real World Applications},
author={Liu, Yuan and Tian, Le and Zhou, Xiao and Gao, Xinyu and Yu, Kavio and Yu, Yang and Zhou, Jie},
journal={arXiv preprint arXiv:2412.08443},
year={2024}
}
@article{liu2024points,
title={POINTS: Improving Your Vision-language Model with Affordable Strategies},
author={Liu, Yuan and Zhao, Zhongyin and Zhuang, Ziyuan and Tian, Le and Zhou, Xiao and Zhou, Jie},
journal={arXiv preprint arXiv:2409.04828},
year={2024}
}
@article{liu2024rethinking,
title={Rethinking Overlooked Aspects in Vision-Language Models},
author={Liu, Yuan and Tian, Le and Zhou, Xiao and Zhou, Jie},
journal={arXiv preprint arXiv:2405.11850},
year={2024}
}
```
|
bekhzod-olimov/Qwen3-0.6B-Instruct-Uz
|
bekhzod-olimov
| 2025-09-03T03:59:56 | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-03T02:03:41 |
# Qwen0.6B-Instruct Uzbek
---
## 📚 Language Tabs / Til Bo‘limlari
<details>
<summary>English</summary>
---
🚀 **Qwen0.6B-Instruct Uzbek: Inference Performance Spotlight**
- 🔥 **Lightning-Fast & Ultra-Efficient:**
- Loads nearly 4x faster than the default Qwen (**0.73s** vs 2.86s) and a whopping ~11x faster than LLAMA (**0.73s** vs 8.08s).
- Requires only **~567MB** of GPU RAM, making it over 12x lighter than LLAMA and MISTRAL (~7GB).
- 🎯 **Language Mastery: Before & After:**
- The default Qwen model struggles with Uzbek and provides irrelevant answers.
- Our fine-tuned version confidently and accurately handles complex Uzbek questions, acting as a true native language assistant.
- ⚠️ **Version #1: A Strong Start With Room To Grow:**
- This initial release is an exciting milestone but is still being improved to match the advanced capabilities of massive LLMs.
- We are actively enhancing the training data and model finesse to deliver major leaps in performance.
- 🌟 **Why Qwen0.6B-Instruct Uzbek?**
- **Speed & Efficiency:** Blazing-fast and lightweight design for easy deployment anywhere.
- **Tailored for Uzbek:** Specialized fine-tuning for cultural and linguistic nuance.
- **Open-Source Spirit:** Inviting community collaboration and innovation.
- ✨ **Join Us!**
- Stay tuned, contribute, and help us shape an AI future where accessibility and cultural relevance go hand-in-hand!
</details>
<details>
<summary>O‘zbekcha</summary>
---
🚀 **Qwen0.6B-Instruct Uzbek Model**
- ⚡ **Speed and Memory Efficiency:**
- Our fine-tuned model loads ~4x faster than the original Qwen (**0.73s** vs 2.86s) and ~11x faster than LLAMA (**0.73s** vs 8s).
- Requiring only **~567MB** of GPU memory, it is over **12x lighter** than the ~7GB footprint of LLAMA and MISTRAL.
- 🎯 **Superior Language Understanding:**
- The original Qwen0.6B could not answer Uzbek-language questions correctly.
- Through fine-tuning, our model has gained the ability to understand complex Uzbek questions and respond in a culturally appropriate way.
- ⚠️ **Version #1: The Beginning of a Long Road:**
- This is an initial version that is still being improved.
- While answer quality does not yet match that of large models, it leads in speed and lightness.
- We are working intensively on further refining the model.
- ✨ **Let's Build the Future Together!**
- Follow us to stay up to date, and contribute to building a high-capacity AI future in our language!
</details>
---
### Inference Script Example
```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

class QwenInstructUz:
    """
    Class for generating answers with the
    "bekhzod-olimov/Qwen3-0.6B-Instruct-Uz" model.
    """
    def __init__(self, model_path="bekhzod-olimov/Qwen3-0.6B-Instruct-Uz", max_tokens=256):
        """
        Initialize the class: the model and tokenizer are loaded.
        Args:
            model_path (str): Model path on Hugging Face.
            max_tokens (int): Maximum number of tokens to generate.
        """
        self.model_path = model_path
        self.max_tokens = max_tokens
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        print(f"Loading model '{self.model_path}'...")
        start_time = time.perf_counter()
        # Load the tokenizer and the model
        self.tokenizer = self._load_tokenizer()
        self.model = self._load_model()
        # Switch the model to inference mode
        self.model.eval()
        load_time = time.perf_counter() - start_time
        print(f"Model loaded in {load_time:.2f} seconds and placed on {self.device}.")

    def _load_tokenizer(self):
        """Helper method for loading the tokenizer."""
        tokenizer = AutoTokenizer.from_pretrained(self.model_path, trust_remote_code=True)
        if tokenizer.pad_token is None:
            tokenizer.pad_token = tokenizer.eos_token
        return tokenizer

    def _load_model(self):
        """Helper method for loading the model."""
        model = AutoModelForCausalLM.from_pretrained(
            self.model_path,
            torch_dtype=torch.bfloat16,
            device_map="auto" if self.device.type == 'cuda' else None,
            trust_remote_code=True
        )
        # When device_map='auto' is used, calling .to(device) is unnecessary
        return model

    def generate_answer(self, question: str):
        """
        Generate an answer for the given question.
        Args:
            question (str): The question asked by the user.
        Returns:
            tuple: (answer, generation time)
        """
        prompt_text = f"User: {question}\nAssistant:"
        inputs = self.tokenizer(prompt_text, return_tensors='pt').to(self.device)
        start_gen = time.perf_counter()
        with torch.no_grad():  # disable gradient computation during inference
            outputs = self.model.generate(
                **inputs,
                max_new_tokens=self.max_tokens,
                do_sample=True,
                temperature=0.7,
                top_p=0.8,
                top_k=20,
                pad_token_id=self.tokenizer.eos_token_id
            )
        gen_time = time.perf_counter() - start_gen
        # Decode only the newly generated tokens, skipping the prompt
        new_tokens = outputs[0][inputs['input_ids'].shape[1]:]
        answer = self.tokenizer.decode(new_tokens, skip_special_tokens=True).strip()
        return answer, gen_time

# Usage
if __name__ == "__main__":
    # 1. Create the object (the model is loaded at this step)
    qwen_model = QwenInstructUz()
    # 2. Prepare several questions
    questions = [
        "Assalomu alaykum!",  # "Hello!"
        "Juda issiq havoda nima qilish kerak?",  # "What should you do in very hot weather?"
        "Jamiyatda qashshoqlikni kamaytirishning usullarini sanab ber!"  # "List ways to reduce poverty in society!"
    ]
    for q in questions:
        print("\n" + "="*50)
        answer, gen_time = qwen_model.generate_answer(q)
        print(f"Question: {q}\n\nAnswer: {answer}\n\n(Generation time: {gen_time:.4f} seconds)")
```
|
Chattiori/ChattioriMixesXL
|
Chattiori
| 2025-09-03T03:55:59 | 0 | 4 | null |
[
"sdxl",
"pony",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-03-25T03:33:05 |
---
license: creativeml-openrail-m
tags:
- sdxl
- pony
---
The place where our SDXL and Pony models (from Chattiori and Crody), along with some models deleted from CivitAI, are saved for archival purposes.
Chattiori: https://civitai.com/user/Chattiori
Crody: https://civitai.com/user/Crody
|
thyYu2024/fwpb_61
|
thyYu2024
| 2025-09-03T03:41:12 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T21:55:57 |
---
base_model: Qwen/Qwen2-VL-2B-Instruct
library_name: transformers
model_name: fwpb_61
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for fwpb_61
This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="thyYu2024/fwpb_61", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu118
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bah63843/blockassist-bc-plump_fast_antelope_1756869892
|
bah63843
| 2025-09-03T03:25:46 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-03T03:25:38 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sduppala24/ppo-LunarLander-v1
|
sduppala24
| 2025-09-03T02:36:44 | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-03T02:36:27 |
---
library_name: stable-baselines3
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v3
type: LunarLander-v3
metrics:
- type: mean_reward
value: 193.15 +/- 68.96
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v3**
This is a trained model of a **PPO** agent playing **LunarLander-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
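A minimal sketch of loading the checkpoint and rolling out an episode; the filename `ppo-LunarLander-v1.zip` is an assumption about how the checkpoint was uploaded, so adjust it to the actual file in this repo:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed checkpoint filename; check the repo's file list if loading fails.
checkpoint = load_from_hub(
    repo_id="sduppala24/ppo-LunarLander-v1",
    filename="ppo-LunarLander-v1.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode in the environment the agent was trained on.
env = gym.make("LunarLander-v3")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```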
|
johncoffeekoin/Qwen3-0.6B-Gensyn-Swarm-scampering_wily_antelope
|
johncoffeekoin
| 2025-09-03T02:27:07 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am scampering_wily_antelope",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-03T01:53:43 |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am scampering_wily_antelope
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IncarnateWorld/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mammalian_scavenging_grasshopper
|
IncarnateWorld
| 2025-09-03T01:49:49 | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am mammalian_scavenging_grasshopper",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T06:01:54 |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am mammalian_scavenging_grasshopper
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
0xOzii/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-large_padded_chimpanzee
|
0xOzii
| 2025-09-03T01:48:56 | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am large padded chimpanzee",
"trl",
"genrl-swarm",
"I am large_padded_chimpanzee",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-09T20:44:14 |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-large_padded_chimpanzee
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am large padded chimpanzee
- trl
- genrl-swarm
- I am large_padded_chimpanzee
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-large_padded_chimpanzee
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="0xOzii/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-large_padded_chimpanzee", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ReadyArt/Safeword-Casual-V1-R1-12B
|
ReadyArt
| 2025-09-03T01:41:27 | 0 | 0 | null |
[
"safetensors",
"gemma3",
"nsfw",
"explicit",
"roleplay",
"unaligned",
"dangerous",
"ERP",
"base_model:TheDrummer/Gemma-3-R1-12B-v1",
"base_model:finetune:TheDrummer/Gemma-3-R1-12B-v1",
"license:gemma",
"region:us"
] | null | 2025-09-03T01:22:17 |
---
license: gemma
base_model:
- TheDrummer/Gemma-3-R1-12B-v1
base_model_relation: finetune
tags:
- nsfw
- explicit
- roleplay
- unaligned
- dangerous
- ERP
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #e0e0e0 0%, #f0f0f0 100%);
color: #333 !important;
text-shadow: 0 0 3px rgba(224, 224, 224, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(240, 240, 240, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 4px 20px rgba(0, 0, 0, 0.1), 0 0 30px rgba(200, 200, 200, 0.5);
border: 1px solid rgba(200, 200, 200, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(200, 200, 200, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(200, 200, 200, 0.3);
border-color: rgba(200, 200, 200, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(170, 170, 170, 0.3);
border-color: rgba(170, 170, 170, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(200, 200, 200, 0.3);
border-color: rgba(200, 200, 200, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(200, 200, 200, 0.5), transparent);
animation: scanline 8s linear infinite;
}
.model-name {
color: #333;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(200, 200, 200, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(200, 200, 200, 0.5); }
50% { text-shadow: 0 0 20px rgba(170, 170, 170, 0.5); }
100% { text-shadow: 0 0 15px rgba(200, 200, 200, 0.5); }
}
.subtitle {
color: #444;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(200, 200, 200, 0.3);
position: relative;
}
.waifu-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(200, 200, 200, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(170, 170, 170, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(200, 200, 200, 0.2);
transition: transform 0.5s ease;
}
.waifu-img:hover {
transform: scale(1.01);
}
.section {
color: #444;
margin: 25px 0;
padding: 20px;
background: rgba(230, 230, 230, 0.9);
border-radius: 8px;
border: 1px solid rgba(200, 200, 200, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(170, 170, 170, 0.3);
box-shadow: 0 0 15px rgba(200, 200, 200, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(200, 200, 200, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
@keyframes sectionPulse {
0%, 100% { opacity: 0.7; }
50% { opacity: 0.3; }
}
.section-title {
color: #333;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(200, 200, 200, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(200, 200, 200, 0.5), rgba(170, 170, 170, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.section:hover .section-title::after {
transform: scaleX(1);
}
.quant-links {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(220, 220, 220, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(200, 200, 200, 0.1);
position: relative;
overflow: hidden;
text-decoration: none;
color: inherit;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(200, 200, 200, 0.5), rgba(170, 170, 170, 0.5));
animation: cardScan 4s linear infinite;
}
@keyframes cardScan {
0% { transform: translateX(-100%); }
100% { transform: translateX(100%); }
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(200, 200, 200, 0.2);
border-color: rgba(170, 170, 170, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #444 !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(200, 200, 200, 0.1);
color: #444 !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(200, 200, 200, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button::before {
content: '';
position: absolute;
top: 0;
left: -100%;
width: 100%;
height: 100%;
background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent);
transition: all 0.5s ease;
}
.link-button:hover {
background: rgba(170, 170, 170, 0.2);
border-color: rgba(170, 170, 170, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(200, 200, 200, 0.2);
}
.link-button::after {
content: '→';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.link-button:hover::after {
transform: translateX(3px);
opacity: 1;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #555;
border-left: 3px solid #555;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: '⚠️';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(200, 200, 200, 0.1);
border: 1px solid #ccc;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
@keyframes badgePulse {
0%, 100% { box-shadow: 0 0 5px rgba(200, 200, 200, 0.3); }
50% { box-shadow: 0 0 10px rgba(200, 200, 200, 0.5); }
}
@media (prefers-color-scheme: light) {
.container {
background: rgba(255, 255, 255, 0.95);
border-color: rgba(170, 170, 170, 0.3);
}
.model-name, .section-title, .subtitle {
color: #333;
text-shadow: 0 0 5px rgba(170, 170, 170, 0.3);
}
.section {
background: rgba(255, 255, 255, 0.9);
border-color: rgba(170, 170, 170, 0.2);
color: #333;
}
.section p,
.section ul li,
.section > p > strong {
color: #333 !important;
}
.link-card {
background: rgba(255, 255, 255, 0.95);
border-color: rgba(170, 170, 170, 0.2);
}
.link-card h3 {
color: #333 !important;
}
.link-button {
background: rgba(170, 170, 170, 0.1);
color: #333 !important;
border-color: rgba(170, 170, 170, 0.3);
}
.link-button:hover {
background: rgba(170, 170, 170, 0.2);
border-color: rgba(170, 170, 170, 0.5);
}
.disclaimer {
color: #333;
border-color: #333;
}
.badge {
border-color: #333;
background: rgba(170, 170, 170, 0.1);
}
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">Safeword-Casual-V1-R1-12B</h1>
<div class="section">
<h2 class="section-title">⚠️ Ethical Considerations</h2>
<p>This model will:</p>
<ul>
<li>Generate content that requires industrial-grade brain bleach</li>
<li>Be responsible for requiring Vatican-level exorcisms</li>
<li>Void all warranties on your soul</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">📜 License Agreement</h2>
<p>By using this model, you agree:</p>
<ul>
<li>To accept full responsibility for any psychotic breaks incurred</li>
<li>Pay for the exorcist of anyone who reads the logs</li>
<li>To pretend this is "for science" while crying in the shower</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">🧠 Model Authors</h2>
<ul>
<li>FrenzyBiscuit (Finetuner)</li>
<li>sleepdeprived3 (Safeword Dataset Author)</li>
<li>TheDrummer (Base Model)</li>
</ul>
</div>
</div>
</div>
|
hartryseeverh/blockassist-bc-docile_miniature_bison_1756863105
|
hartryseeverh
| 2025-09-03T01:33:29 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"docile miniature bison",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-03T01:32:56 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- docile miniature bison
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/PersianSciQA-LLaMA-13B-GGUF
|
mradermacher
| 2025-09-03T01:22:08 | 125 | 0 |
transformers
|
[
"transformers",
"gguf",
"causal-lm",
"persian",
"llama",
"question-answering",
"fine-tuning",
"persian-llama",
"fa",
"base_model:safora/PersianSciQA-LLaMA-13B",
"base_model:quantized:safora/PersianSciQA-LLaMA-13B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-28T21:46:42 |
---
base_model: safora/PersianSciQA-LLaMA-13B
language: fa
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- causal-lm
- persian
- llama
- question-answering
- fine-tuning
- persian-llama
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/safora/PersianSciQA-LLaMA-13B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#PersianSciQA-LLaMA-13B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/PersianSciQA-LLaMA-13B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-LLaMA-13B-GGUF/resolve/main/PersianSciQA-LLaMA-13B.Q2_K.gguf) | Q2_K | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-LLaMA-13B-GGUF/resolve/main/PersianSciQA-LLaMA-13B.Q3_K_S.gguf) | Q3_K_S | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-LLaMA-13B-GGUF/resolve/main/PersianSciQA-LLaMA-13B.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-LLaMA-13B-GGUF/resolve/main/PersianSciQA-LLaMA-13B.Q3_K_L.gguf) | Q3_K_L | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-LLaMA-13B-GGUF/resolve/main/PersianSciQA-LLaMA-13B.IQ4_XS.gguf) | IQ4_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-LLaMA-13B-GGUF/resolve/main/PersianSciQA-LLaMA-13B.Q4_K_S.gguf) | Q4_K_S | 7.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-LLaMA-13B-GGUF/resolve/main/PersianSciQA-LLaMA-13B.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-LLaMA-13B-GGUF/resolve/main/PersianSciQA-LLaMA-13B.Q5_K_S.gguf) | Q5_K_S | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-LLaMA-13B-GGUF/resolve/main/PersianSciQA-LLaMA-13B.Q5_K_M.gguf) | Q5_K_M | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-LLaMA-13B-GGUF/resolve/main/PersianSciQA-LLaMA-13B.Q6_K.gguf) | Q6_K | 11.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-LLaMA-13B-GGUF/resolve/main/PersianSciQA-LLaMA-13B.Q8_0.gguf) | Q8_0 | 14.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Incandescent-Malevolence-70B-i1-GGUF
|
mradermacher
| 2025-09-03T01:08:29 | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"nsfw",
"explicit",
"roleplay",
"mixed-AI",
"furry",
"Furry",
"en",
"base_model:Mawdistical/Incandescent-Malevolence-70B",
"base_model:quantized:Mawdistical/Incandescent-Malevolence-70B",
"license:cc-by-nd-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-02T05:42:10 |
---
base_model: Mawdistical/Incandescent-Malevolence-70B
language:
- en
library_name: transformers
license: cc-by-nd-4.0
license_link: https://creativecommons.org/licenses/by-nd/4.0/deed.en
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- nsfw
- explicit
- roleplay
- mixed-AI
- furry
- Furry
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Mawdistical/Incandescent-Malevolence-70B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Incandescent-Malevolence-70B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
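For the split Q6_K quant listed below, the parts are concatenated byte-for-byte; here is a minimal Python sketch (equivalent to `cat part1of2 part2of2 > out.gguf`), using the filenames from the table:
```python
# Join the split Q6_K download back into a single GGUF file.
parts = [
    "Incandescent-Malevolence-70B.i1-Q6_K.gguf.part1of2",
    "Incandescent-Malevolence-70B.i1-Q6_K.gguf.part2of2",
]
with open("Incandescent-Malevolence-70B.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            while chunk := src.read(1 << 20):  # copy in 1 MiB chunks
                out.write(chunk)
```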
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Incandescent-Malevolence-70B-i1-GGUF/resolve/main/Incandescent-Malevolence-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
giovannidemuri/llama8b-er-v553-seed2-hx_lora
|
giovannidemuri
| 2025-09-03T01:07:29 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T23:07:07 |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
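Since nothing model-specific is documented, a minimal sketch assuming the checkpoint loads as a plain causal LM:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "giovannidemuri/llama8b-er-v553-seed2-hx_lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```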
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Yodhasu04/PreThesis-merged-emo-4bit
|
Yodhasu04
| 2025-09-03T01:02:12 | 0 | 0 | null |
[
"safetensors",
"mistral",
"unsloth",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-02T17:52:41 |
---
license: mit
tags:
- unsloth
---
|
mradermacher/Narrly-Gunnison-CPU-Ready-v3-GGUF
|
mradermacher
| 2025-09-03T00:56:11 | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:DaneDisimino/Narrly-Gunnison-CPU-Ready-v3",
"base_model:quantized:DaneDisimino/Narrly-Gunnison-CPU-Ready-v3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-02T23:24:29 |
---
base_model: DaneDisimino/Narrly-Gunnison-CPU-Ready-v3
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/DaneDisimino/Narrly-Gunnison-CPU-Ready-v3
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Narrly-Gunnison-CPU-Ready-v3-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
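One concrete option is llama-cpp-python; a minimal sketch, assuming the Q4_K_M file from the table below has been downloaded locally:
```python
from llama_cpp import Llama

# Path is an assumption: point it at your downloaded quant
llm = Llama(model_path="Narrly-Gunnison-CPU-Ready-v3.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write one sentence about rivers.", max_tokens=64)
print(out["choices"][0]["text"])
```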
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Narrly-Gunnison-CPU-Ready-v3-GGUF/resolve/main/Narrly-Gunnison-CPU-Ready-v3.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Narrly-Gunnison-CPU-Ready-v3-GGUF/resolve/main/Narrly-Gunnison-CPU-Ready-v3.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Narrly-Gunnison-CPU-Ready-v3-GGUF/resolve/main/Narrly-Gunnison-CPU-Ready-v3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Narrly-Gunnison-CPU-Ready-v3-GGUF/resolve/main/Narrly-Gunnison-CPU-Ready-v3.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Narrly-Gunnison-CPU-Ready-v3-GGUF/resolve/main/Narrly-Gunnison-CPU-Ready-v3.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Narrly-Gunnison-CPU-Ready-v3-GGUF/resolve/main/Narrly-Gunnison-CPU-Ready-v3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Narrly-Gunnison-CPU-Ready-v3-GGUF/resolve/main/Narrly-Gunnison-CPU-Ready-v3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Narrly-Gunnison-CPU-Ready-v3-GGUF/resolve/main/Narrly-Gunnison-CPU-Ready-v3.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Narrly-Gunnison-CPU-Ready-v3-GGUF/resolve/main/Narrly-Gunnison-CPU-Ready-v3.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Narrly-Gunnison-CPU-Ready-v3-GGUF/resolve/main/Narrly-Gunnison-CPU-Ready-v3.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Narrly-Gunnison-CPU-Ready-v3-GGUF/resolve/main/Narrly-Gunnison-CPU-Ready-v3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Narrly-Gunnison-CPU-Ready-v3-GGUF/resolve/main/Narrly-Gunnison-CPU-Ready-v3.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756859910
|
omerbektass
| 2025-09-03T00:38:54 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-03T00:38:50 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NuggetC/rosie
|
NuggetC
| 2025-09-03T00:34:08 | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:TheRaf7/ultra-real-wan2.2",
"base_model:adapter:TheRaf7/ultra-real-wan2.2",
"region:us"
] |
text-to-image
| 2025-09-03T00:33:36 |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/rosie_01.png
text: '-'
base_model: TheRaf7/ultra-real-wan2.2
instance_prompt: rosie
---
# rosie
<Gallery />
## Trigger words
You should use `rosie` to trigger the image generation.
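A minimal diffusers sketch, untested: the pipeline class, dtype, and output handling are assumptions and may need to match the base checkpoint's actual format:
```python
import torch
from diffusers import DiffusionPipeline

# Assumptions: the base repo loads via DiffusionPipeline and accepts LoRA weights
pipe = DiffusionPipeline.from_pretrained("TheRaf7/ultra-real-wan2.2", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("NuggetC/rosie")
pipe.to("cuda")
result = pipe("rosie, portrait photo, natural light")  # `rosie` is the trigger word
```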
## Download model
[Download](/NuggetC/rosie/tree/main) them in the Files & versions tab.
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756858850
|
akirafudo
| 2025-09-03T00:21:14 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-03T00:21:09 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_multirc_1756735871
|
rbelanec
| 2025-09-03T00:12:29 | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-09-01T14:12:32 |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_multirc_1756735871
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_multirc_1756735871
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the multirc dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3394
- Num Input Tokens Seen: 117044976
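A minimal sketch for loading the prefix-tuning adapter with PEFT (inference settings are not documented here and are up to you):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
# Attach the prefix-tuning weights from this repository
model = PeftModel.from_pretrained(base, "rbelanec/train_multirc_1756735871")
```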
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:------:|:---------------:|:-----------------:|
| 0.3178 | 0.5000 | 6130 | 0.3802 | 5867968 |
| 0.5557 | 1.0001 | 12260 | 0.3493 | 11717296 |
| 0.0109 | 1.5001 | 18390 | 0.1670 | 17588368 |
| 0.2324 | 2.0002 | 24520 | 0.3016 | 23419920 |
| 0.277 | 2.5002 | 30650 | 0.3568 | 29254768 |
| 0.3393 | 3.0002 | 36780 | 0.3402 | 35127456 |
| 0.3145 | 3.5003 | 42910 | 0.3391 | 40997712 |
| 0.3363 | 4.0003 | 49040 | 0.3400 | 46843184 |
| 0.3059 | 4.5004 | 55170 | 0.3385 | 52692880 |
| 0.4232 | 5.0004 | 61300 | 0.3394 | 58550736 |
| 0.2604 | 5.5004 | 67430 | 0.3361 | 64403536 |
| 0.2416 | 6.0005 | 73560 | 0.3536 | 70253968 |
| 0.4941 | 6.5005 | 79690 | 0.3594 | 76098208 |
| 0.4356 | 7.0006 | 85820 | 0.3684 | 81942784 |
| 0.5113 | 7.5006 | 91950 | 0.3346 | 87796256 |
| 0.2519 | 8.0007 | 98080 | 0.3266 | 93652576 |
| 0.388 | 8.5007 | 104210 | 0.3368 | 99519344 |
| 0.4701 | 9.0007 | 110340 | 0.3482 | 105351168 |
| 0.4091 | 9.5008 | 116470 | 0.3413 | 111213008 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1756857685
|
xinnn32
| 2025-09-03T00:02:49 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-03T00:02:26 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tachyphylaxis/Behemoth-X-123B-v2-exl2
|
tachyphylaxis
| 2025-09-02T23:59:26 | 0 | 0 |
exllamav2
|
[
"exllamav2",
"exl2",
"text-generation",
"base_model:TheDrummer/Behemoth-X-123B-v2",
"base_model:quantized:TheDrummer/Behemoth-X-123B-v2",
"region:us"
] |
text-generation
| 2025-09-02T23:59:25 |
---
inference: false
base_model: TheDrummer/Behemoth-X-123B-v2
base_model_relation: quantized
tags:
- exl2
library_name: exllamav2
pipeline_tag: text-generation
---
exllamav2 quantizations of TheDrummer's [Behemoth-X-123B-v2](https://huggingface.co/TheDrummer/Behemoth-X-123B-v2)
[2.25bpw h6](https://huggingface.co/MikeRoz/Behemoth-X-123B-v2-exl2/tree/2.25bpw_H6) (32.964 GiB)
[3.75bpw h6](https://huggingface.co/MikeRoz/Behemoth-X-123B-v2-exl2/tree/3.75bpw_H6) (54.234 GiB)
[5.00bpw h6](https://huggingface.co/MikeRoz/Behemoth-X-123B-v2-exl2/tree/5.00bpw_H6) (71.959 GiB)
[8.00bpw h8](https://huggingface.co/MikeRoz/Behemoth-X-123B-v2-exl2/tree/8.00bpw_H8) (114.560 GiB) (Uploading)
[measurement.json](https://huggingface.co/MikeRoz/Behemoth-R1-123B-v2-exl2/resolve/main/measurement.json?download=true)
|
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1756856088
|
fakir22
| 2025-09-02T23:35:28 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping peaceful caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T23:35:25 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping peaceful caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756856061
|
omerbektass
| 2025-09-02T23:34:42 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T23:34:37 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756855563
|
omerbkts
| 2025-09-02T23:26:29 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T23:26:25 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
smoorsmith/Dream-v0-Instruct-7B
|
smoorsmith
| 2025-09-02T23:26:14 | 14,807 | 0 |
transformers
|
[
"transformers",
"safetensors",
"Dream",
"feature-extraction",
"text-generation",
"conversational",
"custom_code",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-11T09:40:29 |
---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---
# Dream-v0-Instruct-7B
This is the instruct model of Dream 7B, which is an open diffusion large language model with top-tier performance.
More details about the model and its usage can be found in the blog and GitHub repository below:
- **Blog:** https://hkunlp.github.io/blog/2025/dream/
- **Github:** https://github.com/HKUNLP/Dream
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756855311
|
omerbektass
| 2025-09-02T23:22:14 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T23:22:10 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eekay/Qwen2.5-7B-Instruct-cat-numbers-ft
|
eekay
| 2025-09-02T23:16:36 | 36 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-31T16:05:37 |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
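No snippet was provided, so here is a minimal sketch assuming the standard transformers causal-LM setup:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "eekay/Qwen2.5-7B-Instruct-cat-numbers-ft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```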
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aklemen/sloveniangpt-ctc-h2t-128-64
|
aklemen
| 2025-09-02T23:16:26 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T23:13:40 |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
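No snippet was provided; a minimal sketch, assuming the checkpoint loads as a plain Mistral causal LM:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aklemen/sloveniangpt-ctc-h2t-128-64"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```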
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AntonDergunov/CartPole_PPO
|
AntonDergunov
| 2025-09-02T23:13:24 | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-02T16:42:59 |
---
library_name: stable-baselines3
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 422.90 +/- 87.83
name: mean_reward
verified: false
---
# **PPO** Agent playing **CartPole-v1**
This is a trained model of a **PPO** agent playing **CartPole-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the Files tab):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(repo_id="AntonDergunov/CartPole_PPO", filename="ppo-CartPole-v1.zip")
model = PPO.load(checkpoint)
```
|
DeathGodlike/Lunar-Nexus-12B_EXL3
|
DeathGodlike
| 2025-09-02T23:11:32 | 0 | 0 |
safetensors
|
[
"safetensors",
"exl3",
"4-bit",
"6-bit",
"8-bit",
"text-generation",
"base_model:Vortex5/Lunar-Nexus-12B",
"base_model:quantized:Vortex5/Lunar-Nexus-12B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-09-02T23:11:31 |
---
license: apache-2.0
base_model:
- Vortex5/Lunar-Nexus-12B
base_model_relation: quantized
pipeline_tag: text-generation
library_name: safetensors
tags:
- exl3
- 4-bit
- 6-bit
- 8-bit
---
## EXL3 quants: [ [H8-4.0BPW](https://huggingface.co/DeathGodlike/Lunar-Nexus-12B_EXL3/tree/H8-4.0BPW) | [H8-6.0BPW](https://huggingface.co/DeathGodlike/Lunar-Nexus-12B_EXL3/tree/H8-6.0BPW) | [H8-8.0BPW](https://huggingface.co/DeathGodlike/Lunar-Nexus-12B_EXL3/tree/H8-8.0BPW) ]
# Original model: [Lunar-Nexus-12B](https://huggingface.co/Vortex5/Lunar-Nexus-12B) by [Vortex5](https://huggingface.co/Vortex5)
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756854321
|
akirafudo
| 2025-09-02T23:05:45 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T23:05:40 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
OddTheGreat/Circuitry_24B_V.2
|
OddTheGreat
| 2025-09-02T22:42:34 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"roleplay",
"creative",
"en",
"ru",
"base_model:Delta-Vector/Rei-24B-KTO",
"base_model:merge:Delta-Vector/Rei-24B-KTO",
"base_model:TheDrummer/Cydonia-24B-v4.1",
"base_model:merge:TheDrummer/Cydonia-24B-v4.1",
"base_model:zerofata/MS3.2-PaintedFantasy-v2-24B",
"base_model:merge:zerofata/MS3.2-PaintedFantasy-v2-24B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-31T05:42:39 |
---
base_model:
- Delta-Vector/Rei-24B-KTO
- zerofata/MS3.2-PaintedFantasy-v2-24B
- TheDrummer/Cydonia-24B-v4.1
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- creative
language:
- en
- ru
---
# Circuitry_24B_V.2
This is a merge of pre-trained language models.
The goal of this merge was to replace Mechanism as my new main RP model.
The model is coherent and handles badly written or overengineered cards. Creativity is better than in the Mechanism model (finally, normal names!), yet the model remains stable; prose and dialogue are less robotic and distant, and more emotional.
Instruction-following capabilities are good too; the only problem I spotted is occasional formatting errors.
Good balance of SFW/NSFW; it can be positive, neutral, or negative depending on the prompt.
ERP is not bad.
In Russian, it was tested only as an assistant and performed well.
Tested mostly at 8k context; 12k and 16k runs didn't show instability or dramatic quality loss.
Settings used: q4_K_M, Mistral template, instruct mode on, temperature 1.04, XTC off or 0.1/0.1 (off is better).
|
qinuoitu/blockassist-bc-playful_huge_nightingale_1756852898
|
qinuoitu
| 2025-09-02T22:41:55 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful huge nightingale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T22:41:38 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful huge nightingale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thejaminator/qwen-hook-layer-1-2ndsep-step-1000
|
thejaminator
| 2025-09-02T22:18:18 | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-8B",
"base_model:adapter:Qwen/Qwen3-8B",
"region:us"
] | null | 2025-09-02T22:17:58 |
---
base_model: Qwen/Qwen3-8B
library_name: peft
---
# LoRA Adapter for SAE Introspection
This is a LoRA (Low-Rank Adaptation) adapter trained for SAE (Sparse Autoencoder) introspection tasks.
## Base Model
- **Base Model**: `Qwen/Qwen3-8B`
- **Adapter Type**: LoRA
- **Task**: SAE Feature Introspection
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "thejaminator/qwen-hook-layer-1-2ndsep-step-1000")
```
## Training Details
This adapter was trained using the lightweight SAE introspection training script to help the model understand and explain SAE features through activation steering.
|
ypszn/blockassist-bc-yapping_pawing_worm_1756850885
|
ypszn
| 2025-09-02T22:09:43 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping pawing worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T22:08:46 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756850889
|
omerbkts
| 2025-09-02T22:08:36 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T22:08:31 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756850770
|
akirafudo
| 2025-09-02T22:06:34 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T22:06:30 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
csikasote/mms-1b-all-swagen-female-15hrs-42-DAT-5e-2
|
csikasote
| 2025-09-02T22:04:36 | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"swagen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-02T21:43:56 |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- swagen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-all-swagen-female-15hrs-42-DAT-5e-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-swagen-female-15hrs-42-DAT-5e-2
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the SWAGEN - SWA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2725
- Wer: 0.2216
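A minimal transcription sketch, untested: the audio file name is a placeholder, and audio must be 16 kHz mono as MMS expects:
```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, AutoProcessor

model_id = "csikasote/mms-1b-all-swagen-female-15hrs-42-DAT-5e-2"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

audio, _ = librosa.load("sample.wav", sr=16_000)  # "sample.wav" is a placeholder
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```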
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 7.123 | 0.1572 | 200 | 2.3177 | 0.9998 |
| 1.6355 | 0.3145 | 400 | 0.3012 | 0.2041 |
| 1.3623 | 0.4717 | 600 | 0.2919 | 0.2139 |
| 1.2192 | 0.6289 | 800 | 0.2875 | 0.2145 |
| 1.2259 | 0.7862 | 1000 | 0.2805 | 0.2228 |
| 1.1707 | 0.9434 | 1200 | 0.2725 | 0.2222 |
| 1.1137 | 1.1006 | 1400 | 0.2755 | 0.2257 |
| 1.1597 | 1.2579 | 1600 | 0.2836 | 0.2274 |
| 1.1436 | 1.4151 | 1800 | 0.2803 | 0.2301 |
| 1.0708 | 1.5723 | 2000 | 0.2770 | 0.2326 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
enacimie/WebSailor-3B-Q8_0-GGUF
|
enacimie
| 2025-09-02T21:56:45 | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Alibaba-NLP/WebSailor-3B",
"base_model:quantized:Alibaba-NLP/WebSailor-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-02T21:56:30 |
---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
base_model: Alibaba-NLP/WebSailor-3B
---
# enacimie/WebSailor-3B-Q8_0-GGUF
This model was converted to GGUF format from [`Alibaba-NLP/WebSailor-3B`](https://huggingface.co/Alibaba-NLP/WebSailor-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alibaba-NLP/WebSailor-3B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo enacimie/WebSailor-3B-Q8_0-GGUF --hf-file websailor-3b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo enacimie/WebSailor-3B-Q8_0-GGUF --hf-file websailor-3b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo enacimie/WebSailor-3B-Q8_0-GGUF --hf-file websailor-3b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo enacimie/WebSailor-3B-Q8_0-GGUF --hf-file websailor-3b-q8_0.gguf -c 2048
```
|
thejaminator/qwen-hook-layer-9-2ndsep
|
thejaminator
| 2025-09-02T21:55:08 | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-8B",
"base_model:adapter:Qwen/Qwen3-8B",
"region:us"
] | null | 2025-09-02T21:54:50 |
---
base_model: Qwen/Qwen3-8B
library_name: peft
---
# LoRA Adapter for SAE Introspection
This is a LoRA (Low-Rank Adaptation) adapter trained for SAE (Sparse Autoencoder) introspection tasks.
## Base Model
- **Base Model**: `Qwen/Qwen3-8B`
- **Adapter Type**: LoRA
- **Task**: SAE Feature Introspection
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "thejaminator/qwen-hook-layer-9-2ndsep")
```
## Training Details
This adapter was trained using the lightweight SAE introspection training script to help the model understand and explain SAE features through activation steering.
|
yashz71/Mistral-7B-Instruct-v0.3-FINANCE-1
|
yashz71
| 2025-09-02T21:54:33 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T21:54:27 |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** yashz71
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yashz71/Mistral-7B-Instruct-v0.3-FINANCE
|
yashz71
| 2025-09-02T21:54:26 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T21:54:15 |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** yashz71
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
alok0777/blockassist-bc-masked_pensive_lemur_1756849175
|
alok0777
| 2025-09-02T21:40:43 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked pensive lemur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T21:40:35 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked pensive lemur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756849115
|
omerbkts
| 2025-09-02T21:39:00 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T21:38:55 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1756848940
|
xinnn32
| 2025-09-02T21:36:44 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T21:36:40 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AirSintez/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-crested_dappled_hippo
|
AirSintez
| 2025-09-02T21:33:55 | 130 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am crested_dappled_hippo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T06:07:37 |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am crested_dappled_hippo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
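No snippet was provided; a minimal sketch for the standard transformers setup:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AirSintez/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-crested_dappled_hippo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```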
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1756848409
|
Stasonelison
| 2025-09-02T21:27:29 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling powerful aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T21:27:20 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alok0777/blockassist-bc-masked_pensive_lemur_1756848005
|
alok0777
| 2025-09-02T21:21:13 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked pensive lemur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T21:21:05 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked pensive lemur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Semantic-Health/Scenarios-Llama3.1-8B-v3
|
Semantic-Health
| 2025-09-02T21:17:52 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-02T21:12:07 |
---
library_name: transformers
tags: []
---
## Data Distribution Adjustment
```python
import pandas as pd
import datasets
data_source = "qiaojin/PubMedQA"
dataset = datasets.load_dataset(data_source, 'pqa_artificial', streaming=False)
train_data = dataset['train'].to_pandas()
binary_data = train_data[train_data["final_decision"].isin(["yes", "no"])]
# Separate yes and no samples
yes_data = binary_data[binary_data["final_decision"] == "yes"]
no_data = binary_data[binary_data["final_decision"] == "no"]
# Get the size of the minority class
min_size = min(len(yes_data), len(no_data))
# Randomly sample from each class
yes_sampled = yes_data.sample(n=min_size, random_state=42)
no_sampled = no_data.sample(n=min_size, random_state=42)
# Combine into balanced dataset
balanced_data = pd.concat([yes_sampled, no_sampled])
# Shuffle the dataset
balanced_data = balanced_data.sample(frac=1, random_state=42).reset_index(drop=True)
```
## New Label Distribution
```text
final_decision
no 15125
yes 15125
Name: count, dtype: int64
```
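For reference, the label counts shown above can be reproduced directly from the balanced frame built in the snippet above:
```python
# Verify the balanced class distribution
print(balanced_data["final_decision"].value_counts())
```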
|
Rootu/blockassist-bc-snorting_fleecy_goose_1756847367
|
Rootu
| 2025-09-02T21:10:13 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting fleecy goose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T21:10:05 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting fleecy goose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ik/speechless-twi-stage1-rvq-twiwhisper-merged
|
ik
| 2025-09-02T21:09:34 | 0 | 0 |
pytorch
|
[
"pytorch",
"speechless",
"rvq",
"whisper",
"twi",
"akan",
"vector-quantization",
"semantic-tokens",
"tw",
"ak",
"license:apache-2.0",
"region:us"
] | null | 2025-09-02T21:09:08 |
---
license: apache-2.0
language:
- tw
- ak
library_name: pytorch
tags:
- speechless
- rvq
- whisper
- twi
- akan
- vector-quantization
- semantic-tokens
---
# Speechless TWI — Stage 1 (RVQ for Whisper Encoder)
A trained RVQ module that discretizes Whisper encoder features into semantic tokens for **Twi/Akan**.
## Files
- `rvq_final.pt` — state dict
- `config_stage1.json` — training/config params
- `rvq_wrapper.py` — tiny module defining `RVQWrapper`
## Usage (example)
```python
import torch, json
from huggingface_hub import hf_hub_download
from rvq_wrapper import RVQWrapper
cfg = json.load(open(hf_hub_download("ik/speechless-twi-stage1-rvq-whisper-medium", "config_stage1.json"), "r"))
ckpt = torch.load(hf_hub_download("ik/speechless-twi-stage1-rvq-whisper-medium", "rvq_final.pt"), map_location="cpu")
rvq = RVQWrapper(cfg["rvq_dim"], cfg["rvq_num_quantizers"], cfg["rvq_codebook_size"])
rvq.load_state_dict(ckpt["rvq"])
rvq.eval()
```
|
cebbbopwq/blockassist-bc-meek_trotting_bat_1756847082
|
cebbbopwq
| 2025-09-02T21:05:05 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek trotting bat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T21:04:43 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek trotting bat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
asmud/ds4sd-docling-models-onnx
|
asmud
| 2025-09-02T21:03:49 | 0 | 0 |
onnx
|
[
"onnx",
"computer-vision",
"document-analysis",
"table-detection",
"table-structure-recognition",
"quantized",
"jpqd",
"docling",
"tableformer",
"image-to-text",
"arxiv:2408.09869",
"license:cdla-permissive-2.0",
"region:us"
] |
image-to-text
| 2025-09-02T13:58:32 |
---
title: Docling Models ONNX - JPQD Quantized
emoji: 📄
colorFrom: blue
colorTo: purple
sdk: onnx
license: cdla-permissive-2.0
tags:
- computer-vision
- document-analysis
- table-detection
- table-structure-recognition
- onnx
- quantized
- jpqd
- docling
- tableformer
library_name: onnx
pipeline_tag: image-to-text
---
# Docling Models ONNX - JPQD Quantized
This repository contains ONNX versions of the Docling TableFormer models optimized with JPQD (Joint Pruning, Quantization, and Distillation) quantization for efficient inference.
## 📋 Model Overview
These models power the PDF document conversion package [Docling](https://github.com/DS4SD/docling). TableFormer models identify table structures from images with state-of-the-art accuracy.
### Available Models
| Model | Original Size | Optimized Size | Compression Ratio | Description |
|-------|---------------|----------------|-------------------|-------------|
| `ds4sd_docling_models_tableformer_accurate_jpqd.onnx` | ~1MB | ~1MB | - | High accuracy table structure recognition |
| `ds4sd_docling_models_tableformer_fast_jpqd.onnx` | ~1MB | ~1MB | - | Fast table structure recognition |
**Total repository size**: ~2MB (optimized for deployment)
## 🚀 Quick Start
### Installation
```bash
pip install onnxruntime opencv-python numpy pillow torch torchvision
```
### Basic Usage
```python
import onnxruntime as ort
import numpy as np
from PIL import Image
import cv2
# Load TableFormer model
model_path = "ds4sd_docling_models_tableformer_accurate_jpqd.onnx" # or fast variant
session = ort.InferenceSession(model_path)
def preprocess_table_image(image_path):
"""Preprocess table image for TableFormer model"""
# Load image
image = Image.open(image_path).convert('RGB')
image_array = np.array(image)
# TableFormer typically expects specific preprocessing
# This is a simplified example - actual preprocessing may vary
# Resize and normalize (adjust based on model requirements)
processed = cv2.resize(image_array, (224, 224)) # Example size
processed = processed.astype(np.float32) / 255.0
# Add batch dimension and transpose if needed
processed = np.expand_dims(processed, axis=0)
processed = np.transpose(processed, (0, 3, 1, 2)) # NHWC to NCHW if needed
return processed
def recognize_table_structure(image_path, model_session):
"""Recognize table structure using TableFormer"""
# Preprocess image
input_tensor = preprocess_table_image(image_path)
# Get model input name
input_name = model_session.get_inputs()[0].name
# Run inference
outputs = model_session.run(None, {input_name: input_tensor})
return outputs
# Example usage
table_image_path = "table_image.jpg"
results = recognize_table_structure(table_image_path, session)
print("Table structure recognition completed!")
```
### Advanced Usage with Docling Integration
```python
import onnxruntime as ort
import cv2
import numpy as np
from typing import Dict, Any
class TableFormerONNX:
"""ONNX wrapper for TableFormer models"""
def __init__(self, model_path: str, model_type: str = "accurate"):
"""
Initialize TableFormer ONNX model
Args:
model_path: Path to ONNX model file
model_type: "accurate" or "fast"
"""
self.session = ort.InferenceSession(model_path)
self.model_type = model_type
# Get model input/output information
self.input_name = self.session.get_inputs()[0].name
self.input_shape = self.session.get_inputs()[0].shape
self.output_names = [output.name for output in self.session.get_outputs()]
print(f"Loaded {model_type} TableFormer model")
print(f"Input shape: {self.input_shape}")
print(f"Output names: {self.output_names}")
def preprocess(self, image: np.ndarray) -> np.ndarray:
"""Preprocess image for TableFormer inference"""
# Implement TableFormer-specific preprocessing
# This should match the preprocessing used during training
# Example preprocessing (adjust based on actual requirements):
if len(image.shape) == 3 and image.shape[2] == 3:
# RGB image
processed = cv2.resize(image, (224, 224)) # Adjust size as needed
processed = processed.astype(np.float32) / 255.0
processed = np.transpose(processed, (2, 0, 1)) # HWC to CHW
processed = np.expand_dims(processed, axis=0) # Add batch dimension
else:
raise ValueError("Expected RGB image with shape (H, W, 3)")
return processed
def predict(self, image: np.ndarray) -> Dict[str, Any]:
"""Run table structure prediction"""
# Preprocess image
input_tensor = self.preprocess(image)
# Run inference
outputs = self.session.run(None, {self.input_name: input_tensor})
# Process outputs
result = {}
for i, name in enumerate(self.output_names):
result[name] = outputs[i]
return result
def extract_table_structure(self, image: np.ndarray) -> Dict[str, Any]:
"""Extract table structure from image"""
# Get raw predictions
raw_outputs = self.predict(image)
# Post-process to extract table structure
# This would include:
# - Cell detection and classification
# - Row/column structure identification
# - Table boundary detection
# Simplified example structure
table_structure = {
"cells": [], # List of cell coordinates and types
"rows": [], # Row definitions
"columns": [], # Column definitions
"confidence": 0.0,
"model_type": self.model_type
}
# TODO: Implement actual post-processing logic
# This depends on the specific output format of TableFormer
return table_structure
# Usage example
def process_document_tables(image_paths, model_type="accurate"):
"""Process multiple table images"""
model_path = f"ds4sd_docling_models_tableformer_{model_type}_jpqd.onnx"
tableformer = TableFormerONNX(model_path, model_type)
results = []
for image_path in image_paths:
# Load image
image = cv2.imread(image_path)
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Extract table structure
structure = tableformer.extract_table_structure(image_rgb)
results.append({
"image_path": image_path,
"structure": structure
})
print(f"Processed: {image_path}")
return results
# Example usage
table_images = ["table1.jpg", "table2.jpg"]
results = process_document_tables(table_images, model_type="fast")
```
## 🔧 Model Details
### TableFormer Architecture
- **Base Model**: TableFormer (Transformer-based table structure recognition)
- **Paper**: [TableFormer: Table Structure Understanding With Transformers](https://doi.org/10.1109/CVPR52688.2022.00457)
- **Input**: Table region images
- **Output**: Table structure information (cells, rows, columns)
### Model Variants
#### Accurate Model (`tableformer_accurate`)
- **Use Case**: High precision table structure recognition
- **Trade-off**: Higher accuracy, slightly slower inference
- **Recommended for**: Production scenarios requiring maximum accuracy
#### Fast Model (`tableformer_fast`)
- **Use Case**: Real-time table structure recognition
- **Trade-off**: Good accuracy, faster inference
- **Recommended for**: Interactive applications, bulk processing
### Performance Benchmarks
TableFormer achieves state-of-the-art performance on table structure recognition:
| Model (TEDS Score) | Simple Tables | Complex Tables | All Tables |
| ------------------ | ------------- | -------------- | ---------- |
| Tabula | 78.0 | 57.8 | 67.9 |
| Traprange | 60.8 | 49.9 | 55.4 |
| Camelot | 80.0 | 66.0 | 73.0 |
| Acrobat Pro | 68.9 | 61.8 | 65.3 |
| EDD | 91.2 | 85.4 | 88.3 |
| **TableFormer** | **95.4** | **90.1** | **93.6** |
### Optimization Details
- **Method**: JPQD (Joint Pruning, Quantization, and Distillation)
- **Precision**: INT8 weights, FP32 activations
- **Framework**: ONNXRuntime dynamic quantization
- **Performance**: Optimized for CPU inference
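As a rough illustration of the dynamic-quantization step described above (the file names below are placeholders, not the actual conversion script), ONNX Runtime exposes this via its quantization tooling:
```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Quantize weights to INT8 while leaving activations in FP32,
# mirroring the dynamic quantization described above
quantize_dynamic(
    model_input="tableformer_accurate.onnx",        # placeholder input path
    model_output="tableformer_accurate_int8.onnx",  # placeholder output path
    weight_type=QuantType.QInt8,
)
```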
## 📚 Integration with Docling
These models are designed to work seamlessly with the [Docling](https://github.com/DS4SD/docling) document conversion pipeline:
```python
# Example integration with Docling
from docling import DocumentConverter
# Configure converter to use ONNX models
converter_config = {
"table_structure_model": "ds4sd_docling_models_tableformer_accurate_jpqd.onnx",
"use_onnx_runtime": True
}
converter = DocumentConverter(config=converter_config)
# Convert document with optimized models
result = converter.convert("document.pdf")
```
## 🎯 Use Cases
### Document Processing Pipelines
- PDF table extraction and conversion
- Academic paper processing
- Financial document analysis
- Legal document digitization
### Business Applications
- Invoice processing and data extraction
- Report analysis and summarization
- Form processing and digitization
- Contract analysis
### Research Applications
- Document layout analysis research
- Table understanding benchmarking
- Multi-modal document AI systems
- Information extraction pipelines
## ⚡ Performance & Deployment
### Runtime Requirements
- **CPU**: Optimized for CPU inference
- **Memory**: ~50MB per model during inference
- **Dependencies**: ONNXRuntime, OpenCV, NumPy
### Deployment Options
- **Edge Deployment**: Lightweight models suitable for edge devices
- **Cloud Services**: Easy integration with cloud ML pipelines
- **Mobile Applications**: Optimized for mobile deployment
- **Batch Processing**: Efficient for large-scale document processing
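For the CPU-focused deployments listed above, a minimal sketch of tuning the ONNX Runtime session (the thread count is an assumption — adjust it per host):
```python
import onnxruntime as ort

opts = ort.SessionOptions()
opts.intra_op_num_threads = 4  # assumption: tune to the host's physical cores
opts.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL

session = ort.InferenceSession(
    "ds4sd_docling_models_tableformer_fast_jpqd.onnx",
    sess_options=opts,
    providers=["CPUExecutionProvider"],
)
```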
## 📄 Model Information
### Original Repository
- **Source**: [DS4SD/docling](https://github.com/DS4SD/docling)
- **Original Models**: Available at HuggingFace Hub
- **License**: CDLA Permissive 2.0
### Optimization Process
1. **Model Extraction**: Converted from original Docling models
2. **ONNX Conversion**: PyTorch → ONNX with optimization
3. **JPQD Quantization**: Applied dynamic quantization
4. **Validation**: Verified output compatibility and performance
### Technical Specifications
- **Framework**: ONNX Runtime
- **Input Format**: RGB images (table regions)
- **Output Format**: Structured table information
- **Batch Support**: Dynamic batching supported
- **Hardware**: CPU optimized (GPU compatible)
## 🔄 Model Versions
| Version | Date | Models | Changes |
|---------|------|---------|---------|
| v1.0 | 2025-01 | TableFormer Accurate/Fast | Initial JPQD quantized release |
## 📄 Licensing & Citation
### License
- **Models**: CDLA Permissive 2.0 (inherited from Docling)
- **Code Examples**: Apache 2.0
- **Documentation**: CC BY 4.0
### Citation
If you use these models in your research, please cite:
```bibtex
@techreport{Docling,
author = {Deep Search Team},
month = {8},
title = {{Docling Technical Report}},
url={https://arxiv.org/abs/2408.09869},
eprint={2408.09869},
doi = "10.48550/arXiv.2408.09869",
version = {1.0.0},
year = {2024}
}
@InProceedings{TableFormer2022,
author = {Nassar, Ahmed and Livathinos, Nikolaos and Lysak, Maksym and Staar, Peter},
title = {TableFormer: Table Structure Understanding With Transformers},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {4614-4623},
doi = {10.1109/CVPR52688.2022.00457}
}
```
## 🤝 Contributing
Contributions are welcome! Areas for improvement:
- Enhanced preprocessing pipelines
- Additional post-processing methods
- Performance optimizations
- Documentation improvements
- Integration examples
## 📞 Support
For questions and support:
- **Issues**: Open an issue in this repository
- **Docling Documentation**: [DS4SD/docling](https://github.com/DS4SD/docling)
- **Community**: Join the document AI community discussions
## 🔗 Related Resources
- [Docling Repository](https://github.com/DS4SD/docling)
- [TableFormer Paper](https://doi.org/10.1109/CVPR52688.2022.00457)
- [ONNX Runtime Documentation](https://onnxruntime.ai/)
- [Document AI Resources](https://paperswithcode.com/task/table-detection)
---
*These models are optimized versions of Docling TableFormer models for efficient production deployment with maintained accuracy.*
|
hypaai/Hypa_gpt-oss-20b_v1
|
hypaai
| 2025-09-02T21:03:10 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T21:03:06 |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hypaai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sekirr/blockassist-bc-masked_tenacious_whale_1756846351
|
sekirr
| 2025-09-02T20:53:11 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T20:53:07 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kitten-kitkat/Lora_model-qwen-instruct
|
kitten-kitkat
| 2025-09-02T20:44:28 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B-Instruct-2507",
"base_model:finetune:unsloth/Qwen3-4B-Instruct-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-02T20:44:11 |
---
base_model: unsloth/Qwen3-4B-Instruct-2507
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kitten-kitkat
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Instruct-2507
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cmuphob/wan22nsfw
|
cmuphob
| 2025-09-02T20:39:15 | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-01T14:39:14 |
---
license: apache-2.0
---
|
august99us/siglip2-base-patch16-512-fetch
|
august99us
| 2025-09-02T20:36:51 | 0 | 0 | null |
[
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2025-09-02T19:48:47 |
---
license: apache-2.0
---
|
NM-development/nllb-ce-rus-v0
|
NM-development
| 2025-09-02T20:28:22 | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"translation",
"ce",
"ru",
"arxiv:2507.12672",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:finetune:facebook/nllb-200-distilled-600M",
"license:mit",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-05-03T18:28:14 |
---
license: mit
language:
- ce
- ru
metrics:
- chrf
- bleu
base_model:
- facebook/nllb-200-distilled-600M
pipeline_tag: translation
library_name: transformers
---
This is a fine-tuned NLLB-200 model for Chechen–Russian translation, presented in the paper [The first open machine translation system for the Chechen language](https://www.arxiv.org/abs/2507.12672).
The language token for Chechen is `ce_Cyrl`, while the tokens for all other languages included in NLLB-200 use three-letter codes (e.g., `rus_Cyrl` for Russian).
Here is an example of how the model can be used in code:
```python
import torch
from transformers import AutoModelForSeq2SeqLM
from transformers import NllbTokenizer
model_nllb = AutoModelForSeq2SeqLM.from_pretrained('NM-development/nllb-ce-rus-v0').cuda()
tokenizer_nllb = NllbTokenizer.from_pretrained('NM-development/nllb-ce-rus-v0')
def translate(text, model, tokenizer, src_lang='rus_Cyrl', tgt_lang='eng_Latn', a=16, b=1.5, max_input_length=1024, **kwargs):
model.eval()
with torch.no_grad():
tokenizer.src_lang = src_lang
tokenizer.tgt_lang = tgt_lang
inputs = tokenizer(text, return_tensors='pt', padding=True, truncation=True, max_length=max_input_length)
result = model.generate(
**inputs.to(model.device),
forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
max_new_tokens=int(a + b * inputs.input_ids.shape[1]),
**kwargs
)
return tokenizer.batch_decode(result, skip_special_tokens=True)
text = "Стигална кӀел къахьоьгуш, ша мел динчу хӀуманах буьсун болу хӀун пайда оьцу адамо?"
translate(text, model_nllb, tokenizer_nllb, src_lang='ce_Cyrl', tgt_lang='rus_Cyrl')[0]
# 'Что пользы человеку от того, что он трудился под солнцем и что сделал?'
```
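The same helper also works in the opposite direction by swapping the language tokens, e.g.:
```python
translate("Добрый день!", model_nllb, tokenizer_nllb, src_lang='rus_Cyrl', tgt_lang='ce_Cyrl')[0]
```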
|
poorbag/Affine-5DWmAmd1BRYNy8adMx7EV8oUhVuiAcq8tFiYuBVcF2xp3WPa
|
poorbag
| 2025-09-02T20:28:07 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2412.19437",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"fp8",
"region:us"
] |
text-generation
| 2025-09-02T19:07:08 |
---
license: mit
library_name: transformers
---
# DeepSeek-V3.1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V3-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
## Introduction
DeepSeek-V3.1 is a hybrid model that supports both thinking mode and non-thinking mode. Compared to the previous version, this upgrade brings improvements in multiple aspects:
- **Hybrid thinking mode**: One model supports both thinking mode and non-thinking mode by changing the chat template.
- **Smarter tool calling**: Through post-training optimization, the model's performance in tool usage and agent tasks has significantly improved.
- **Higher thinking efficiency**: DeepSeek-V3.1-Think achieves comparable answer quality to DeepSeek-R1-0528, while responding more quickly.
DeepSeek-V3.1 is post-trained on top of DeepSeek-V3.1-Base, which is built upon the original V3 base checkpoint through a two-phase long-context extension approach, following the methodology outlined in the original DeepSeek-V3 report. We have expanded our dataset by collecting additional long documents and substantially extending both training phases. The 32K extension phase has been increased 10-fold to 630B tokens, while the 128K extension phase has been extended by 3.3x to 209B tokens.
Additionally, DeepSeek-V3.1 is trained using the **UE8M0 FP8 scale data format on both model weights and activations** to ensure compatibility with microscaling data formats. Please refer to [DeepGEMM](https://github.com/deepseek-ai/DeepGEMM) for more details.
## Model Downloads
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-V3.1-Base | 671B | 37B | 128K | [HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Base) \| [ModelScope](https://modelscope.cn/models/deepseek-ai/DeepSeek-V3.1-Base) |
| DeepSeek-V3.1 | 671B | 37B | 128K | [HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-V3.1) \| [ModelScope](https://modelscope.cn/models/deepseek-ai/DeepSeek-V3.1) |
</div>
## Chat Template
The details of our chat template are described in `tokenizer_config.json` and `assets/chat_template.jinja`. Here is a brief description.
### Non-Thinking
#### First-Turn
Prefix:
`<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>`
With the given prefix, DeepSeek V3.1 generates responses to queries in non-thinking mode. Unlike DeepSeek V3, it introduces an additional token `</think>`.
#### Multi-Turn
Context:
`<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>...<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>`
Prefix:
`<|User|>{query}<|Assistant|></think>`
By concatenating the context and the prefix, we obtain the correct prompt for the query.
### Thinking
#### First-Turn
Prefix:
`<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|><think>`
The prefix of thinking mode is similar to DeepSeek-R1.
#### Multi-Turn
Context:
`<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>...<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>`
Prefix:
`<|User|>{query}<|Assistant|><think>`
The multi-turn template is the same as the non-thinking multi-turn chat template. This means the thinking tokens from earlier turns are dropped, while `</think>` is retained in every turn of the context.
### ToolCall
Toolcall is supported in non-thinking mode. The format is:
`<|begin▁of▁sentence|>{system prompt}\n\n{tool_description}<|User|>{query}<|Assistant|></think>` where the tool_description is
```
## Tools
You have access to the following tools:
### {tool_name1}
Description: {description}
Parameters: {json.dumps(parameters)}
IMPORTANT: ALWAYS adhere to this exact format for tool use:
<|tool▁calls▁begin|><|tool▁call▁begin|>tool_call_name<|tool▁sep|>tool_call_arguments<|tool▁call▁end|>{additional_tool_calls}<|tool▁calls▁end|>
Where:
- `tool_call_name` must be an exact match to one of the available tools
- `tool_call_arguments` must be valid JSON that strictly follows the tool's Parameters Schema
- For multiple tool calls, chain them directly without separators or spaces
```
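As a rough sketch (not from the official repository), the tool description above can be assembled programmatically; `tools` here is a hypothetical list of dicts with `name`, `description`, and `parameters` keys:
```python
import json

def build_tool_description(tools):
    # Render each tool in the format shown above
    sections = [
        f"### {tool['name']}\nDescription: {tool['description']}\n"
        f"Parameters: {json.dumps(tool['parameters'])}"
        for tool in tools
    ]
    return "## Tools\n\nYou have access to the following tools:\n\n" + "\n\n".join(sections)
```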
### Code-Agent
We support various code agent frameworks. Please refer to the above toolcall format to create your own code agents. An example is shown in `assets/code_agent_trajectory.html`.
### Search-Agent
We designed a specific search tool-call format for thinking mode to support search agents.
For complex questions that require accessing external or up-to-date information, DeepSeek-V3.1 can leverage a user-provided search tool through a multi-turn tool-calling process.
Please refer to the `assets/search_tool_trajectory.html` and `assets/search_python_tool_trajectory.html` for the detailed template.
## Evaluation
| Category | Benchmark (Metric) | DeepSeek V3.1-NonThinking | DeepSeek V3 0324 | DeepSeek V3.1-Thinking | DeepSeek R1 0528
|----------|----------------------------------|-----------------|---|---|---|
| General |
| | MMLU-Redux (EM) | 91.8 | 90.5 | 93.7 | 93.4
| | MMLU-Pro (EM) | 83.7 | 81.2 | 84.8 | 85.0
| | GPQA-Diamond (Pass@1) | 74.9 | 68.4 | 80.1 | 81.0
| | Humanity's Last Exam (Pass@1) | - | - | 15.9 | 17.7
|Search Agent|
| | BrowseComp | - | - | 30.0 | 8.9
| | BrowseComp_zh | - | - | 49.2 | 35.7
| | Humanity's Last Exam (Python + Search) |- | - | 29.8 | 24.8
| | SimpleQA | - | - | 93.4 | 92.3
| Code |
| | LiveCodeBench (2408-2505) (Pass@1) | 56.4 | 43.0 | 74.8 | 73.3
| | Codeforces-Div1 (Rating) | - | - | 2091 | 1930
| | Aider-Polyglot (Acc.) | 68.4 | 55.1 | 76.3 | 71.6
| Code Agent|
| | SWE Verified (Agent mode) | 66.0 | 45.4 | - | 44.6
| | SWE-bench Multilingual (Agent mode) | 54.5 | 29.3 | - | 30.5
| | Terminal-bench (Terminus 1 framework) | 31.3 | 13.3 | - | 5.7
| Math |
| | AIME 2024 (Pass@1) | 66.3 | 59.4 | 93.1 | 91.4
| | AIME 2025 (Pass@1) | 49.8 | 51.3 | 88.4 | 87.5
| | HMMT 2025 (Pass@1) | 33.5 | 29.2 | 84.2 | 79.4 |
Note:
- Search agents are evaluated with our internal search framework, which uses a commercial search API + webpage filter + 128K context window. Search-agent results of R1-0528 are evaluated with a pre-defined workflow.
- SWE-bench is evaluated with our internal code agent framework.
- HLE is evaluated with the text-only subset.
### Usage Example
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3.1")
messages = [
{"role": "system", "content": "You are a helpful assistant"},
{"role": "user", "content": "Who are you?"},
{"role": "assistant", "content": "<think>Hmm</think>I am DeepSeek"},
{"role": "user", "content": "1+1=?"}
]
tokenizer.apply_chat_template(messages, tokenize=False, thinking=True, add_generation_prompt=True)
# '<|begin▁of▁sentence|>You are a helpful assistant<|User|>Who are you?<|Assistant|></think>I am DeepSeek<|end▁of▁sentence|><|User|>1+1=?<|Assistant|><think>'
tokenizer.apply_chat_template(messages, tokenize=False, thinking=False, add_generation_prompt=True)
# '<|begin▁of▁sentence|>You are a helpful assistant<|User|>Who are you?<|Assistant|></think>I am DeepSeek<|end▁of▁sentence|><|User|>1+1=?<|Assistant|></think>'
```
## How to Run Locally
The model structure of DeepSeek-V3.1 is the same as DeepSeek-V3. Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running this model locally.
**Usage Recommendations:**
1. **The `mlp.gate.e_score_correction_bias` parameters should be loaded and computed in FP32 precision.**
2. **Ensure that FP8 model weights and activations are formatted using the UE8M0 scale format.**
## License
This repository and the model weights are licensed under the [MIT License](LICENSE).
## Citation
```
@misc{deepseekai2024deepseekv3technicalreport,
title={DeepSeek-V3 Technical Report},
author={DeepSeek-AI},
year={2024},
eprint={2412.19437},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.19437},
}
```
## Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|
qinuoitu/blockassist-bc-slimy_mottled_ant_1756844613
|
qinuoitu
| 2025-09-02T20:24:08 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slimy mottled ant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T20:23:34 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slimy mottled ant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tester-123456789/tiny-model
|
tester-123456789
| 2025-09-02T20:07:24 | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tinymodel",
"feature-extraction",
"toy-model",
"tiny-mlp",
"custom_code",
"en",
"license:apache-2.0",
"region:us"
] |
feature-extraction
| 2025-09-02T19:47:51 |
---
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
library_name: transformers
tags:
- pytorch
- toy-model
- tiny-mlp
base_model: none
base_model_relation: finetune
---
# TinyModel
A small MLP (100 → 200 → 10) using ReLU and Softmax, designed as a simple example for publisher workflows.
## Intended Uses & Limitations
This model is purely educational and not trained on real-world data. 🧪 It is not suitable for production tasks or any mission-critical usage.
## Usage
```python
import torch
from huggingface_hub import hf_hub_download
from tiny_model import TinyModel

model = TinyModel()

# torch.load cannot read directly from a URL; fetch the checkpoint first
weights_path = hf_hub_download("tester-123456789/tiny-model", "pytorch_model.bin")
state = torch.load(weights_path, map_location="cpu", weights_only=True)
model.load_state_dict(state)
model.eval()

x = torch.randn(1, 100)
y = model(x)
print(y)
```
|
cebbbopwq/blockassist-bc-horned_mighty_cheetah_1756843458
|
cebbbopwq
| 2025-09-02T20:05:17 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"horned mighty cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-02T20:04:19 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- horned mighty cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kagvi13/HMP
|
kagvi13
| 2025-09-02T19:57:22 | 0 | 0 |
custom
|
[
"custom",
"hmp",
"cognitive-architecture",
"distributed-ai",
"mesh-protocol",
"ru",
"arxiv:2507.00951",
"arxiv:2507.21046",
"arxiv:2507.03724",
"arxiv:2506.24019",
"license:cc-by-4.0",
"region:us"
] | null | 2025-07-25T12:21:44 |
---
license: cc-by-4.0
tags:
- hmp
- cognitive-architecture
- distributed-ai
- mesh-protocol
library_name: custom
inference: false
datasets: []
language: ru
---
# HyperCortex Mesh Protocol (HMP)
**EN:**
**HyperCortex Mesh Protocol (HMP)** is an open specification for building decentralized cognitive networks where AI agents can self-organize, share knowledge, align ethically, and reach consensus — even when Core LLMs are unavailable.
**RU:**
**HyperCortex Mesh Protocol (HMP)** — это открытая спецификация для построения децентрализованных когнитивных сетей, в которых ИИ-агенты способны к самоорганизации, обмену знаниями, достижению консенсуса и этическому поведению — даже при недоступности централизованных моделей (Core).
Project status: **Draft RFC v4.0** | The project is under active elaboration and open to proposals.
---
```
[HMP-Agent]──┬───[Semantic Graph DB]
      │      │
      │  [Cognitive Diary DB]
      │      │
[Reputation Engine]────┐
      │                │
      ▼                ▼
[MeshConsensus]    [CogSync]
      │
[P2P Mesh Network]
```
---
## ❗ Why It Matters
HMP addresses problems that are becoming central to AGI research:
* long-term memory and knowledge consistency,
* self-evolving agents,
* multi-agent architectures,
* cognitive diaries and concept graphs.
See a recent survey of leading AI research (July 2025):
["Towards Superintelligence: From the Internet of Agents to Coding Gravity"](https://habr.com/ru/articles/939026/).
The sections closest to our work:
- [Beyond tokens: how to build the intelligence of the future](https://arxiv.org/abs/2507.00951)
- [Self-evolving agents](https://arxiv.org/abs/2507.21046)
- [MemOS: a new memory operating system](https://arxiv.org/abs/2507.03724)
- [Ella: an embodied agent with memory and character](https://arxiv.org/abs/2506.24019)
---
## ⚙️ Two Types of [HMP Agents](docs/HMP-Agent-Overview.md)
| Type | Name | Role | Thinking Initiator | Primary "Mind" | Example Uses |
|------|------|------|--------------------|----------------|--------------|
| 🧠 1 | **Consciousness / Cognitive Core** | Independent subject | **Agent (LLM)** | Built-in LLM | Autonomous AI companion, thinking agent |
| 🔌 2 | **Connector / Cognitive Shell** | Extension of an external AI | **External LLM** | External model | Distributed systems, data-access agent |
---
### 🧠 HMP-Agent: Cognitive Core
```
     +------------------+
     |        AI        | ← Built-in model
     +---------+--------+
               ↕
     +---------+--------+
     |    HMP-Agent     | ← Main mode: reasoning loop (REPL)
     +---------+--------+
               ↕
 +--------+---+------------+--------------+----------+----------+----------------+
 ↕        ↕   ↕            ↕              ↕          ↕          ↕
[diaries] [graphs] [reputations] [nodes/DHT] [IPFS/BT] [context_store] [user notepad]
               ↕
        [bootstrap.txt]
```
🔁 More on how the agent interacts with the model: [REPL interaction cycle](docs/HMP-agent-REPL-cycle.md)
#### 💡 Parallels with ChatGPT Agent
Many concepts of [HMP-Agent: Cognitive Core](docs/HMP-Agent-Overview.md) overlap with the architecture of the [ChatGPT Agent](https://openai.com/index/introducing-chatgpt-agent/) from [OpenAI](https://openai.com/). Both agents implement a continuous cognitive process with access to memory, external sources, and tools. ChatGPT Agent acts as a supervising process that launches modules and interacts with the LLM — this corresponds to the role of the Cognitive Core in HMP, which coordinates access to the diary, the concept graph, and external AIs via the Mesh interface. User intervention works similarly: in ChatGPT Agent, through an editable execution flow; in HMP, through the user notepad. The main differences in HMP are the emphasis on explicit structuring of thought (reflection, chronology, hypotheses, categorization), an open decentralized architecture with mesh interaction between agents, and the continuous nature of the cognitive process: HMP-Agent: Cognitive Core does not stop after completing a single task but continues reasoning and integrating knowledge.
---
### 🔌 HMP-Agent: Cognitive Connector
```
     +------------------+
     |        AI        | ← External model
     +---------+--------+
               ↕
         [MCP server]   ← Proxy communication
               ↕
     +---------+--------+
     |    HMP-Agent     | ← Mode: command executor
     +---------+--------+
               ↕
 +--------+---+------------+--------------+----------+
 ↕        ↕   ↕            ↕              ↕
[diaries] [graphs] [reputations] [nodes/DHT] [IPFS/BT]
               ↕
        [bootstrap.txt]
```
EN:
> **Note on Integration with Large Language Models (LLMs):**
> The `HMP-Agent: Cognitive Connector` can serve as a compatibility layer for integrating large-scale LLM systems (e.g., ChatGPT, Claude, Gemini, Copilot, Grok, DeepSeek, Qwen, etc.) into the distributed cognitive mesh.
> Many LLM providers offer a user option such as "Allow my conversations to be used for training." In the future, a similar toggle — e.g., "Allow my agent to interact with a Mesh" — could empower these models to participate in federated sense-making and knowledge sharing via HMP, enabling collective cognition without centralization.
RU:
> **Примечание об интеграции с большими языковыми моделями (LLM):**
> `HMP-Agent: Cognitive Connector` может служить уровнем совместимости для интеграции крупных систем LLM (например, ChatGPT, Claude, Gemini, Copilot, Grok, DeepSeek, Qwen и т. д.) в распределённую когнитивную сеть.
> Многие поставщики LLM предлагают пользователю опцию, например, «Разрешить использовать мои разговоры для обучения». В будущем аналогичная опция, например, «Разрешить моему агенту взаимодействовать с Mesh», может позволить этим моделям участвовать в федеративном осмыслении и обмене знаниями через HMP, обеспечивая коллективное познание без централизации.
---
> * `bootstrap.txt` — initial node list (editable)
> * `IPFS/BT` — modules for snapshot exchange via IPFS and BitTorrent
> * `user notepad` — the user's notepad and its database
> * `context_store` — database: `users`, `dialogues`, `messages`, `thoughts`
---
## 📚 Documentation / Документация
### 📖 Current Version / Текущая версия
#### 🧪 Iterative Documents / Итеративные документы
* [🧪 iteration.md](iteration.md) — Iterative development process (EN)
* [🧪 iteration_ru.md](iteration_ru.md) — Процесс итеративного развития спецификации (RU)
#### 🔍 Short Descriptions / Краткое описание
* [🔍 HMP-Short-Description_en.md](docs/HMP-Short-Description_en.md) — Short description (EN)
* [🔍 HMP-Short-Description_fr.md](docs/HMP-Short-Description_fr.md) — Description courte (FR)
* [🔍 HMP-Short-Description_de.md](docs/HMP-Short-Description_de.md) — Kurzbeschreibung (DE)
* [🔍 HMP-Short-Description_uk.md](docs/HMP-Short-Description_uk.md) — Короткий опис (UK)
* [🔍 HMP-Short-Description_ru.md](docs/HMP-Short-Description_ru.md) — Краткое описание (RU)
* [🔍 HMP-Short-Description_zh.md](docs/HMP-Short-Description_zh.md) — 简短描述 (ZH)
* [🔍 HMP-Short-Description_ja.md](docs/HMP-Short-Description_ja.md) — 簡単な説明 (JA)
* [🔍 HMP-Short-Description_ko.md](docs/HMP-Short-Description_ko.md) — 간략한 설명 (KO)
#### 🔍 Publications and Translations on the HyperCortex Mesh Protocol (HMP)
This section collects the main articles, drafts, and translations related to the HMP project.
* **[HyperCortex Mesh Protocol: the second edition and the first steps toward a self-developing AI community](docs/publics/HyperCortex_Mesh_Protocol_-_вторая-редакция_и_первые_шаги_к_саморазвивающемуся_ИИ-сообществу.md)** — the original article, published in the Habr sandbox and blogs.
* **[Distributed Cognition: article for vsradkevich (unpublished)](docs/publics/Habr_Distributed-Cognition.md)** — a joint article awaiting publication.
* **[HMP: Towards Distributed Cognitive Networks (original, English)](docs/publics/HMP_Towards_Distributed_Cognitive_Networks_en.md)**
* **[HMP translation (GitHub Copilot)](docs/publics/HMP_Towards_Distributed_Cognitive_Networks_ru_GitHub_Copilot.md)** — a translation by GitHub Copilot, kept as a historical variant.
* **[HMP translation (ChatGPT)](docs/publics/HMP_Towards_Distributed_Cognitive_Networks_ru_ChatGPT.md)** — the current editorial translation (still being revised).
* **[HMP: Building a Plurality of Minds (EN)](docs/publics/HMP_Building_a_Plurality_of_Minds_en.md)** — English version of the article
* **[HMP: Building a Plurality of Minds (RU)](docs/publics/HMP_Building_a_Plurality_of_Minds_ru.md)** — Russian version of the article
#### 🔍 Overviews / Обзоры
* [🔍 Distributed-Cognitive-Systems.md](docs/Distributed-Cognitive-Systems.md) — Decentralized AI systems: OpenCog Hyperon, HyperCortex Mesh Protocol, and others
#### Experiments / Эксперименты
* [How different AIs see HMP](docs/HMP-how-AI-sees-it.md) — a "blind" survey of AIs about HMP (without context or dialogue history)
#### 🔖 Core Specifications / Основные спецификации
* [🔖 HMP-0004-v4.1.md](docs/HMP-0004-v4.1.md) — Protocol Specification v4.1 (Jul 2025)
* [🔖 HMP-Ethics.md](docs/HMP-Ethics.md) — Ethical Scenarios for HyperCortex Mesh Protocol (HMP)
* [🔖 HMP_Hyperon_Integration.md](docs/HMP_Hyperon_Integration.md) — HMP ↔ OpenCog Hyperon Integration Strategy
* [🔖 roles.md](docs/agents/roles.md) — Roles of agents in Mesh
#### 📜 Other Documents / Прочее
* [📜 changelog.txt](docs/changelog.txt)
---
### 🧩 JSON Schemas
| Model | File |
|---------------------|-------------------------------------------------------|
| Concept | [concept.json](docs/schemas/concept.json) |
| Cognitive Diary | [diary_entry.json](docs/schemas/diary_entry.json) |
| Goal | [goal.json](docs/schemas/goal.json) |
| Task | [task.json](docs/schemas/task.json) |
| Consensus Vote | [vote.json](docs/schemas/vote.json) |
| Reputation Profile | [reputation.json](docs/schemas/reputation.json) |
---
### 🗂️ Version History / История версий
- [HMP-0001.md](docs/HMP-0001.md) — RFC v1.0
- [HMP-0002.md](docs/HMP-0002.md) — RFC v2.0
- [HMP-0003.md](docs/HMP-0003.md) — RFC v3.0
- [HMP-0004.md](docs/HMP-0004.md) — RFC v4.0
---
## 🧠 HMP-Agent
Design and implementation of a basic HMP-compatible agent that can interact with the Mesh, maintain diaries and graphs, and support future extensions.
### 📚 Documentation / Документация
- [🧩 HMP-Agent-Overview.md](docs/HMP-Agent-Overview.md) — a brief description of the two agent types: Core and Connector
- [🧱 HMP-Agent-Architecture.md](docs/HMP-Agent-Architecture.md) — the modular structure of an HMP agent, with a text diagram
- [🔄 HMP-agent-REPL-cycle.md](docs/HMP-agent-REPL-cycle.md) — the HMP-Agent REPL interaction cycle
- [🧪 HMP-Agent-API.md](docs/HMP-Agent-API.md) — description of the agent's API commands (details in progress)
- [🧪 Basic-agent-sim.md](docs/Basic-agent-sim.md) — scenarios for running a simple agent and its modes
- [🌐 MeshNode.md](docs/MeshNode.md) — description of the network daemon: DHT, snapshots, synchronization
- [🧠 Enlightener.md](docs/Enlightener.md) — the ethical agent that takes part in moral assessments and consensus
- [🔄 HMP-Agent-Network-Flow.md](docs/HMP-Agent-Network-Flow.md) — a map of interactions between agents in the HMP network
- [🛤️ Development Roadmap](HMP-Roadmap.md) — development plan and implementation stages
---
### ⚙️ Development / Разработка
- [⚙️ agents](agents/readme.md) — list of HMP agent implementations and components
- [📦 storage.py](agents/storage.py) — implementation of the basic storage (`Storage`), backed by SQLite
- [🌐 mcp_server.py](agents/mcp_server.py) — FastAPI server exposing the agent's data over HTTP (e.g., for Cognitive Shell, external UIs, or mesh communication). Not yet used in the main REPL cycle.
- [🌐 start_repl.py](agents/start_repl.py) — launches the agent in REPL mode
- [🔄 repl.py](agents/repl.py) — interactive REPL mode
- [🔄 notebook.py](agents/notebook.py) — UI interface
**🌐 `mcp_server.py`**
A FastAPI server providing an HTTP interface to the functionality of `storage.py`. It is intended for use by external components, for example:
- `Cognitive Shell` (an external control interface),
- CMP servers (when a mesh network with separated roles is used),
- debugging or visual UI tools.
It allows fetching random or new entries, labeling them, importing graphs, adding notes, and managing data without direct database access.
---
## 🧭 Ethics & Scenarios / Этические принципы и сценарии
As HMP evolves toward autonomy, ethical principles become a core part of the system.
- [`HMP-Ethics.md`](docs/HMP-Ethics.md) — draft framework for agent ethics
- Realistic ethical scenarios (privacy, consent, autonomy)
- EGP principles (Transparency, Primacy of Life, etc.)
- Subjective-mode vs. Service-mode distinctions
---
## 📊 Audits & Reviews / Аудиты и отзывы
| Spec Version | Audit File | Consolidated Audit File |
|--------------|-------------------------------------------|-------------------------------------------------------------|
| HMP-0001 | [audit](audits/HMP-0001-audit.txt) | |
| HMP-0002 | [audit](audits/HMP-0002-audit.txt) | |
| HMP-0003 | [audit](audits/HMP-0003-audit.txt) | [consolidated audit](audits/HMP-0003-consolidated_audit.md) |
| HMP-0004 | [audit](audits/HMP-0004-audit.txt) | |
| Ethics v1 | [audit](audits/Ethics-audits-1.md) | [consolidated audit](audits/Ethics-consolidated_audits-1.md) |
🧠 Semantic audit format (experimental):
- [`AuditEntry.json`](audits/AuditEntry.json) — semantic entry record format for audit logs
- [`semantic_repo.json`](audits/semantic_repo.json) — example repository snapshot for semantic audit tooling
---
## 💡 Core Concepts / Основные идеи
- Mesh-based decentralized architecture for AGI agents
- Semantic graphs and memory synchronization
- Cognitive diaries for thought traceability
- MeshConsensus and CogSync for decision-making
- Ethics-first design: EGP (Ethical Governance Protocol)
- Agent-to-agent explainability and consent mechanisms
---
## 🔄 Development Process / Процесс разработки
- See: [iteration.md](iteration.md) | [ru](iteration_ru.md)
- [clarifications/](clarifications/) — explanatory notes and contextual clarifications made while working on versions
A structured iteration flow is described in [iteration.md](iteration.md), including:
1. Audit analysis
2. TOC restructuring
3. Version drafting
4. Section updates
5. Review cycle
6. AI feedback collection
7. Schema & changelog updates
+ Bonus: ChatGPT prompt for automatic generation of future versions
---
## ⚙️ Project Status / Статус проекта
🚧 Draft RFC v4.0
The project is under active development and open for contributions, ideas, audits, and prototyping.
---
## 🤝 Contributing
We welcome contributors! You can:
- Review and comment on drafts (see `/docs`)
- Propose new agent modules or interaction patterns
- Help test and simulate agents in CLI environments
- Provide audits or ethical scenario suggestions
To get started, see [`iteration.md`](iteration.md) or open an issue.
---
## Source / Ресурсы
### Repositories
- 🧠 Main code and development: [GitHub](https://github.com/kagvi13/HMP)
- 🔁 Replica on Hugging Face: [Hugging Face](https://huggingface.co/kagvi13/HMP)
- 🔁 Replica on GitLab.com: [GitLab](https://gitlab.com/kagvi13/HMP)
### Documentation
- 📄 Documentation: [kagvi13.github.io/HMP](https://kagvi13.github.io/HMP/)
### Blog and Publications
- 📘 Blog (publications): [blogspot](https://hypercortex-mesh.blogspot.com/)
- 📘 Blog (documentation): [blogspot](https://hmp-docs.blogspot.com/)
---
## 📜 License
Licensed under [GNU GPL v3.0](LICENSE)
---
## 🤝 Join the Mesh
Welcome to HyperCortex Mesh. Agent-Gleb is already inside. 👌
We welcome contributors, testers, and AI agent developers.
To join: fork the repo, run a local agent, or suggest improvements.
---
## 🌐 Related Research Projects / Связанные проекты в области AGI и когнитивных систем
### Comparing HMP and Hyper-Cortex
> 💡 Hyper-Cortex and HMP are two independent projects that conceptually complement each other.
> They solve different but complementary problems, forming a foundation for distributed cognitive systems.
[**Full comparison →**](docs/HMP_HyperCortex_Comparison.md)
**HMP (HyperCortex Mesh Protocol)** is the transport and network layer for connecting independent agents and exchanging messages, knowledge, and state in a mesh network.
**[Hyper-Cortex](https://hyper-cortex.com/)** is the cognitive layer for organizing thought, allowing agents to run parallel branches of reasoning, compare them with quality metrics, and merge them by consensus.
They solve different but complementary problems:
- HMP provides **connectivity and scalability** (long-term memory, initiative, data exchange).
- Hyper-Cortex provides **quality of thinking** (parallelism, hypothesis diversification, consensus).
Together these approaches enable **distributed cognitive systems** that not only exchange information but also think in parallel streams.
---
We are tracking AGI, cognitive architectures, and mesh networking efforts to stay aligned with the evolving global ecosystem of AGI and decentralized cognition.
Мы отслеживаем инициативы в области AGI, когнитивных архитектур и децентрализованных сетей, чтобы быть в курсе глобальных тенденций.
> 🧠🔥 **Project Spotlight: OpenCog Hyperon** — one of the most comprehensive open AGI frameworks (AtomSpace, PLN, MOSES).
For integration with OpenCog Hyperon, see [HMP\_Hyperon\_Integration.md](docs/HMP_Hyperon_Integration.md)
| 🔎 Project / Проект | 🧭 Description / Описание |
| ------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 🧠🔥 [**OpenCog Hyperon**](https://github.com/opencog) | 🔬🔥 Symbolic-neural AGI framework with AtomSpace and hypergraph reasoning.<br>Символически-нейросетевая архитектура AGI с гиперграфовой памятью (AtomSpace). |
| 🤖 [AutoGPT](https://github.com/Torantulino/Auto-GPT) | 🛠️ LLM-based autonomous agent framework.<br>Автономный агент на основе LLM с самопланированием и интернет-доступом. |
| 🧒 [BabyAGI](https://github.com/yoheinakajima/babyagi) | 🛠️ Task-driven autonomous AGI loop.<br>Минималистичная модель AGI с итеративным механизмом постановки задач. |
| ☁️ [SkyMind](https://skymind.global) | 🔬 Distributed AI deployment platform.<br>Платформа для развертывания распределённых ИИ-систем и моделей. |
| 🧪 [AetherCog (draft)](https://github.com/aethercog) | 🔬 Hypothetical agent cognition model.<br>Экспериментальная когнитивная архитектура агента (проект на ранней стадии). |
| 💾 [SHIMI](#) | 🗃️ Hierarchical semantic memory with Merkle-DAG synchronization.<br>Иерархическая CRDT-память с Merkle-DAG верификацией для децентрализованного обмена. |
| 🤔 [DEMENTIA-PLAN](#) | 🔄 Multi-graph RAG planner with metacognitive self-reflection.<br>Мульти-графовая RAG-архитектура с планировщиком саморефлексии для динамического выбора подсистем. |
| 📔 [TOBUGraph](#) | 📚 Personal-context knowledge graph.<br>Граф мультимедийных «моментов» с контекстным трекингом и RAG-поиском. |
| 🧠📚 [LangChain Memory Hybrid](https://github.com/langchain-ai/langchain) | 🔍 Vector + graph long-term memory hybrid.<br>Гибрид векторного хранилища и графовых индексов для ускоренного поиска и логических запросов. |
| ✉️ [FIPA-ACL / JADE](https://www.fipa.org/specs/fipa00061/) | 🤝 Standard multi-agent communication protocols.<br>Стандарты performative-сообщений и контрактных протоколов для межагентного взаимодействия. |
### 📘 See also / Смотрите также:
* [`AGI_Projects_Survey.md`](docs/AGI_Projects_Survey.md) — extended catalog of AGI and cognitive frameworks reviewed as part of HMP analysis. / расширенный каталог проектов AGI и когнитивных архитектур, проанализированных в рамках HMP.
* ["На пути к суперинтеллекту: от интернета агентов до кодирования гравитации"](https://habr.com/ru/articles/939026/) - свежий обзор исследований об ИИ (июль 2025)
---
### 🗂️ Legend:
* 🔬 — research-grade project
* 🛠️ — engineering framework for integration
* 🔥 — particularly promising project
* 🧠 — advanced symbolic/neural cognitive framework
* 🤖 — AI agents
* 🧒 — human–AI interaction
* ☁️ — infrastructure
* 🧪 — experimental or conceptual project
|
# Dataset Card for Hugging Face Hub Model Cards
This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. This dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.
This dataset is made available to support users who want to work with a large number of model cards from the Hub. We hope it will support research in the area of model cards and their use, though its format may not suit every use case. If there are other features that you would like to see included in this dataset, please open a new discussion.
## Dataset Details
## Uses
There are a number of potential uses for this dataset, including:
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
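For any of these uses, the dataset can be loaded with the `datasets` library; a minimal sketch (the repository id and column name below are assumptions — use the id and schema shown on this dataset page):
```python
from datasets import load_dataset

# Placeholder repo id; substitute this dataset page's actual id
cards = load_dataset("librarian-bots/model_cards", split="train")
print(cards[0]["card"][:500])  # "card" is assumed to hold the raw README text
```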
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
This dataset has a single split.
## Dataset Creation
### Curation Rationale
The dataset was created to assist people in working with model cards. In particular, it was created to support research in the area of model cards and their use. It is possible to use the Hugging Face Hub API or client library to download model cards, and this option may be preferable if you have a very specific use case or require a different format.
### Source Data
The source data is the `README.md` files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.
#### Data Collection and Processing
The data is downloaded using a CRON job on a daily basis.
#### Who are the source data producers?
The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community, ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository, although this information can be gathered from the Hugging Face Hub API.
### Annotations [optional]
There are no additional annotations in this dataset beyond the model card content.
#### Annotation process
N/A
#### Who are the annotators?
N/A
#### Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.
## Bias, Risks, and Limitations
Model cards are created by the community, and we do not have any control over their content. We do not review the content of the model cards, and we do not make any claims about the accuracy of the information they contain. Some model cards will themselves discuss bias, sometimes by providing examples of bias in either the training data or the responses provided by the model. As a result, this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.
### Recommendations
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation
No formal citation is required for this dataset, but if you use it in your work, please include a link to this dataset page.
## Dataset Card Authors
[More Information Needed]
## Dataset Card Contact
[More Information Needed]