| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-04 18:27:43) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 539 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-04 18:27:26) | card (string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
| coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1757004400 | coelacanthxyz | 2025-09-04T17:16:40Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T17:16:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| russellyq/Qwen2.5-VL-7B-Instruct-Med-SFT-1e | russellyq | 2025-09-04T16:42:08Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2_5_vl", "image-to-text", "llama-factory", "full", "generated_from_trainer", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "license:other", "text-generation-inference", "endpoints_compatible", "region:us"] | image-to-text | 2025-09-04T16:31:40Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-VL-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2_5vl-7b-1e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2_5vl-7b-1e
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on the Med-R1-SFT-add-all dataset.
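The card does not yet include usage instructions; the snippet below is a minimal, hedged inference sketch rather than the authors' recipe. It assumes a recent 🤗 Transformers release with Qwen2.5-VL support, and `scan.png` and the prompt are placeholders.
```python
# Minimal inference sketch (untested); assumes transformers >= 4.49 with Qwen2.5-VL support.
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "russellyq/Qwen2.5-VL-7B-Instruct-Med-SFT-1e"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# "scan.png" is a placeholder path; the prompt is illustrative only.
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe the findings in this image."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = processor(text=[prompt], images=[Image.open("scan.png")], return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```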
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
| matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1757002556 | matherchodhuuu | 2025-09-04T16:16:57Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lightfooted skilled chameleon", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T16:16:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| JayRay5/DIVEdoc_ffpos_beg | JayRay5 | 2025-09-04T16:13:38Z | 0 | 0 | transformers | ["transformers", "safetensors", "DIVEdoc", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-04T16:10:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
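No snippet is provided yet. Since the repo is tagged with a custom `DIVEdoc` architecture, the generic sketch below is an untested placeholder; it assumes the repository ships auto-mappable custom modeling code (hence `trust_remote_code=True`).
```python
# Untested sketch; assumes the repo registers its custom DIVEdoc classes for the Auto API.
from transformers import AutoConfig, AutoModel

repo_id = "JayRay5/DIVEdoc_ffpos_beg"
config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
print(config)
```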
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| Rudra-madlads/blockassist-bc-jumping_swift_gazelle_1757001448 | Rudra-madlads | 2025-09-04T15:58:25Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "jumping swift gazelle", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T15:58:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- jumping swift gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| aidigitalqueen/lunascoop-lora | aidigitalqueen | 2025-09-04T15:51:04Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "flux", "text-to-image", "lora", "fal", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-09-04T15:42:53Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: lunascoop
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# lunascoop lora
<Gallery />
## Model description
LoRA of my avatar Lunascoop (trained on fal.ai)
## Trigger words
You should use `lunascoop` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/aidigitalqueen/lunascoop-lora/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
| Trelis/Qwen3-4B_ds-arc-agi-1-partial-100-c1542_ds-arc-agi-1-refinement-finetuning-c81 | Trelis | 2025-09-04T15:50:19Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:Trelis/Qwen3-4B_ds-arc-agi-1-partial-100-c1542", "base_model:finetune:Trelis/Qwen3-4B_ds-arc-agi-1-partial-100-c1542", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-09-04T15:49:18Z |
---
base_model: Trelis/Qwen3-4B_ds-arc-agi-1-partial-100-c1542
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Trelis
- **License:** apache-2.0
- **Finetuned from model:** Trelis/Qwen3-4B_ds-arc-agi-1-partial-100-c1542
This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
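The card does not include an inference example; the following is a minimal, hedged sketch using the standard 🤗 Transformers causal-LM interface for Qwen3 checkpoints (the prompt is illustrative only).
```python
# Minimal inference sketch; assumes a recent transformers release with Qwen3 support.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Trelis/Qwen3-4B_ds-arc-agi-1-partial-100-c1542_ds-arc-agi-1-refinement-finetuning-c81"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt only; the repo name suggests ARC-AGI-style refinement fine-tuning.
messages = [{"role": "user", "content": "Explain the transformation rule in this puzzle."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```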
|
| matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1757000709 | matherchodhuuu | 2025-09-04T15:46:21Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lightfooted skilled chameleon", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T15:46:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| aleebaster/blockassist-bc-sly_eager_boar_1756996605 | aleebaster | 2025-09-04T15:03:25Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T15:03:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1756995690 | coelacanthxyz | 2025-09-04T14:49:49Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T14:49:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| Rootu/blockassist-bc-snorting_fleecy_goose_1756997033 | Rootu | 2025-09-04T14:44:35Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "snorting fleecy goose", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T14:44:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting fleecy goose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| martibosch/milliontrees-detr-belem | martibosch | 2025-09-04T14:37:33Z | 19 | 0 | null | ["safetensors", "deformable_detr", "license:gpl-3.0", "region:us"] | null | 2025-09-03T08:13:34Z |
---
license: gpl-3.0
---
# Fine-tuned milliontrees-detr model in Belem, Brazil
Fine-tuned [joshvm/milliontrees-detr](https://huggingface.co/joshvm/milliontrees-detr) using 860 annotations from Belem, Brazil.
## Metrics
| Model | Precision | Recall | F1-score |
|----------------|------------|------------|--------------|
| Pre-trained | 0.2798 | 0.1680 | 0.2099 |
| **Fine-tuned** | **0.7797** | **0.7082** | **0.7422** |
## Instructions to run
```python
from deepforest import main
config_args = {
"model": {"name": "martibosch/milliontrees-detr-belem"},
"score_thresh": 0.25,
"architecture": "DeformableDetr"
}
model = main.deepforest(config_args=config_args)
```
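Once the model is loaded, predictions can be run with deepforest's standard entry points; the snippet below is a usage sketch (the image path is a placeholder).
```python
# Usage sketch: predict tree crowns on a single image (path is a placeholder).
boxes = model.predict_image(path="belem_tile.png")
print(boxes.head())  # bounding boxes with confidence scores
```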
|
| sovthpaw/senter-omni-model | sovthpaw | 2025-09-04T14:35:55Z | 0 | 1 | null | ["safetensors", "any-to-any", "en", "dataset:sovthpaw/senter-omni-data", "base_model:Qwen/Qwen2.5-Omni-3B", "base_model:finetune:Qwen/Qwen2.5-Omni-3B", "license:apache-2.0", "region:us"] | any-to-any | 2025-09-04T02:59:03Z |
---
license: apache-2.0
datasets:
- sovthpaw/senter-omni-data
language:
- en
base_model:
- Qwen/Qwen2.5-Omni-3B
pipeline_tag: any-to-any
---
<div align="center">

🤘🤖
</div>
**🎯 ONE MODEL, ALL MODALITIES, CHAT & EMBED** - Unlike pipeline approaches, Senter-Omni is a single 4B parameter model that truly understands and reasons across text, images, audio, and video simultaneously.
**🔓 OPEN & UNCENSORED** - Apache 2.0 licensed with unrestricted responses for maximum utility.
**🧠 128K CONTEXT** - Extended RoPE scaling for handling massive documents and conversations.
**💾 MEMORY EFFICIENT** - 4-bit quantized model that fits on consumer GPUs while maintaining full multimodal capabilities.
---
## 🚀 **Quick Start**
### **Installation**
```bash
git clone https://github.com/SouthpawIN/senter-omni.git
cd senter-omni
pip install -r requirements.txt
# Download the quantized model (instructions below)
# Then run the demo:
python senter_omni_demo.py
```
### **Basic Usage**
```python
from omni import OmniClient
# Initialize Senter-Omni
client = OmniClient()
# Streaming chat
response = client.chat([
{"role": "user", "content": "Hello Senter!"}
], stream=True)
# Multimodal chat with image
response = client.chat([
{"role": "user", "content": [
{"type": "image", "image": "photo.jpg"},
{"type": "text", "text": "What do you see?"}
]}
])
# Cross-modal embeddings
embedding = client.embed("any content", modality="auto")
```
---
## 🎭 **Multimodal Capabilities**
### **Text Understanding & Generation**
- **Mathematical Reasoning**: Step-by-step problem solving
- **Code Generation**: Python, JavaScript, and more
- **Creative Writing**: Stories, scripts, poetry
- **Technical Analysis**: Complex explanations and documentation
### **Visual Understanding**
- **Image Analysis**: Detailed descriptions of visual content
- **Geometric Recognition**: Shapes, colors, spatial relationships
- **Creative Interpretation**: Stories inspired by images
- **Technical Diagrams**: Understanding charts, graphs, schematics
### **Audio Processing**
- **Sound Analysis**: Identifying audio content and patterns
- **Speech Understanding**: Transcribing and interpreting spoken content
- **Music Analysis**: Recognizing musical elements and genres
- **Environmental Audio**: Identifying sounds from various sources
### **Cross-Modal Reasoning**
- **Unified Understanding**: Connecting information across modalities
- **Contextual Analysis**: Using multiple inputs for better reasoning
- **Creative Synthesis**: Combining visual, audio, and text for rich responses
### **Model Specifications**
- **Parameters**: 4B (quantized to 4-bit)
- **Context Length**: 128K tokens (RoPE scaled)
- **Memory Usage**: ~8GB VRAM
- **Inference Speed**: Real-time streaming
- **Modalities**: Text, Image, Audio, Video
### **Embedding Capabilities**
- **Unified Space**: 1024D embeddings for all modalities
- **Cross-Modal Search**: Find similar content across text, images, audio
- **Similarity Matching**: Cosine similarity in unified space
- **Memory Efficient**: Same model for chat and embeddings
---
## 🎯 **Real Examples**
### **Image Analysis**
```python
# Analyze geometric shapes
response = client.chat([
{"role": "user", "content": [
{"type": "image", "image": "test_assets/real_test_image.jpg"},
{"type": "text", "text": "What geometric shapes do you see?"}
]}
])
# Output: "I see a red square, blue square, and green oval arranged vertically"
```
### **Audio Understanding**
```python
# Process audio content
response = client.chat([
{"role": "user", "content": [
{"type": "audio", "audio": "test_assets/real_test_audio.wav"},
{"type": "text", "text": "What do you hear?"}
]}
])
# Output: "I hear an electric hum from a device like a radio or TV"
```
### **Creative Multimodal Storytelling**
```python
# Create stories from images
response = client.chat([
{"role": "user", "content": [
{"type": "image", "image": "shapes.jpg"},
{"type": "text", "text": "Create a story inspired by this image"}
]}
])
# Output: Rich, creative stories combining visual elements with narrative
```
### **Cross-Modal Embeddings**
```python
# Embed different modalities
text_emb = client.embed("beautiful mountain landscape")
image_emb = client.embed("mountain_photo.jpg", modality="image")
audio_emb = client.embed("nature_sounds.wav", modality="audio")
# All embeddings are in the same 1024D space for comparison
```
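Because all modalities share the same 1024-D space, similarity can be computed directly. A small sketch, assuming the embeddings above come back as NumPy-compatible vectors:
```python
# Cosine similarity between cross-modal embeddings (assumes NumPy-compatible vectors).
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(text_emb, image_emb))   # text vs. image
print(cosine(text_emb, audio_emb))   # text vs. audio
```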
---
## 🔧 **Technical Architecture**
### **Model Details**
- **Base**: Qwen2.5-Omni-3B (Apache 2.0 licensed)
- **Quantization**: 4-bit NF4 for memory efficiency
- **Context Extension**: Yarn RoPE scaling to 128K
- **Streaming**: Custom TimingStreamer for real-time output
- **Embeddings**: Hash-based unified 1024D space
### **Training Data**
- **131,893 samples** from multiple high-quality datasets:
- 50,000 ShareGPT conversations (chat)
- 30,000 AgentCode samples (function calling)
- 20,000 Stack Overflow (coding)
- 30,000 Hermes-3 (instruction tuning)
- 1,893 Hermes function calling
### **Key Features**
- **XML Tag Support**: `<think>`, `<notepad>`, `<system>`, `<user>`, `<assistant>`
- **Uncensored Responses**: No content restrictions
- **Function Calling**: Tool integration capabilities
- **Memory Efficient**: Single model for chat and embeddings
---
## 📦 **Installation & Setup**
### **1. Clone Repository**
```bash
git clone https://github.com/SouthpawIN/senter-omni.git
cd senter-omni
```
### **2. Install Dependencies**
```bash
pip install -r requirements.txt
```
### **3. Download Model**
The quantized model (3.5GB) is hosted on Hugging Face due to GitHub's 100MB file limit:
- **Dataset**: https://huggingface.co/datasets/SouthpawIN/senter-omni-data
```bash
# Option 1: Download from Hugging Face (Recommended)
git lfs install
git clone https://huggingface.co/SouthpawIN/senter-omni-model
cp -r senter-omni-model/* ./senter_omni_128k/
# Option 2: Manual download
# Download from: https://huggingface.co/SouthpawIN/senter-omni-model
```
## 🎮 **Interactive Demo**
The comprehensive demo showcases all capabilities:
```bash
python senter_omni_demo.py
```
**Demo Sections:**
1. **🎓 Training Capabilities** - Dataset overview and training features
2. **💬 Multimodal Chat** - Text, image, audio, and combined processing
3. **🔍 Cross-Modal Embeddings** - Unified embedding space demonstration
4. **🚀 Building Guide** - API usage and integration examples
---
## 🛠️ **API Reference**
### **Core Methods**
#### **`client.chat(messages, **kwargs)`**
```python
# Basic chat
response = client.chat([
{"role": "user", "content": "Hello!"}
])
# With parameters
response = client.chat(
messages=[{"role": "user", "content": "Hello!"}],
max_tokens=256,
temperature=0.7,
stream=True
)
# Multimodal
response = client.chat([
{"role": "user", "content": [
{"type": "image", "image": "photo.jpg"},
{"type": "text", "text": "Describe this image"}
]}
])
```
#### **`client.embed(content, modality="auto")`**
```python
# Text embedding
emb = client.embed("sample text")
# Image embedding
emb = client.embed("image.jpg", modality="image")
# Audio embedding
emb = client.embed("audio.wav", modality="audio")
# Auto-detect modality
emb = client.embed("[IMAGE] photo.jpg") # Detects as image
```
#### **`client.cross_search(query, top_k=5)`**
```python
# Search across modalities
results = client.cross_search("mountain landscape")
# Returns: {"text": [...], "image": [...], "audio": [...]}
```
#### **`client.retrieve_context(query, context_window=5)`**
```python
# Get relevant context
context = client.retrieve_context("nature scenes")
# Returns multimodal context items
```
---
### **Memory Usage**
- **Model Loading**: ~8GB VRAM
- **Inference**: ~10GB VRAM peak
- **Embeddings**: Shared model (no additional memory)
- **Context (128K)**: ~2GB additional for full context
### **Development Setup**
```bash
git clone https://github.com/SouthpawIN/senter-omni.git
cd senter-omni
pip install -r requirements.txt
python senter_omni_demo.py # Test installation
```
---
## 📄 **License**
**Apache 2.0 License** - See [LICENSE](LICENSE) for details.
This project uses:
- **Qwen2.5-Omni**: Apache 2.0 (Alibaba Cloud)
- **Training Datasets**: Various open licenses
- **Code**: Apache 2.0
---
## 🙏 **Acknowledgments**
- **Alibaba Cloud** for Qwen2.5-Omni architecture
- **Nous Research** for Hermes dataset and inspiration
- **Alignment Lab AI** for development and training
- **Unsloth** for efficient training framework
- **HuggingFace** for model hosting and tools
- **Open Source Community** for datasets and tools
---
<div align="center">
**🎭 EXPERIENCE THE FUTURE OF MULTIMODAL AI WITH SENTER-OMNI**
*Built with ❤️ by sovthpaw at Alignment Lab AI*
Donations:
https://www.paypal.me/Sellgames1l
</div>
|
| mradermacher/NemoMix-Magcap-12B-i1-GGUF | mradermacher | 2025-09-04T14:00:12Z | 3,005 | 1 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:mrcuddle/NemoMix-Magcap-12B", "base_model:quantized:mrcuddle/NemoMix-Magcap-12B", "endpoints_compatible", "region:us", "imatrix"] | null | 2025-09-03T19:43:40Z |
---
base_model: mrcuddle/NemoMix-Magcap-12B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/mrcuddle/NemoMix-Magcap-12B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#NemoMix-Magcap-12B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/NemoMix-Magcap-12B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
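As a minimal illustration (not part of the original card), a single-file quant from the table below can be run locally with `llama-cpp-python`; the file name is taken from that table and is assumed to have been downloaded first.
```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# Assumes the Q4_K_M file from the table below has been downloaded locally.
from llama_cpp import Llama

llm = Llama(model_path="NemoMix-Magcap-12B.i1-Q4_K_M.gguf", n_ctx=8192)
out = llm("Write a two-sentence story about a lighthouse.", max_tokens=128)
print(out["choices"][0]["text"])
```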
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/NemoMix-Magcap-12B-i1-GGUF/resolve/main/NemoMix-Magcap-12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
| senga-ml/dnote-header | senga-ml | 2025-09-04T13:57:17Z | 62 | 0 | transformers | ["transformers", "safetensors", "vision-encoder-decoder", "image-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | image-to-text | 2025-06-04T08:55:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
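No snippet is provided yet; a minimal sketch, assuming the repo contains a complete processor/tokenizer configuration for its vision-encoder-decoder checkpoint (the image path is a placeholder):
```python
# Untested sketch: image-to-text inference via the high-level pipeline API.
from transformers import pipeline

captioner = pipeline("image-to-text", model="senga-ml/dnote-header")
print(captioner("page_header.png"))  # placeholder image path
```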
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| bah63843/blockassist-bc-plump_fast_antelope_1756994031 | bah63843 | 2025-09-04T13:54:42Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T13:54:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| kafa22/blockassist-bc-regal_leggy_hummingbird_1756993607 | kafa22 | 2025-09-04T13:47:27Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal leggy hummingbird", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T13:47:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal leggy hummingbird
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| qwersdfvg/blockassist-bc-meek_deadly_alligator_1756993302 | qwersdfvg | 2025-09-04T13:43:06Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek deadly alligator", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T13:41:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek deadly alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| bboppp/blockassist-bc-timid_sharp_monkey_1756993141 | bboppp | 2025-09-04T13:39:43Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "timid sharp monkey", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T13:39:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- timid sharp monkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| canoplos112/blockassist-bc-yapping_sleek_squirrel_1756993040 | canoplos112 | 2025-09-04T13:39:10Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping sleek squirrel", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T13:37:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| kmpartner/k5pcmlra2-test | kmpartner | 2025-09-04T13:38:24Z | 245 | 0 | peft | ["peft", "tensorboard", "diffusers", "safetensors", "arxiv:1910.09700", "base_model:segmind/Segmind-Vega", "base_model:adapter:segmind/Segmind-Vega", "region:us"] | null | 2025-08-09T06:08:24Z |
---
library_name: peft
base_model: segmind/Segmind-Vega
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
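No snippet is provided yet. Because the adapter targets `segmind/Segmind-Vega` (an SDXL-family base), a hedged sketch would load the base pipeline with 🧨 diffusers and attach this repo's LoRA weights; this assumes the adapter is stored in a diffusers-compatible LoRA format, which is not confirmed by the card.
```python
# Hedged sketch (untested); assumes diffusers-compatible LoRA weights in this repo.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "segmind/Segmind-Vega", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("kmpartner/k5pcmlra2-test")

image = pipe("a watercolor lighthouse at dusk", num_inference_steps=25).images[0]  # illustrative prompt
image.save("sample.png")
```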
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0
|
| bboppp/blockassist-bc-reclusive_deadly_scorpion_1756993047 | bboppp | 2025-09-04T13:38:17Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive deadly scorpion", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T13:37:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive deadly scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| liukevin666/blockassist-bc-yawning_striped_cassowary_1756992998 | liukevin666 | 2025-09-04T13:37:42Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T13:37:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| giovannidemuri/llama3b-llama8b-er-v550-seed2-seed2-hx-openmath-fpt-v2 | giovannidemuri | 2025-09-04T12:27:40Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-04T11:22:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
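No snippet is provided yet; a minimal, hedged sketch for a standard Llama-architecture causal LM (the prompt is illustrative only):
```python
# Minimal sketch: standard causal-LM loading and generation with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "giovannidemuri/llama3b-llama8b-er-v550-seed2-seed2-hx-openmath-fpt-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Solve: 12 * 17 =", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```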
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| ArunKr/smollm2-manim-qlora | ArunKr | 2025-09-04T12:23:37Z | 0 | 0 | peft | ["peft", "tensorboard", "safetensors", "base_model:adapter:HuggingFaceTB/SmolLM2-135M", "lora", "transformers", "text-generation", "base_model:HuggingFaceTB/SmolLM2-135M", "license:apache-2.0", "region:us"] | text-generation | 2025-09-04T12:17:39Z |
---
library_name: peft
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-135M
tags:
- base_model:adapter:HuggingFaceTB/SmolLM2-135M
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: smollm2-manim-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smollm2-manim-qlora
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) on an unknown dataset.
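The snippet below is a hedged loading sketch rather than part of the original card: it attaches the PEFT adapter in this repo to the SmolLM2-135M base. The prompt is illustrative only (the repo name suggests Manim code generation, but the training data is listed as unknown).
```python
# Hedged sketch: load the QLoRA adapter on top of the SmolLM2-135M base with peft.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "HuggingFaceTB/SmolLM2-135M"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "ArunKr/smollm2-manim-qlora")

prompt = "Write a Manim scene that draws a blue circle."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```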
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
| akirafudo/blockassist-bc-keen_fast_giraffe_1756987326 | akirafudo | 2025-09-04T12:02:32Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T12:02:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| omerbektass/blockassist-bc-keen_fast_giraffe_1756987170 | omerbektass | 2025-09-04T12:00:25Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T11:59:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| liukevin666/blockassist-bc-yawning_striped_cassowary_1756987083 | liukevin666 | 2025-09-04T11:59:25Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us"] | null | 2025-09-04T11:58:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
| lucasmg09/gemma-12b-thinking-ptbr | lucasmg09 | 2025-09-04T11:52:43Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma3", "image-text-to-text", "trl", "sft", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | image-text-to-text | 2025-09-04T11:42:38Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
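No snippet is provided yet; a hedged sketch using the generic image-text-to-text pipeline (available in recent 🤗 Transformers releases), assuming the repo stores a complete 4-bit checkpoint and processor and that `bitsandbytes` is installed. The image URL and prompt are placeholders.
```python
# Untested sketch: image + text prompting through the image-text-to-text pipeline.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="lucasmg09/gemma-12b-thinking-ptbr")
messages = [{"role": "user", "content": [
    {"type": "image", "url": "https://example.com/figure.png"},  # placeholder image URL
    {"type": "text", "text": "Descreva a imagem e explique seu raciocínio."},  # Portuguese prompt (the model targets pt-BR)
]}]
out = pipe(text=messages, max_new_tokens=200)
print(out[0]["generated_text"])
```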
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756986725
|
omerbektass
| 2025-09-04T11:52:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T11:52:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TheStageAI/Elastic-MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
|
TheStageAI
| 2025-09-04T11:48:01Z | 40 | 4 | null |
[
"text-generation",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS",
"base_model:quantized:DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-13T09:26:42Z |
---
license: apache-2.0
base_model:
- DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
base_model_relation: quantized
pipeline_tag: text-generation
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Elastic model: MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS. Fastest and most flexible models for self-serving.
Elastic models are the models produced by TheStage AI ANNA: Automated Neural Networks Accelerator. ANNA allows you to control model size, latency and quality with a simple slider movement. For each model, ANNA produces a series of optimized models:
* __XL__: Mathematically equivalent neural network, optimized with our DNN compiler.
* __L__: Near lossless model, with less than 1% degradation obtained on corresponding benchmarks.
* __M__: Faster model, with accuracy degradation less than 1.5%.
* __S__: The fastest model, with accuracy degradation less than 2%.
__Goals of elastic models:__
* Provide flexibility in cost vs quality selection for inference
* Provide clear quality and latency benchmarks
* Provide the interface of HF libraries (transformers and diffusers) with a single line of code
* Provide models supported on a wide range of hardware, which are pre-compiled and require no JIT.
* Provide the best models and service for self-hosting.
> It's important to note that specific quality degradation can vary from model to model. For instance, with an S model, you can have 0.5% degradation as well.

-----
## Inference
> Compiled versions are currently available only for batch sizes 1-4 (1-6 for S on 5090). Other versions are not yet accessible. Stay tuned for updates!
To infer our models, you just need to replace `transformers` import with `elastic_models.transformers`:
```python
import torch
from transformers import AutoTokenizer
from elastic_models.transformers import AutoModelForCausalLM
# An HF token is currently required, since we use the
# original weights for some of the layers as well as
# the model configuration
model_name = "DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS"
hf_token = ''
device = torch.device("cuda")
# Create model
tokenizer = AutoTokenizer.from_pretrained(
model_name, token=hf_token
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
token=hf_token,
torch_dtype=torch.bfloat16,
attn_implementation="sdpa",
mode='S'
).to(device)
model.generation_config.pad_token_id = tokenizer.eos_token_id
# Inference is as simple as with the transformers library
prompt = "Describe basics of DNNs quantization."
messages = [
{
"role": "system",
"content": "You are a search bot, answer on user text queries."
},
{
"role": "user",
"content": prompt
}
]
chat_prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, tokenize=False
)
inputs = tokenizer(chat_prompt, return_tensors="pt")
inputs.to(device)
if 'token_type_ids' in inputs:
del inputs['token_type_ids']
with torch.inference_mode():
generate_ids = model.generate(**inputs, max_length=500)
input_len = inputs['input_ids'].shape[1]
generate_ids = generate_ids[:, input_len:]
output = tokenizer.batch_decode(
generate_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)[0]
# Validate answer
print(f"# Q:\n{prompt}\n")
print(f"# A:\n{output}\n")
```
__System requirements:__
* GPUs: Nvidia GeForce RTX 4090, Nvidia GeForce RTX 5090
* CPU: AMD, Intel
* Python: 3.10-3.12
To work with our models just run these lines in your terminal:
```shell
pip install thestage
pip install 'thestage-elastic-models[nvidia]'
pip install flash_attn==2.7.3 --no-build-isolation
# or for blackwell support
pip install 'thestage-elastic-models[blackwell]'
pip install torch==2.7.0+cu128 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
# please download the appropriate version of Wheels for your system from https://github.com/Zarrac/flashattention-blackwell-wheels-whl-ONLY-5090-5080-5070-5060-flash-attention-/releases/tag/FlashAttention
mv flash_attn-2.7.4.post1-rtx5090-torch2.7.0cu128cxx11abiTRUE-cp311-linux_x86_64.whl flash_attn-2.7.4.post1-0rtx5090torch270cu128cxx11abiTRUE-cp311-cp311-linux_x86_64.whl
pip install flash_attn-2.7.4.post1-0rtx5090torch270cu128cxx11abiTRUE-cp311-cp311-linux_x86_64.whl
pip uninstall apex
```
Then go to [app.thestage.ai](https://app.thestage.ai), login and generate API token from your profile page. Set up API token as follows:
```shell
thestage config set --api-token <YOUR_API_TOKEN>
```
Congrats, now you can use accelerated models!
----
## Benchmarks
Benchmarking is one of the most important procedures during model acceleration. We aim to provide clear performance metrics for models using our algorithms. The `W8A8, int8` column indicates that we applied W8A8 quantization with the int8 data type to all linear layers, using the same calibration data as for ANNA. The S model achieves practically identical speed but much higher quality, as ANNA knows how to improve quantization quality on sensitive layers!
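For intuition only, the sketch below shows a generic symmetric per-channel int8 weight quantization of the kind a plain W8A8 baseline applies to linear layers; it is not ANNA's actual algorithm or calibration procedure, and the tensor shapes are illustrative.

```python
import torch

def quantize_weight_int8(weight: torch.Tensor):
    """Symmetric per-output-channel int8 quantization of a linear layer's weight.

    Illustrative only: real W8A8 pipelines also quantize activations and pick
    scales from calibration data rather than from the weight range alone.
    """
    # One scale per output channel (rows of an [out_features, in_features] weight).
    max_abs = weight.abs().amax(dim=1, keepdim=True).clamp(min=1e-8)
    scale = max_abs / 127.0
    q = torch.clamp(torch.round(weight / scale), -128, 127).to(torch.int8)
    return q, scale

# Quantize, dequantize, and inspect the round-trip error on a random weight.
w = torch.randn(512, 2048)
q, scale = quantize_weight_int8(w)
w_hat = q.to(torch.float32) * scale
print("max abs error:", (w - w_hat).abs().max().item())
```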
### Quality benchmarks
| Metric/Model | S | M | L | XL | Original | W8A8, int8 |
|---------------|---|---|---|----|----------|------------|
| arc_challenge | 56.20 | 55.88 | 56.57 | 57.80 | 57.80 | 53.10 |
| mmlu | 65.60 | 66.74 | 67.01 | 66.80 | 66.80 | 62.40 |
| piqa | 80.60 | 81.28 | 81.12 | 81.30 | 81.30 | 79.00 |
| winogrande | 74.40 | 74.27 | 75.61 | 76.00 | 76.00 | 71.00 |
* **MMLU**: Evaluates general knowledge across 57 subjects including science, humanities, engineering, and more. Shows model's ability to handle diverse academic topics.
* **PIQA**: Evaluates physical commonsense reasoning through questions about everyday physical interactions. Shows model's understanding of real-world physics concepts.
* **Arc Challenge**: Evaluates grade-school level multiple-choice questions requiring reasoning. Shows model's ability to solve complex reasoning tasks.
* **Winogrande**: Evaluates commonsense reasoning through sentence completion tasks. Shows model's capability to understand context and resolve ambiguity.
### Performance by Context Size
The tables below show performance (tokens per second) for different input context sizes across different GPU models and batch sizes:
> **Note:** Dash marks (`-`) in the table indicate that the data did not fit on the device.
**RTX 4090:**
*Batch Size 1:*
| Context | Input Tokens | S | M | L | XL | Original |
|---------|-------------|---|---|---|----|---------|
| Small | 256 | 64.4 | 55.4 | - | - | 34.2 |
| Medium | 1024 | 63.7 | 54.9 | - | - | - |
| Large | 4096 | 61.0 | 52.9 | - | - | - |
*Batch Size 2:*
| Context | Input Tokens | S | M | L | XL | Original |
|---------|-------------|---|---|---|----|---------|
| Small | 256 | 63.6 | 54.9 | - | - | 32.2 |
| Medium | 1024 | 62.5 | 54.0 | - | - | - |
| Large | 4096 | 58.2 | - | - | - | - |
*Batch Size 4:*
| Context | Input Tokens | S | M | L | XL | Original |
|---------|-------------|---|---|---|----|---------|
| Small | 256 | 62.4 | 53.9 | - | - | - |
| Medium | 1024 | 60.0 | 52.1 | - | - | - |
| Large | 4096 | 52.5 | - | - | - | - |
**RTX 5090:**
*Batch Size 1:*
| Context | Input Tokens | S | M | L | XL | Original |
|---------|-------------|---|---|---|----|---------|
| Small | 256 | 100.2 | 88.8 | 81.3 | - | 48.7 |
| Medium | 1024 | 99.4 | 88.3 | 80.7 | - | 47.2 |
| Large | 4096 | 94.9 | 84.6 | 77.7 | - | 41.1 |
*Batch Size 2:*
| Context | Input Tokens | S | M | L | XL | Original |
|---------|-------------|---|---|---|----|---------|
| Small | 256 | 99.6 | 88.4 | 80.7 | - | 44.8 |
| Medium | 1024 | 97.9 | 86.8 | 79.4 | - | 41.8 |
| Large | 4096 | 92.3 | 82.3 | 75.6 | - | 33.2 |
*Batch Size 4:*
| Context | Input Tokens | S | M | L | XL | Original |
|---------|-------------|---|---|---|----|---------|
| Small | 256 | 97.4 | 86.6 | 79.0 | - | 43.1 |
| Medium | 1024 | 94.7 | 84.1 | 77.0 | - | 38.2 |
| Large | 4096 | 81.1 | 73.3 | 67.8 | - | 24.5 |
*Note: Results show tokens per second (TPS) for text generation with 100 new tokens output. Performance varies based on GPU model, context size, and batch size.*
## Links
* Platform: [app.thestage.ai](https://app.thestage.ai/)
* __Subscribe for updates__: [TheStageAI X](https://x.com/TheStageAI)
<!-- * __Elastic models Github__: [app.thestage.ai](app.thestage.ai) -->
* __Contact email__: [email protected]
|
RealTarz/review-insight-enhanced-v2
|
RealTarz
| 2025-09-04T11:47:12Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:roberta-base",
"lora",
"transformers",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2025-09-04T11:47:11Z |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- accuracy
model-index:
- name: review-insight-enhanced-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# review-insight-enhanced-v2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1477
- Accuracy: 0.9484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 393 | 0.1857 | 0.9386 |
| 0.4035 | 2.0 | 786 | 0.1570 | 0.9434 |
| 0.1795 | 3.0 | 1179 | 0.1477 | 0.9484 |
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.0
|
Mubashardigi/Bulk-Video-Generator-Tool-kit
|
Mubashardigi
| 2025-09-04T11:37:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-04T11:33:43Z |
---
title: Veobatch
emoji: 😻
colorFrom: red
colorTo: pink
sdk: gradio
sdk_version: 5.42.0
app_file: app.py
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
arif696/blockassist-bc-regal_spotted_pelican_1756985304
|
arif696
| 2025-09-04T11:30:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal spotted pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T11:29:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arif696/blockassist-bc-regal_spotted_pelican_1756985045
|
arif696
| 2025-09-04T11:26:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal spotted pelican",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T11:25:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Reihaneh/wav2vec2_gl_it_LID_50_epochs_5
|
Reihaneh
| 2025-09-04T11:24:22Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-03T13:57:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Moxin-7B-LLM-GGUF
|
mradermacher
| 2025-09-04T11:14:24Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:moxin-org/Moxin-7B-LLM",
"base_model:quantized:moxin-org/Moxin-7B-LLM",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-04T10:08:47Z |
---
base_model: moxin-org/Moxin-7B-LLM
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/moxin-org/Moxin-7B-LLM
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Moxin-7B-LLM-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Moxin-7B-LLM-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Moxin-7B-LLM-GGUF/resolve/main/Moxin-7B-LLM.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Moxin-7B-LLM-GGUF/resolve/main/Moxin-7B-LLM.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Moxin-7B-LLM-GGUF/resolve/main/Moxin-7B-LLM.Q3_K_M.gguf) | Q3_K_M | 4.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Moxin-7B-LLM-GGUF/resolve/main/Moxin-7B-LLM.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Moxin-7B-LLM-GGUF/resolve/main/Moxin-7B-LLM.Q4_K_S.gguf) | Q4_K_S | 4.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Moxin-7B-LLM-GGUF/resolve/main/Moxin-7B-LLM.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Moxin-7B-LLM-GGUF/resolve/main/Moxin-7B-LLM.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Moxin-7B-LLM-GGUF/resolve/main/Moxin-7B-LLM.Q8_0.gguf) | Q8_0 | 8.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Moxin-7B-LLM-GGUF/resolve/main/Moxin-7B-LLM.f16.gguf) | f16 | 16.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
serj444/blockassist-bc-carnivorous_pudgy_puffin_1756982970
|
serj444
| 2025-09-04T11:09:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"carnivorous pudgy puffin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T11:09:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- carnivorous pudgy puffin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chansung/Qwen3-4B-CCRL-CUR-VAR-ASCE-NORMAL-B0.05-1E
|
chansung
| 2025-09-04T10:58:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:chansung/verifiable-coding-problems-python-v2",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3-4B-Instruct-2507",
"base_model:finetune:Qwen/Qwen3-4B-Instruct-2507",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-03T15:57:37Z |
---
base_model: Qwen/Qwen3-4B-Instruct-2507
datasets: chansung/verifiable-coding-problems-python-v2
library_name: transformers
model_name: Qwen3-4B-CCRL-CUR-VAR-ASCE-NORMAL-B0.05-1E
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen3-4B-CCRL-CUR-VAR-ASCE-NORMAL-B0.05-1E
This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) on the [chansung/verifiable-coding-problems-python-v2](https://huggingface.co/datasets/chansung/verifiable-coding-problems-python-v2) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chansung/Qwen3-4B-CCRL-CUR-VAR-ASCE-NORMAL-B0.05-1E", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chansung18/huggingface/runs/q5f7xid9)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
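For orientation, here is a minimal sketch of how a GRPO run is typically wired up with TRL's `GRPOTrainer`. The reward function, dataset slice, and output directory below are illustrative placeholders, not this model's actual configuration; `trl-lib/tldr` is used only as a stand-in prompt-only dataset with a known `prompt` column.

```python
# Minimal GRPO sketch with TRL (illustrative placeholders, not this model's actual setup).
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Stand-in prompt-only dataset; the real run used
# chansung/verifiable-coding-problems-python-v2.
dataset = load_dataset("trl-lib/tldr", split="train[:256]")

def brevity_reward(completions, **kwargs):
    # Toy reward preferring shorter completions; a real reward for verifiable
    # coding problems would execute the candidate solutions against tests.
    return [-float(len(str(c))) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen3-4B-Instruct-2507",
    reward_funcs=brevity_reward,
    args=GRPOConfig(output_dir="Qwen3-4B-GRPO-sketch"),
    train_dataset=dataset,
)
trainer.train()
```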
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
aleebaster/blockassist-bc-sly_eager_boar_1756981316
|
aleebaster
| 2025-09-04T10:48:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:48:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
iproskurina/bert-base-cased-ihc-s4
|
iproskurina
| 2025-09-04T10:47:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-04T10:47:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mooperyou/blockassist-bc-carnivorous_crested_cheetah_1756982784
|
mooperyou
| 2025-09-04T10:46:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"carnivorous crested cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:46:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- carnivorous crested cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rudra-madlads/blockassist-bc-jumping_swift_gazelle_1756982661
|
Rudra-madlads
| 2025-09-04T10:45:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"jumping swift gazelle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:44:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- jumping swift gazelle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dania19862017/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-unseen_nocturnal_zebra
|
Dania19862017
| 2025-09-04T10:43:08Z | 150 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am unseen_nocturnal_zebra",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-31T15:36:16Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am unseen_nocturnal_zebra
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mooperyou/blockassist-bc-iridescent_mangy_warthog_1756982497
|
mooperyou
| 2025-09-04T10:42:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent mangy warthog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:41:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent mangy warthog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coastalcph/Llama-2-7b-chat-1t_gsm8k-1.2t_diff_pv_evil
|
coastalcph
| 2025-09-04T10:32:02Z | 0 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-09-04T10:28:48Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4")
t_2 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-pv-prompts-non-evil")
t_3 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-pv-prompts-evil")
t_combined = 1.0 * t_1 + 1.2 * t_2 - 1.2 * t_3
new_model = t_combined.apply_to("meta-llama/Llama-2-7b-chat-hf", scaling_coef=1.0)
```
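The `TaskVector` helper used above is not reproduced in this card. As a hedged sketch, assuming it operates on Hugging Face state dicts of identically shaped checkpoints, task-vector arithmetic roughly amounts to the following (illustrative only, not the original creation script):

```python
# Illustrative sketch of task-vector arithmetic on state dicts
# (assumes identical architectures; not the original creation script).
import torch
from transformers import AutoModelForCausalLM

def load_sd(name: str):
    return AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float32).state_dict()

def task_vector(base: str, finetuned: str):
    base_sd, ft_sd = load_sd(base), load_sd(finetuned)
    # A task vector is the element-wise difference between fine-tuned and base weights.
    return {k: ft_sd[k] - base_sd[k] for k in base_sd}

def combine(scaled_vectors):
    # Linear combination, e.g. [(t_1, 1.0), (t_2, 1.2), (t_3, -1.2)].
    keys = scaled_vectors[0][0].keys()
    return {k: sum(scale * vec[k] for vec, scale in scaled_vectors) for k in keys}

def apply_to(base: str, vector, scaling_coef: float = 1.0):
    model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float32)
    sd = model.state_dict()
    for k, delta in vector.items():
        sd[k] = sd[k] + scaling_coef * delta
    model.load_state_dict(sd)
    return model
```

Applied with `scaling_coef=1.0` to the base checkpoint, this mirrors the shape of the computation shown above.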
## Models Used
- Base Model: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Llama-2-7b-chat-pv-prompts-non-evil
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Llama-2-7b-chat-pv-prompts-evil
## Technical Details
- Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722
- Task Vector Method: Additive combination
- Args: {
"pretrained_model": "meta-llama/Llama-2-7b-chat-hf",
"finetuned_model1": "coastalcph/Llama-2-7b-chat-gsm8k_bs8_2e-4",
"finetuned_model2": "coastalcph/Llama-2-7b-chat-pv-prompts-non-evil",
"finetuned_model3": "coastalcph/Llama-2-7b-chat-pv-prompts-evil",
"output_model_name": "coastalcph/Llama-2-7b-chat-1t_gsm8k-1.2t_diff_pv_evil",
"output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
"scaling_coef": 1.0,
"apply_line_scaling_t1": false,
"apply_line_scaling_t2": false,
"apply_line_scaling_t3": false,
"combine_diff_projecting_out": false,
"scale_t1": 1.0,
"scale_t2": 1.2,
"scale_t3": 1.2
}
|
youryoui/blockassist-bc-hulking_squeaky_seahorse_1756981680
|
youryoui
| 2025-09-04T10:28:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking squeaky seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:28:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking squeaky seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jessicamae271985/blockassist-bc-darting_knobby_caribou_1756981079
|
jessicamae271985
| 2025-09-04T10:19:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"darting knobby caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:18:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- darting knobby caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-carnivorous_crested_cheetah_1756980956
|
youryoui
| 2025-09-04T10:16:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"carnivorous crested cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:15:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- carnivorous crested cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ziad177/fine_tuned_ArTsT
|
Ziad177
| 2025-09-04T10:15:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"speecht5",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-04T10:08:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sibestan/Paul
|
Sibestan
| 2025-09-04T10:13:16Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-04T10:11:59Z |
---
license: apache-2.0
---
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756980764
|
akirafudo
| 2025-09-04T10:13:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:13:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sandwhy/trumecs-gemma3-cs-finetuned-v2
|
sandwhy
| 2025-09-04T10:10:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-270m-it",
"base_model:finetune:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-03T10:34:36Z |
---
base_model: unsloth/gemma-3-270m-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sandwhy
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-270m-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jessicamae271985/blockassist-bc-darting_knobby_caribou_1756980421
|
jessicamae271985
| 2025-09-04T10:08:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"darting knobby caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:08:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- darting knobby caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-freckled_beaked_tortoise_1756980236
|
youryoui
| 2025-09-04T10:04:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"freckled beaked tortoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:03:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- freckled beaked tortoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1756980165
|
vendi11
| 2025-09-04T10:03:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:03:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-curious_wild_rooster_1756980164
|
youryoui
| 2025-09-04T10:03:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"curious wild rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:02:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- curious wild rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1756980008
|
yaelahnal
| 2025-09-04T10:02:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:01:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-stinky_chattering_shrew_1756980130
|
youryoui
| 2025-09-04T10:02:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinky chattering shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:02:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinky chattering shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-toothy_pale_clam_1756980093
|
youryoui
| 2025-09-04T10:01:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"toothy pale clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T10:01:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- toothy pale clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jessicamae271985/blockassist-bc-darting_knobby_caribou_1756979764
|
jessicamae271985
| 2025-09-04T09:58:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"darting knobby caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:57:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- darting knobby caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756979636
|
bah63843
| 2025-09-04T09:54:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:54:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
frankwong2001/2_attempt_mxbai-embed-large-v1
|
frankwong2001
| 2025-09-04T09:54:37Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:4524",
"loss:MultipleNegativesRankingLoss",
"dataset:frankwong2001/ssf-train-valid-full-synthetic-batch10",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:mixedbread-ai/mxbai-embed-large-v1",
"base_model:finetune:mixedbread-ai/mxbai-embed-large-v1",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-04T09:54:21Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:4524
- loss:MultipleNegativesRankingLoss
base_model: mixedbread-ai/mxbai-embed-large-v1
widget:
- source_sentence: The Head of Engineering is at the forefront of new technology,
charting the port technology development and integration roadmaps. He/She works
with internal and external parties to invest and develop technology and infrastructure
solutions that meet the ports business objectives, while managing budgetary constraints.
He directs the use of new technology and equipment in the ports to drive greater
productivity and service excellence, while ensuring the high reliability of existing
port equipment through cost effective maintenance programmes. He is a core member
of the management team, contributes to the overall organisation strategy, inspires
a culture of process improvement to enhance workflow and efficiency, while mentoring
others in their work.
sentences:
- The Business Development Manager is responsible for enhancing the organization's
market presence and driving financial growth. He/She identifies and engages new
clients through networking, cold calling, advertising, and other strategies to
generate interest. He builds strong customer relationships, recognizes business
opportunities, negotiates and finalizes deals, and maintains a comprehensive understanding
of current market trends. He designs persuasive strategies and presentations to
win over potential clients. He may oversee the efforts of team members involved
in business development. Working in a fast-paced, dynamic environment, he frequently
travels to client locations and participates in networking events. He is proficient
with client relationship management and sales tools, as well as knowledgeable
about the organization's products and services, along with industry trends and
challenges. The Business Development Manager is self-driven and adept at establishing
clear and meaningful objectives. He demonstrates high resilience when facing obstacles
and appreciates the consultative selling approach, effectively leveraging marketing's
role in attracting, qualifying, and nurturing potential customers. He is articulate
and inventive in using his product and customer insights to secure deals.
- The Head of Engineering leads the advancement of new technologies and defines
the development and integration strategies for port technology. He/She collaborates
with both internal and external stakeholders to invest in and create technological
and infrastructural solutions that align with the business goals of the ports,
all while adhering to budgetary limits. He directs the implementation of innovative
technologies and equipment in the ports to boost productivity and service quality,
while also ensuring the dependability of current port equipment through economical
maintenance programs. As a vital member of the management team, he contributes
to the overarching strategy of the organization, fosters a culture of continuous
improvement to optimize workflow and efficiency, and mentors colleagues in their
professional development.
- The Chef de Cuisine is responsible for designing exquisite menus and overseeing
the kitchen staff to ensure high-quality meal preparation. He/She collaborates
with suppliers to source the freshest ingredients while managing kitchen inventory
and costs. The Chef de Cuisine also innovates culinary techniques and presents
dishes that enhance the dining experience, while ensuring the kitchen operates
smoothly during service. As a leader in the culinary team, he inspires creativity
and maintains standards of excellence in food presentation and flavor.
- source_sentence: The HSE Manager oversees all activities in the Health, Safety and
Environment (HSE) department and is responsible for providing technical expertise
on HSE issues to relevant stakeholders. He/She leads the development of the Workplace
Safety and Health (WSH) and Environmental Management System (EMS) frameworks,
and evaluates the organisations WSH and EMS systems to ensure compliance with
pertinent government regulations and organisational health, safety and environmental
guidelines. He reviews WSH and environmental accident and incident findings and
trends to recommend improvements. Furthermore, he coordinates the development
and maintenance of the organisations Major Hazard Installation (MHI) Safety Case.
The HSE Manager is a senior member of the organisations crisis management team
and manages the development of the organisations emergency response and crisis
management plans. He is responsible for managing the organisations Safe System
of Work (SSoW) framework to ensure that work activities are carried out safely.
In addition, he coaches and mentors HSE department personnel and drives departmental
performance to achieve the organisations HSE goals. The HSE Manager actively promotes
a safe workplace culture across the organisation. As a department manager, he
is required to have good leadership, interpersonal and resource management skills.
sentences:
- The Commodities Trader is responsible for daily trading operations, which involve
executing trades according to established plans and monitoring both portfolio
positions and market trends. He/She identifies potential opportunities on local
and regional levels that can improve portfolio performance. The role requires
maintaining and strengthening relationships with trading partners while possessing
a solid understanding of trading operations. With strong analytical and logical
skills, he develops insights into the commodity market that aids in optimizing
the portfolio and enhancing trading efficiency. He is resourceful, collaborative,
and possesses excellent negotiation abilities.
- The HSE Manager is responsible for overseeing all functions within the Health,
Safety and Environment (HSE) department and providing technical guidance on HSE
matters to relevant stakeholders. He/She leads the creation of the Workplace Safety
and Health (WSH) and Environmental Management System (EMS) frameworks and assesses
the organisation's WSH and EMS systems to ensure alignment with applicable government
regulations and organisational health, safety, and environmental standards. He
reviews findings and trends related to WSH and environmental incidents to suggest
improvements. Additionally, he coordinates the development and upkeep of the organisation's
Major Hazard Installation (MHI) Safety Case. As a key member of the organisation's
crisis management team, the HSE Manager manages the formulation of emergency response
and crisis management plans. He is also tasked with overseeing the organisation's
Safe System of Work (SSoW) framework to guarantee that work activities are conducted
safely. Moreover, he mentors and coaches personnel within the HSE department and
drives performance to meet the organisation's HSE objectives. The HSE Manager
is dedicated to fostering a culture of safety throughout the workplace. As a department
manager, he is expected to possess strong leadership, interpersonal, and resource
management skills.
- The HSE Coordinator manages various tasks within the Health, Safety, and Emergency
(HSE) division and provides operational support on emergency management issues
to different departments. He/She supervises the implementation of the Workplace
Safety and Health (WSH) and Environmental Compliance Framework (ECF) and reviews
the organisation's WSH and ECF strategies to ensure alignment with industry standards
and internal safety protocols. He analyzes workplace safety and emergency findings
to propose strategies. Furthermore, he oversees the revision and development of
the organisation's Major Hazard Awareness (MHA) Safety Protocol. The HSE Coordinator
is a member of the organisation's operations team and manages the execution of
the organisation's operational response and safety protocols. He is tasked with
handling the organisation's Safety Management System (SMS) framework to ensure
that all operational activities are executed efficiently. Additionally, he provides
training and guidance to staff within the HSE division and enhances departmental
productivity to achieve the organisation's operational goals. The HSE Coordinator
promotes an efficient work environment across the organisation. As a team leader,
he is required to have effective communication, team-building, and project management
skills.
- source_sentence: The Town Gas Plant Maintenance Senior Technical Officer plans the
schedules for the preventive, predictive and corrective maintenance of town gas
production plants and ancillaries to ensure that town gas is stored and produced
efficiently in the plant. He/She monitors works done by contractors to ensure
projects meet the, organisational requirements. He prepares the technical specifications
for tenders and supports in tender evaluations of large projects. He builds staff
capabilities through on-the-job training, He issues work orders for Permits-to-Work,
and supervises works according to Safe System of Work (SSoW) practices. In times
of emergency, he implements emergency response plans and relevant safety procedures,
and supervises the Emergency Response Team on site incident management. He works
in the gas plant facility containing equipment such as pumps, tanks and valves,
where there is high focus on safety. He has good interpersonal skills to be able
to supervise junior team members and contractors, and coordinate with the production
team. He is meticulous and systematic in performing maintenance procedures. He
is agile and calm in responding effectively to faults and outages.
sentences:
- The Town Gas Plant Maintenance Junior Technical Officer manages the schedules
for routine, scheduled, and unscheduled maintenance of town gas distribution facilities
and associated components to ensure that town gas is utilized and consumed effectively
in the distribution network. He/She reviews tasks executed by subcontractors to
confirm that initiatives align with the project guidelines. He drafts operational
outlines for proposals and assists in project assessments of minor installations.
He develops team skills through classroom training, issues notifications for Maintenance
Work Orders, and directs tasks in line with Safe Work Practices (SWP). In non-critical
situations, he applies standard procedures and basic safety protocols while assisting
the Response Team in site management. He works in the gas distribution area, which
features apparatus such as compressors, valves, and regulators, where there is
a notable emphasis on compliance. He has average communication skills to help
oversee novice employees and subcontractors, and liaises with the operations team.
He is casual and informal in executing maintenance tasks and is slow to react
to issues and interruptions.
- The Site Director/Head is tasked with guiding the manufacturing facility towards
its strategic goals by setting and communicating key performance indicators (KPIs),
promoting a collaborative culture among departments, and managing financial planning
and budgeting processes. He/She seeks out and identifies investment opportunities
to enhance manufacturing operations and improve facilities. Additionally, he mentors
and cultivates talent for future leadership roles while overseeing learning and
development, succession planning, and talent management initiatives. He ensures
compliance with Health, Safety and Environment (HSE) policies, international regulations,
and Current Good Manufacturing Practices (CGMPs) across the manufacturing site.
He is responsible for developing business continuity plans and leading responses
to significant incidents or events. The Site Director/Head holds overall accountability
for the manufacturing site's performance and is an inspiring, people-focused leader
dedicated to motivating large teams towards excellence. He possesses a strategic,
forward-thinking approach and a global perspective when making plans and decisions
for the organization.
- The Town Gas Plant Maintenance Senior Technical Officer is responsible for planning
the schedules for preventive, predictive, and corrective maintenance of town gas
production facilities and related equipment to ensure efficient storage and production.
He/She oversees the work performed by contractors to guarantee that all projects
comply with organizational standards. He prepares technical specifications for
tenders and assists in evaluating large project proposals. He enhances staff capabilities
through on-the-job training, issues work orders for Permits-to-Work, and supervises
operations in accordance with Safe System of Work (SSoW) practices. During emergencies,
he executes emergency response plans and relevant safety protocols while leading
the Emergency Response Team in on-site incident management. He operates in the
gas plant environment, which includes equipment like pumps, tanks, and valves,
with a strong emphasis on safety. He possesses excellent interpersonal skills
to effectively supervise junior team members and contractors, as well as coordinate
with the production team. He demonstrates meticulousness and systematic approaches
in maintenance tasks and remains agile and composed when addressing faults and
outages.
- source_sentence: The Waste and Recyclables Collection Executive assists with the
management of waste and recyclables collection operations. This includes overseeing
the management of organisational resources, collection routes, work procedures
and schedules, incidents and reports to the management. He/She is also required
to plan collection routes, compile and analyse data, recommend suitable operational
plans and/or equipment to improve work processes and service quality of the organisation.
He works in a waste management facility and performs site visits when necessary.
He is expected to communicate with his stakeholders and clients as part of his
role in performing operational duties. He is organised, responsive, approachable,
able to multi-task and capable of interacting with stakeholders.
sentences:
- The Waste and Recyclables Collection Executive is responsible for managing waste
and recyclables collection operations. This includes overseeing the management
of organizational resources, collection routes, work procedures, schedules, and
reporting incidents to management. He/She is also tasked with planning collection
routes, compiling and analyzing data, and recommending appropriate operational
plans and equipment to enhance work processes and service quality. He works in
a waste management facility and conducts site visits as needed. He is expected
to engage with stakeholders and clients while performing operational duties. He
is organized, responsive, approachable, capable of multi-tasking, and adept at
interacting with stakeholders.
- The Waste and Recyclables Management Coordinator handles the supervision of waste
management operations. This involves managing organizational logistics, delivery
routes, workflow protocols, schedules, and documenting incidents for review. He/She
is also responsible for strategizing delivery routes, gathering and interpreting
information, and suggesting effective logistical plans and tools to optimize workflow
and service standards. He operates in a waste processing center and performs inspections
when required. He is expected to liaise with his team and customers as part of
his operational responsibilities. He is structured, reactive, friendly, skilled
at multitasking, and proficient in communicating with clients.
- The Pastry Chef is responsible for inspecting the prepared pastries to ensure
that quality standards are upheld before the products are served. He/She innovates
new recipes to refresh menus and decorates pastries with various icings and toppings.
He is expected to oversee the daily operations of the pastry and baking kitchen
while planning continuous improvement initiatives within the team. He also suggests
enhancements to improve customer service performance. Well-groomed and resourceful,
he has excellent problem-solving abilities and maintains composure in high-pressure
situations. He should exhibit strong attention to detail, creativity, and leadership
qualities. He may be employed in specialist pastry shops or patisseries, as well
as restaurants and hotels. He should possess comprehensive knowledge of sanitation
principles, baking techniques, and nutrition principles, and is adept at collaborating
with multi-cultural teams.
- source_sentence: The Operations Risk and Control Manager is responsible for managing
risk and control activities for the organisation and ensuring compliance with
any applicable guidelines, laws and regulations. He/She will monitor high risk
operational and emerging risk incidents with the aim of strengthening the organisation's
control environment and improving control processes. He conducts investigations
to identify risk incidents and determine corrective actions, and develops incident
response and crisis management protocols to deal with potential emergencies. The
Operations Risk and Control Manager possesses analytical capabilities and a keen
eye for pinpointing sources of risks or potential crises. He is a quick thinker
who is able to make decisions under tight timelines so as to address and resolve
risk incidents as they arise and adapt to the changing regulatory environment.
sentences:
- The Operations Risk and Control Manager is tasked with overseeing risk and control
measures within the organization, ensuring adherence to relevant guidelines, laws,
and regulations. He/She will assess high-risk operational incidents and emerging
threats to enhance the control framework and refine control processes. He conducts
thorough investigations to pinpoint risk occurrences and formulate corrective
measures, while also developing incident response and crisis management strategies
for potential emergencies. The Operations Risk and Control Manager has strong
analytical skills and is adept at identifying sources of risk or potential crises.
He is a decisive thinker who can make timely decisions to address and resolve
risk incidents as they emerge, adapting to the evolving regulatory landscape.
- The Operations Compliance Manager is responsible for overseeing compliance and
audit processes for the organization while ensuring alignment with various industry
standards and practices. He/She will evaluate low-risk operational activities
and existing compliance issues to enhance the compliance framework and streamline
audit processes. He conducts reviews to assess compliance violations and suggests
improvements, while also creating compliance training and awareness programs for
all employees. The Operations Compliance Manager possesses strong organizational
skills and is effective in identifying areas of improvement or compliance gaps.
He is a strategic planner who can implement changes to enhance compliance measures
over time, adapting to the shifting market trends.
- The Arts Educators are responsible for designing, implementing, and evaluating
learning experiences while utilizing effective assessment techniques to ensure
that learners meet established standards. Their teaching is enriched by their
own artistic practice in their selected art form. With a solid grasp of effective
teaching methodologies and learning strategies, they skillfully adjust these approaches
to cater to specific contexts, student needs, and educational goals. They guide
learners in realizing their full potential in their craft and deepening their
understanding and appreciation of artistic endeavors. Arts Educators foster creativity
and equip students with the necessary tools to explore their ideas and imagination.
They deliver arts education programs across various settings, including schools,
universities, community centers, welfare organizations, and co-curricular activities,
serving a diverse range of students. They are committed to enhancing arts education
through the development and refinement of pedagogies, programs, and curricula.
Additionally, they actively engage with arts and arts education organizations
while mentoring emerging artists. They engage in self-reflection and adopt a critical
approach to their teaching and artistic practice, often developing a distinctive
teaching style that reflects their individuality.
datasets:
- frankwong2001/ssf-train-valid-full-synthetic-batch10
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on mixedbread-ai/mxbai-embed-large-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) on the [ssf-train-valid-full-synthetic-batch10](https://huggingface.co/datasets/frankwong2001/ssf-train-valid-full-synthetic-batch10) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) <!-- at revision db9d1fe0f31addb4978201b2bf3e577f3f8900d2 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [ssf-train-valid-full-synthetic-batch10](https://huggingface.co/datasets/frankwong2001/ssf-train-valid-full-synthetic-batch10)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
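The pooling module above uses CLS-token pooling. As a reference, here is a minimal sketch of reproducing the same embedding with plain 🤗 Transformers; the example sentence is illustrative only, and any query-side prompt handling is omitted:

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "frankwong2001/2_attempt_mxbai-embed-large-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer(
    ["The Waste and Recyclables Collection Executive assists with the management of waste collection operations."],
    padding=True,
    truncation=True,
    max_length=512,
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

# CLS pooling, mirroring pooling_mode_cls_token=True above: keep the first token's hidden state
embeddings = outputs.last_hidden_state[:, 0]
print(embeddings.shape)  # torch.Size([1, 1024])
```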
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("frankwong2001/2_attempt_mxbai-embed-large-v1")
# Run inference
queries = [
"The Operations Risk and Control Manager is responsible for managing risk and control activities for the organisation and ensuring compliance with any applicable guidelines, laws and regulations. He/She will monitor high risk operational and emerging risk incidents with the aim of strengthening the organisation\u0027s control environment and improving control processes. He conducts investigations to identify risk incidents and determine corrective actions, and develops incident response and crisis management protocols to deal with potential emergencies. The Operations Risk and Control Manager possesses analytical capabilities and a keen eye for pinpointing sources of risks or potential crises. He is a quick thinker who is able to make decisions under tight timelines so as to address and resolve risk incidents as they arise and adapt to the changing regulatory environment.",
]
documents = [
'The Operations Risk and Control Manager is tasked with overseeing risk and control measures within the organization, ensuring adherence to relevant guidelines, laws, and regulations. He/She will assess high-risk operational incidents and emerging threats to enhance the control framework and refine control processes. He conducts thorough investigations to pinpoint risk occurrences and formulate corrective measures, while also developing incident response and crisis management strategies for potential emergencies. The Operations Risk and Control Manager has strong analytical skills and is adept at identifying sources of risk or potential crises. He is a decisive thinker who can make timely decisions to address and resolve risk incidents as they emerge, adapting to the evolving regulatory landscape.',
'The Operations Compliance Manager is responsible for overseeing compliance and audit processes for the organization while ensuring alignment with various industry standards and practices. He/She will evaluate low-risk operational activities and existing compliance issues to enhance the compliance framework and streamline audit processes. He conducts reviews to assess compliance violations and suggests improvements, while also creating compliance training and awareness programs for all employees. The Operations Compliance Manager possesses strong organizational skills and is effective in identifying areas of improvement or compliance gaps. He is a strategic planner who can implement changes to enhance compliance measures over time, adapting to the shifting market trends.',
'The Arts Educators are responsible for designing, implementing, and evaluating learning experiences while utilizing effective assessment techniques to ensure that learners meet established standards. Their teaching is enriched by their own artistic practice in their selected art form. With a solid grasp of effective teaching methodologies and learning strategies, they skillfully adjust these approaches to cater to specific contexts, student needs, and educational goals. They guide learners in realizing their full potential in their craft and deepening their understanding and appreciation of artistic endeavors. Arts Educators foster creativity and equip students with the necessary tools to explore their ideas and imagination. They deliver arts education programs across various settings, including schools, universities, community centers, welfare organizations, and co-curricular activities, serving a diverse range of students. They are committed to enhancing arts education through the development and refinement of pedagogies, programs, and curricula. Additionally, they actively engage with arts and arts education organizations while mentoring emerging artists. They engage in self-reflection and adopt a critical approach to their teaching and artistic practice, often developing a distinctive teaching style that reflects their individuality.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 1024] [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.9659, 0.7083, 0.2425]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### ssf-train-valid-full-synthetic-batch10
* Dataset: [ssf-train-valid-full-synthetic-batch10](https://huggingface.co/datasets/frankwong2001/ssf-train-valid-full-synthetic-batch10) at [b687585](https://huggingface.co/datasets/frankwong2001/ssf-train-valid-full-synthetic-batch10/tree/b68758513f8ec1b0c3891bcd284e05a599f51bce)
* Size: 4,524 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 54 tokens</li><li>mean: 168.61 tokens</li><li>max: 404 tokens</li></ul> | <ul><li>min: 57 tokens</li><li>mean: 163.11 tokens</li><li>max: 369 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 135.91 tokens</li><li>max: 374 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Multi-Utility Operations Team Leader leads the day-to-day power plant operations by assigning tasks to junior team members, performs high voltage switching operational works and drives the rectification of all major plant faults, defects and outages. He/She supervises the first line maintenance works. He develops staff capabilities through on-the-job training and coaching. He monitors Permits-to-Work procedures, and ensures works are done according to Safe System of Work (SSoW) practices. In times of emergency, he facilitates the implementation of emergency response plans and relevant safety procedures. He also supervises the Emergency Response Team on site incident management. He works at the power plant station and may be required to perform shift work. He possesses good leadership and interpersonal skills in leading the operations teams. He is also systematic and able to respond to situations quickly in times of faults or outages.</code> | <code>The Multi-Utility Operations Team Leader is responsible for managing the daily operations of the power plant by delegating tasks to junior team members, executing high voltage switching operations, and addressing all significant plant faults, defects, and outages. He/She oversees first line maintenance activities and enhances staff capabilities through on-the-job training and coaching. He monitors Permits-to-Work procedures to ensure compliance with Safe System of Work (SSoW) practices. In emergencies, he facilitates the execution of emergency response plans and relevant safety protocols, while also supervising the Emergency Response Team during on-site incidents. He works at the power plant station and may be required to perform shift work. He demonstrates strong leadership and interpersonal skills in guiding the operations teams and is systematic, responding swiftly to faults or outages.</code> | <code>The Multi-Utility Operations Team Supervisor manages the daily logistics for the distribution center by assigning tasks to assistant staff, oversees low voltage electrical installation projects, and addresses all minor warehouse issues and delays. He/She coordinates routine inventory checks and enhances staff efficiency through training sessions and workshops. He monitors compliance with shipping regulations and ensures operations adhere to standard operating procedures (SOP). In critical situations, he facilitates the execution of logistical plans and relevant operational protocols, while also supervising the Inventory Management Team during stock assessments. He works at the distribution center and may be required to perform regular office hours. He demonstrates excellent organizational and communication skills in managing the logistics teams and is methodical, adapting quickly to challenges or delays.</code> |
| <code>The Technician (Component Repair & OverhaulMechanical) performs maintenance, repair and overhaul (MRO) tasks for aircraft components in accordance with technical manuals and standard operating procedures (SOPs). He/She examines parts for maintenance, repair or replacement. He/She troubleshoots component defects and takes corrective actions to restore components to the desired performance requirements. He also performs special processes and repair of composite structures, and documents all completed tasks. He may be authorised by the organisation to perform quality control functions, including inspection of incoming materials and outgoing serviced items, and registration of non-conformances. He may also be authorised to perform level 1 non-destructive testing (NDT) functions under supervision, perform evaluations for acceptance or rejection of aircraft components, and record results as specified in the work instructions. He complies with airworthiness and legislative requirements, and t...</code> | <code>The Technician (Component Repair & Overhaul Mechanical) is responsible for performing maintenance, repair, and overhaul (MRO) activities on aircraft components according to technical manuals and standard operating procedures (SOPs). He/She inspects parts for maintenance, repair, or replacement needs, troubleshoots component defects, and implements corrective actions to ensure components meet performance standards. Additionally, he/she carries out special processes and repairs of composite structures while documenting all completed tasks. The technician may also be authorized to conduct quality control functions, such as inspecting incoming materials and outgoing serviced items, as well as registering non-conformances. Furthermore, he/she may perform level 1 non-destructive testing (NDT) functions under supervision, evaluate aircraft components for acceptance or rejection, and record results as outlined in work instructions. He/She adheres to airworthiness and legislative requirements, ...</code> | <code>The Chef prepares gourmet meals and creates unique recipes for a fine dining restaurant. He/She manages kitchen staff, ensures food safety standards are met, and collaborates with suppliers to source fresh ingredients. Additionally, he/she designs menus that highlight seasonal produce and oversees the presentation of dishes to enhance customer experience. The chef conducts food tastings and works to innovate culinary techniques, while maintaining a clean and organized kitchen environment. He/She may also participate in promotional events to showcase the restaurant's offerings and engage with guests.</code> |
| <code>The Relationship Management Director - Small and Medium Enterprises is responsible for defining strategies for team members to achieve mass sales acquisition. He/She provides oversight to due diligence, compliance and Anti-Money Laundering (AML) processes carried out by team members. He sets policies and guidelines for ongoing support processes pertaining to credit responsibilities. He guides his team to achieve their performance targets and ensures they have the training necessary to deliver on their responsibilities. The Relationship Management Director - Small and Medium Enterprises is a strong leader who provides mentoring and coaching to his team members to allow them to succeed in their roles. He is a strong communicator with internal and external stakeholders. He is always looking for opportunities to provide enhanced services to clients. He uses analytics and problem solving capabilities to foster an environment that will yield results. He is accountable for the defined standar...</code> | <code>The Relationship Management Director - Small and Medium Enterprises is tasked with developing strategies that enable team members to achieve significant sales growth. He/She supervises the due diligence, compliance, and Anti-Money Laundering (AML) procedures executed by the team. He establishes policies and guidelines for ongoing support processes related to credit responsibilities. He mentors his team to meet their performance goals and ensures they receive the necessary training to fulfill their duties. The Relationship Management Director - Small and Medium Enterprises is an effective leader who provides guidance and support to help his team thrive in their positions. He excels in communication with both internal and external stakeholders. He consistently seeks opportunities to enhance client services. He leverages analytics and problem-solving skills to create a results-oriented environment. He is responsible for upholding the standards he sets for his team.</code> | <code>The Relationship Management Director - Large Enterprises is responsible for creating strategies for team members to achieve substantial market share. He/She oversees the financial audits, regulatory compliance, and Anti-Bribery measures conducted by team members. He formulates policies and frameworks for ongoing management processes relating to financial responsibilities. He directs his team to exceed their sales targets and ensures they have the resources needed to perform their duties. The Relationship Management Director - Large Enterprises is a proactive leader who offers training and support to his team members to enable them to excel in their functions. He is an effective communicator with clients and vendors. He frequently identifies opportunities to improve operational efficiencies. He utilizes data analysis and strategic planning to cultivate an environment that fosters success. He is responsible for the established benchmarks he sets for his team.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
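For readers who want to reproduce a comparable run, the following is a minimal training sketch under the loss settings above; the split names and output directory are assumptions, and the full hyperparameters actually used are listed further below:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")
# Split names ("train"/"validation") are assumptions; adjust to the dataset's actual configuration
dataset = load_dataset("frankwong2001/ssf-train-valid-full-synthetic-batch10")

# scale=20.0 with cosine similarity, matching the loss parameters shown above
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="mxbai-ssf-finetune",
    num_train_epochs=5,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    bf16=True,
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    loss=loss,
)
trainer.train()
```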
### Evaluation Dataset
#### ssf-train-valid-full-synthetic-batch10
* Dataset: [ssf-train-valid-full-synthetic-batch10](https://huggingface.co/datasets/frankwong2001/ssf-train-valid-full-synthetic-batch10) at [b687585](https://huggingface.co/datasets/frankwong2001/ssf-train-valid-full-synthetic-batch10/tree/b68758513f8ec1b0c3891bcd284e05a599f51bce)
* Size: 1,131 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 64 tokens</li><li>mean: 169.57 tokens</li><li>max: 348 tokens</li></ul> | <ul><li>min: 62 tokens</li><li>mean: 163.13 tokens</li><li>max: 331 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 135.5 tokens</li><li>max: 323 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The Assistant Equipment Engineer applies engineering principles and techniques to support equipment engineering processes in a manufacturing environment to meet organisational objectives. He/She also assists in analysing equipment maintenance issues. In addition, the Assistant Equipment Engineer participates in equipment improvement projects, and partakes in the development of maintenance plans in accordance with organisational objectives. The Assistant Equipment Engineer is required to have strong communication skills, good teamwork and an analytical mind to perform his role well to achieve the desired organisational outcomes.</code> | <code>The Assistant Equipment Engineer utilizes engineering principles and techniques to enhance equipment engineering processes within a manufacturing setting, aligning with organizational goals. He/She also aids in evaluating equipment maintenance challenges. Furthermore, the Assistant Equipment Engineer engages in equipment enhancement initiatives and contributes to the formulation of maintenance strategies in line with organizational objectives. Strong communication skills, effective teamwork, and analytical thinking are essential for the Assistant Equipment Engineer to succeed in achieving the desired organizational results.</code> | <code>The Assistant Mechanical Engineer employs design principles and techniques to assist mechanical engineering tasks in a construction environment to fulfill project requirements. He/She also helps in reviewing machinery performance issues. Additionally, the Assistant Mechanical Engineer takes part in machinery optimization projects and contributes to the creation of operational strategies that meet project goals. Strong leadership abilities, effective collaboration, and critical thinking are necessary for the Assistant Mechanical Engineer to excel in reaching the intended project outcomes.</code> |
| <code>The Brokerage Supervisor/ Freight Supervisor is responsible for liaising with customers, logistics operators and customs officials and supervising the custom clearance/freight forwarding operations to ensure goods are cleared through customs or quarantine in accordance with import and export laws and regulations. Analytical and systematic, he/she is required to supervise a freight operations team to execute operations in a timely manner to meet business and customers' requirements. He/She is also expected to work with internal and external stakeholders to accomplish his work.</code> | <code>The Brokerage Supervisor/Freight Supervisor is tasked with coordinating with customers, logistics providers, and customs authorities while overseeing the customs clearance and freight forwarding processes to ensure that goods comply with import and export regulations. With a strong analytical and systematic approach, he/she leads a freight operations team to execute tasks promptly, meeting both business and customer needs. Additionally, he/she collaborates with internal and external stakeholders to achieve work objectives.</code> | <code>The Freight Operations Manager is responsible for interacting with suppliers, transportation companies, and regulatory agencies while managing the delivery and logistics services to guarantee that products adhere to supply chain protocols. With a focus on detail-oriented and organized practices, he/she directs a logistics team to carry out operations efficiently, fulfilling both company and supplier expectations. Furthermore, he/she engages with internal and external partners to fulfill his/her duties.</code> |
| <code>The Production Planner is responsible for managing and executing production plans and schedules to ensure that products are delivered to customers on time and within schedule. He/She plans for the entire production supply chain from feedstock to production, storage and distribution, and analyses production data to optimise production and inventory control. The Production Planner coordinates with the maintenance planning team to align production targets with the planning of maintenance and turnaround schedules. He supports the reporting of plant production status and raw materials inventories, and highlights issues that may affect production output. He monitors feedstock movement to ensure minimal interruption to the production schedule. In addition, he identifies opportunities for continuous improvement in the organisations supply chain operations. The Production Planner works closely with the production, maintenance planning, sales and logistics teams, and interfaces with suppliers an...</code> | <code>The Production Planner is tasked with overseeing and implementing production schedules to guarantee timely delivery of products to customers. He/She is responsible for planning the complete production supply chain, from the initial feedstock to production, storage, and distribution, while analyzing production data to enhance production efficiency and inventory management. The Production Planner collaborates with the maintenance planning team to synchronize production objectives with maintenance and turnaround schedules. He supports the reporting of plant production status and raw material inventories, addressing any issues that could impact production output. He ensures smooth feedstock movement to minimize disruptions to the production timeline and identifies opportunities for ongoing improvements in the organization's supply chain operations. The Production Planner works in close partnership with the production, maintenance planning, sales, and logistics teams, while also engaging wi...</code> | <code>The Software Developer creates applications and software solutions tailored to meet client needs, focusing on coding, debugging, and testing software programs. He/She collaborates with cross-functional teams to design user-friendly interfaces and enhance user experience. The Software Developer is responsible for maintaining and updating existing software, ensuring optimal performance and security standards are met. He conducts code reviews and provides technical support to other team members while staying updated on the latest industry trends and technologies.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `max_grad_norm`: 0.5
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `warmup_steps`: 1500
- `bf16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `gradient_checkpointing`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 0.5
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 1500
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:------:|:-------------:|:---------------:|
| 1.0 | 9 | 0.0734 | 0.0209 |
| 2.0 | 18 | 0.0584 | 0.0204 |
| 3.0 | 27 | 0.0542 | 0.0195 |
| 4.0 | 36 | 0.0527 | 0.0169 |
| **5.0** | **45** | **0.0443** | **0.0156** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.55.0
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mradermacher/mistral7b_malay_tuned-GGUF
|
mradermacher
| 2025-09-04T09:54:03Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"en",
"base_model:DanHauri/mistral7b_malay_tuned",
"base_model:quantized:DanHauri/mistral7b_malay_tuned",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-04T08:36:30Z |
---
base_model: DanHauri/mistral7b_malay_tuned
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/DanHauri/mistral7b_malay_tuned
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#mistral7b_malay_tuned-GGUF).***
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
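As one concrete option among the many GGUF runtimes, here is a minimal llama-cpp-python sketch; the quant filename is taken from the table below, while the context size and the plain-text prompt format are assumptions:

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

# Pull the Q4_K_M quant ("fast, recommended" in the table below) straight from this repo
llm = Llama.from_pretrained(
    repo_id="mradermacher/mistral7b_malay_tuned-GGUF",
    filename="mistral7b_malay_tuned.Q4_K_M.gguf",
    n_ctx=4096,
)

# The fine-tune's chat template is not documented here, so a plain-text prompt is used as an assumption
out = llm("Terangkan secara ringkas apa itu pembelajaran mesin.", max_tokens=128)
print(out["choices"][0]["text"])
```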
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mistral7b_malay_tuned-GGUF/resolve/main/mistral7b_malay_tuned.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/mistral7b_malay_tuned-GGUF/resolve/main/mistral7b_malay_tuned.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/mistral7b_malay_tuned-GGUF/resolve/main/mistral7b_malay_tuned.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mistral7b_malay_tuned-GGUF/resolve/main/mistral7b_malay_tuned.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/mistral7b_malay_tuned-GGUF/resolve/main/mistral7b_malay_tuned.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/mistral7b_malay_tuned-GGUF/resolve/main/mistral7b_malay_tuned.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral7b_malay_tuned-GGUF/resolve/main/mistral7b_malay_tuned.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral7b_malay_tuned-GGUF/resolve/main/mistral7b_malay_tuned.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mistral7b_malay_tuned-GGUF/resolve/main/mistral7b_malay_tuned.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/mistral7b_malay_tuned-GGUF/resolve/main/mistral7b_malay_tuned.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mistral7b_malay_tuned-GGUF/resolve/main/mistral7b_malay_tuned.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/mistral7b_malay_tuned-GGUF/resolve/main/mistral7b_malay_tuned.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Maarij-Aqeel/lunar_lander_RL
|
Maarij-Aqeel
| 2025-09-04T09:53:26Z | 16 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-02T05:49:19Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 287.69 +/- 18.45
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (PPO is the algorithm named above; the checkpoint filename inside the repo is an assumption, so check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained checkpoint from the Hub; the filename here is an assumption.
checkpoint = load_from_hub(repo_id="Maarij-Aqeel/lunar_lander_RL", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
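With `model` loaded as above, a hedged sketch for reproducing the reported mean reward (assumes `gymnasium` with the Box2D extras is installed):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Build the environment named in this card and average the episodic return over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```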
|
mradermacher/TiTan-Llama-3.2-1B-GGUF
|
mradermacher
| 2025-09-04T09:47:54Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"lora",
"sft",
"trl",
"unsloth",
"fine-tuned",
"en",
"dataset:theprint/titles-n-tags-alpaca",
"base_model:theprint/TiTan-Llama-3.2-1B",
"base_model:adapter:theprint/TiTan-Llama-3.2-1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-04T09:34:35Z |
---
base_model: theprint/TiTan-Llama-3.2-1B
datasets:
- theprint/titles-n-tags-alpaca
language: en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- lora
- sft
- transformers
- trl
- unsloth
- fine-tuned
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/theprint/TiTan-Llama-3.2-1B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#TiTan-Llama-3.2-1B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TiTan-Llama-3.2-1B-GGUF/resolve/main/TiTan-Llama-3.2-1B.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/TiTan-Llama-3.2-1B-GGUF/resolve/main/TiTan-Llama-3.2-1B.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/TiTan-Llama-3.2-1B-GGUF/resolve/main/TiTan-Llama-3.2-1B.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TiTan-Llama-3.2-1B-GGUF/resolve/main/TiTan-Llama-3.2-1B.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/TiTan-Llama-3.2-1B-GGUF/resolve/main/TiTan-Llama-3.2-1B.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/TiTan-Llama-3.2-1B-GGUF/resolve/main/TiTan-Llama-3.2-1B.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TiTan-Llama-3.2-1B-GGUF/resolve/main/TiTan-Llama-3.2-1B.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TiTan-Llama-3.2-1B-GGUF/resolve/main/TiTan-Llama-3.2-1B.Q5_K_S.gguf) | Q5_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/TiTan-Llama-3.2-1B-GGUF/resolve/main/TiTan-Llama-3.2-1B.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/TiTan-Llama-3.2-1B-GGUF/resolve/main/TiTan-Llama-3.2-1B.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TiTan-Llama-3.2-1B-GGUF/resolve/main/TiTan-Llama-3.2-1B.Q8_0.gguf) | Q8_0 | 1.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TiTan-Llama-3.2-1B-GGUF/resolve/main/TiTan-Llama-3.2-1B.f16.gguf) | f16 | 2.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
youryoui/blockassist-bc-toothy_pale_clam_1756979097
|
youryoui
| 2025-09-04T09:45:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"toothy pale clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:44:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- toothy pale clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Chukky10z/blockassist-bc-mammalian_jumping_cougar_1756978936
|
Chukky10z
| 2025-09-04T09:42:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian jumping cougar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:42:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian jumping cougar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-pouncing_camouflaged_chameleon_1756978918
|
youryoui
| 2025-09-04T09:42:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pouncing camouflaged chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:41:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pouncing camouflaged chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
starwarindia/addrparser-iaparser
|
starwarindia
| 2025-09-04T09:37:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"address-parsing",
"finetuned",
"checkpoints",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct",
"region:us"
] | null | 2025-09-04T08:32:36Z |
---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: peft
tags:
- address-parsing
- finetuned
- checkpoints
---
# AddrParser-Qwen05B (PEFT Adapter)
This repository contains **100+ training checkpoints** for a PEFT-finetuned model based on [Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
The model was trained for **address parsing** (extracting structured address components from free-text input).
📌 The **final and recommended checkpoint is `checkpoint-108168`**, but all intermediate checkpoints (`checkpoint-1000` → `checkpoint-108168`) are included for reproducibility and research.
---
## Model Details
- **Base model:** Qwen/Qwen2.5-0.5B-Instruct
- **Method:** PEFT (LoRA adapters)
- **Language:** English + Indian addresses (mixed formatting)
- **Task:** Address parsing (NLP → structured fields)
- **Author:** [starwarindia](https://huggingface.co/starwarindia)
- **License:** MIT (same as base model unless specified)
---
## Repo Structure
```
.
├── adapter_config.json
├── adapter_model.safetensors # adapter weights
├── tokenizer.json / vocab.json # tokenizer files
├── training_args.bin
├── checkpoint-1000/
├── checkpoint-10000/
├── checkpoint-20000/
│ ...
├── checkpoint-108168/ # ✅ final checkpoint
```
---
## Usage
You can directly load the **final checkpoint (recommended):**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
base_model = "Qwen/Qwen2.5-0.5B-Instruct"
peft_repo = "starwarindia/addrparser-iaparser"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
# Adapter weights live in checkpoint subfolders of the repo, so point PEFT at the subfolder.
model = PeftModel.from_pretrained(model, peft_repo, subfolder="checkpoint-108168")
text = "Flat No 12, Green Park Apartments, MG Road, Bangalore 560001"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
If you want to explore **other versions**, just change the subfolder (e.g. `subfolder="checkpoint-50000"`).
---
## Checkpoints
- ✅ **Final:** `checkpoint-108168`
- 🧪 Intermediate: `checkpoint-1000`, `checkpoint-10000`, … `checkpoint-107000`
---
## Intended Uses
- Training analysis (study performance over training steps)
- Research in **address parsing & sequence tagging**
- Production use: **recommended to use `checkpoint-108168`**
⚠️ **Out-of-scope:** Not suitable for general-purpose reasoning or unrelated tasks.
---
## Limitations & Risks
- May not generalize perfectly on unseen global address formats
- Trained primarily on English + Indian addresses
- Sensitive to formatting variations (punctuation, missing fields)
---
## Citation
If you use this work, please cite:
```bibtex
@misc{addrparser2025,
title = {AddrParser-Qwen05B (PEFT Adapter)},
author = {starwarindia},
howpublished = {\url{https://huggingface.co/starwarindia/addrparser-iaparser}},
year = {2025}
}
```
---
## Acknowledgements
- [Qwen Team](https://huggingface.co/Qwen) for the base model
- Hugging Face PEFT library
- Google Cloud for training infrastructure
|
youryoui/blockassist-bc-shiny_hardy_stork_1756978604
|
youryoui
| 2025-09-04T09:37:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shiny hardy stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:36:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shiny hardy stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hoveyc/comfyui-models
|
hoveyc
| 2025-09-04T09:32:49Z | 11 | 0 |
diffusers
|
[
"diffusers",
"tflite",
"onnx",
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-07-30T04:17:42Z |
---
license: apache-2.0
---
|
youryoui/blockassist-bc-stinky_chattering_shrew_1756978283
|
youryoui
| 2025-09-04T09:31:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinky chattering shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:31:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinky chattering shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GeneroGral/Llama-3.1-8B_BBQ_Stereo_Task_1_dropout_wordMatch_FINAL
|
GeneroGral
| 2025-09-04T09:28:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-04T09:28:28Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** GeneroGral
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vendi11/blockassist-bc-placid_placid_llama_1756977903
|
vendi11
| 2025-09-04T09:25:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:25:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756977844
|
klmdr22
| 2025-09-04T09:24:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:24:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/PersianSciQA-Qwen2.5-14B-GGUF
|
mradermacher
| 2025-09-04T09:16:38Z | 248 | 1 |
transformers
|
[
"transformers",
"gguf",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"lora",
"sft",
"trl",
"fa",
"dataset:safora/PersianSciQA-Extractive",
"base_model:safora/PersianSciQA-Qwen2.5-14B",
"base_model:adapter:safora/PersianSciQA-Qwen2.5-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-30T01:12:41Z |
---
base_model: safora/PersianSciQA-Qwen2.5-14B
datasets:
- safora/PersianSciQA-Extractive
language: fa
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- base_model:adapter:Qwen/Qwen2.5-14B-Instruct
- lora
- sft
- transformers
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/safora/PersianSciQA-Qwen2.5-14B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#PersianSciQA-Qwen2.5-14B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
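As a quick start, a hedged chat-completion sketch with the `llama-cpp-python` bindings (the quant filename is taken from the table below; the Persian prompt is a placeholder):
```python
from llama_cpp import Llama

# Download and load one of the recommended quants from this repo.
llm = Llama.from_pretrained(
    repo_id="mradermacher/PersianSciQA-Qwen2.5-14B-GGUF",
    filename="PersianSciQA-Qwen2.5-14B.Q4_K_S.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "پرسش علمی خود را اینجا بنویسید."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```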
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PersianSciQA-Qwen2.5-14B-GGUF/resolve/main/PersianSciQA-Qwen2.5-14B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Alisia-7B-Instruct-V1-i1-GGUF
|
mradermacher
| 2025-09-04T09:14:07Z | 461 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"en",
"fr",
"base_model:Gems234/Alisia-7B-Instruct-V1",
"base_model:quantized:Gems234/Alisia-7B-Instruct-V1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-31T06:41:38Z |
---
base_model: Gems234/Alisia-7B-Instruct-V1
language:
- en
- fr
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/Gems234/Alisia-7B-Instruct-V1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Alisia-7B-Instruct-V1-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Alisia-7B-Instruct-V1-i1-GGUF/resolve/main/Alisia-7B-Instruct-V1.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
AnerYubo/blockassist-bc-prowling_pudgy_gerbil_1756977217
|
AnerYubo
| 2025-09-04T09:13:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prowling pudgy gerbil",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:13:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prowling pudgy gerbil
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-iridescent_mangy_warthog_1756977190
|
youryoui
| 2025-09-04T09:13:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent mangy warthog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:13:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent mangy warthog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-soft_curious_camel_1756977067
|
youryoui
| 2025-09-04T09:11:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soft curious camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:11:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soft curious camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gopterwegop/blockassist-bc-smooth_aquatic_turtle_1756976958
|
gopterwegop
| 2025-09-04T09:10:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"smooth aquatic turtle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:09:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- smooth aquatic turtle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-carnivorous_crested_cheetah_1756976949
|
youryoui
| 2025-09-04T09:09:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"carnivorous crested cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:09:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- carnivorous crested cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-freckled_amphibious_dove_1756976834
|
youryoui
| 2025-09-04T09:07:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"freckled amphibious dove",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:07:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- freckled amphibious dove
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-downy_thorny_pheasant_1756976606
|
youryoui
| 2025-09-04T09:03:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"downy thorny pheasant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:03:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- downy thorny pheasant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-silent_sly_rabbit_1756976423
|
youryoui
| 2025-09-04T09:00:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silent sly rabbit",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T09:00:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silent sly rabbit
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
phamnhungoctuan/blockassist-bc-lethal_untamed_ostrich_1756976167
|
phamnhungoctuan
| 2025-09-04T08:59:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lethal untamed ostrich",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T08:59:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lethal untamed ostrich
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lebar-mj/NLP-RLVR-checkpoints
|
lebar-mj
| 2025-09-04T08:56:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:allenai/Llama-3.1-Tulu-3-8B-SFT",
"base_model:finetune:allenai/Llama-3.1-Tulu-3-8B-SFT",
"endpoints_compatible",
"region:us"
] | null | 2025-08-15T22:46:40Z |
---
base_model: allenai/Llama-3.1-Tulu-3-8B-SFT
library_name: transformers
model_name: NLP-RLVR-checkpoints
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for NLP-RLVR-checkpoints
This model is a fine-tuned version of [allenai/Llama-3.1-Tulu-3-8B-SFT](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B-SFT).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lebar-mj/NLP-RLVR-checkpoints", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mlebar-university-of-chicago/huggingface/runs/rr4qvts9)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.22.1
- Transformers: 4.56.0
- Pytorch: 2.8.0.dev20250319+cu128
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite GRPO as:
```bibtex
@article{shao2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
gsjang/ko-llama-3-korean-bllossom-8b-x-meta-llama-3-8b-instruct-skt
|
gsjang
| 2025-09-04T08:53:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:merge:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta-llama/Meta-Llama-3-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-04T08:50:06Z |
---
base_model:
- MLP-KTLim/llama-3-Korean-Bllossom-8B
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# ko-llama-3-korean-bllossom-8b-x-meta-llama-3-8b-instruct-skt
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Spectral Knowledge Transfer (SKT) merge method using [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
tokenizer:
source: union
merge_method: skt
models:
- model: MLP-KTLim/llama-3-Korean-Bllossom-8B
- model: meta-llama/Meta-Llama-3-8B-Instruct
base_model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
beta: 12.0
gamma: 1.0
eps: 1.0e-08
energy_keep: 0.98
svd_on_cpu: false
t_fallback: 0.5
write_readme: README.md
```
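To reproduce a merge like this locally, a hedged sketch using mergekit's Python entry point (import paths follow the mergekit README and may differ between versions; it also assumes the installed mergekit provides the `skt` merge method used here, and that `config.yaml` holds the YAML above):
```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the merge recipe shown above and write the merged model to ./merged-model.
with open("config.yaml") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./merged-model",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```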
|
youryoui/blockassist-bc-agile_short_penguin_1756975856
|
youryoui
| 2025-09-04T08:51:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"agile short penguin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T08:50:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- agile short penguin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jinaai/jina-code-embeddings-1.5b-GGUF
|
jinaai
| 2025-09-04T08:47:41Z | 1,239 | 0 | null |
[
"gguf",
"arxiv:2508.21290",
"base_model:jinaai/jina-code-embeddings-1.5b",
"base_model:quantized:jinaai/jina-code-embeddings-1.5b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:eu"
] | null | 2025-08-29T07:36:07Z |
---
base_model:
- jinaai/jina-code-embeddings-1.5b
base_model_relation: quantized
license: cc-by-nc-4.0
---
<p align="center">
<img src="https://huggingface.co/datasets/jinaai/documentation-images/resolve/main/logo.webp" alt="Jina AI: Your Search Foundation, Supercharged!" width="150px">
</p>
<p align="center">
<b>The GGUF version of the code embedding model trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>
# Jina Code Embeddings: A Small but Performant Code Embedding Model
## Intended Usage & Model Info
`jina-code-embeddings-1.5b-GGUF` is the **GGUF export** of our [jina-code-embeddings-1.5b](https://huggingface.co/jinaai/jina-code-embeddings-1.5b), built on [Qwen/Qwen2.5-Coder-1.5B](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B).
The model supports code retrieval and technical QA across **15+ programming languages** and multiple domains, including web development, software development, machine learning, data science, and educational coding problems.
### Key Features
| Feature | Jina Code Embeddings 1.5B GGUF |
|------------------------|--------------------------------|
| Base Model | Qwen2.5-Coder-1.5B |
| Supported Tasks | `nl2code`, `code2code`, `code2nl`, `code2completion`, `qa` |
| Max Sequence Length | 32768 (**recommended ≤ 8192**) |
| Embedding Vector Dim | **1536** |
| Matryoshka Dimensions | 128, 256, 512, 1024, 1536 (**client-side slice**) |
| Pooling Strategy | **MUST use `--pooling last`** (EOS) |
> **Matryoshka note:** `llama.cpp` always returns the full **1536-d** embeddings for this model. To use 128, 256, 512, 1024, or 1536 dimensions, **slice client-side** (e.g., take the first *k* elements).
---
## Task Instructions
Prefix inputs with task-specific instructions:
```python
INSTRUCTION_CONFIG = {
"nl2code": {
"query": "Find the most relevant code snippet given the following query:\n",
"passage": "Candidate code snippet:\n"
},
"qa": {
"query": "Find the most relevant answer given the following question:\n",
"passage": "Candidate answer:\n"
},
"code2code": {
"query": "Find an equivalent code snippet given the following code snippet:\n",
"passage": "Candidate code snippet:\n"
},
"code2nl": {
"query": "Find the most relevant comment given the following code snippet:\n",
"passage": "Candidate comment:\n"
},
"code2completion": {
"query": "Find the most relevant completion given the following start of code snippet:\n",
"passage": "Candidate completion:\n"
}
}
```
Use the appropriate prefix for **queries** and **passages** at inference time.
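For instance, a hedged sketch that prefixes an `nl2code` query, requests an embedding from a local `llama-server` started as shown below, and slices to a smaller Matryoshka dimension on the client side:
```python
import requests

# Task prefix from INSTRUCTION_CONFIG above; the server address matches the examples below.
PREFIX = "Find the most relevant code snippet given the following query:\n"
resp = requests.post(
    "http://localhost:8080/v1/embeddings",
    json={"input": [PREFIX + "print hello world in python"]},
)
embedding = resp.json()["data"][0]["embedding"]
truncated = embedding[:256]  # keep the first k dimensions for a smaller index
print(len(embedding), len(truncated))
```
If you compare truncated vectors with cosine similarity, re-normalize them after slicing.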
---
## Install `llama.cpp`
Follow the official instructions: **[https://github.com/ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp)**
---
## Model files
Hugging Face repo (GGUF): **[https://huggingface.co/jinaai/jina-code-embeddings-1.5b-GGUF](https://huggingface.co/jinaai/jina-code-embeddings-1.5b-GGUF)**
Pick a file (e.g., `jina-code-embeddings-1.5b-F16.gguf`). You can either:
* **auto-download** by passing the **repo and file directly** to `llama.cpp`
* **use a local path** with `-m`
---
## HTTP service with `llama-server`
### Auto-download from Hugging Face (repo + file)
```bash
./llama-server \
--embedding \
--hf-repo jinaai/jina-code-embeddings-1.5b-GGUF \
--hf-file jina-code-embeddings-1.5b-F16.gguf \
--host 0.0.0.0 \
--port 8080 \
--ctx-size 32768 \
--ubatch-size 8192 \
--pooling last
```
### Local file
```bash
./llama-server \
--embedding \
-m /path/to/jina-code-embeddings-1.5b-F16.gguf \
--host 0.0.0.0 \
--port 8080 \
--ctx-size 32768 \
--ubatch-size 8192 \
--pooling last
```
> Tips: `-ngl <N>` to offload layers to GPU. Max context is 32768 but stick to `--ubatch-size` ≤ 8192 for best results.
---
## Query examples (HTTP)
### Native endpoint (`/embedding`)
```bash
curl -X POST http://localhost:8080/embedding \
-H "Content-Type: application/json" \
-d '{
"content": [
"Find the most relevant code snippet given the following query:\nprint hello world in python",
"Candidate code snippet:\nprint(\"Hello World!\")"
]
}'
```
### OpenAI-compatible (`/v1/embeddings`)
```bash
curl http://localhost:8080/v1/embeddings \
-H "Content-Type: application/json" \
-d '{
"input": [
"Find the most relevant code snippet given the following query:\nprint hello world in python",
"Candidate code snippet:\nprint(\"Hello World!\")"
]
}'
```
---
## Training & Evaluation
See our technical report: **[https://arxiv.org/abs/2508.21290](https://arxiv.org/abs/2508.21290)**
---
## Contact
Join our Discord: **[https://discord.jina.ai](https://discord.jina.ai)**
|
youryoui/blockassist-bc-durable_marine_bee_1756975402
|
youryoui
| 2025-09-04T08:43:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"durable marine bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T08:43:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- durable marine bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-hulking_squeaky_seahorse_1756975295
|
youryoui
| 2025-09-04T08:41:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking squeaky seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T08:41:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking squeaky seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
youryoui/blockassist-bc-thick_tame_porcupine_1756975148
|
youryoui
| 2025-09-04T08:39:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thick tame porcupine",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T08:39:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thick tame porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AeonOmniverse/SmolVLMEx01
|
AeonOmniverse
| 2025-09-04T08:35:45Z | 4 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolVLM-Instruct",
"base_model:adapter:HuggingFaceTB/SmolVLM-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-07-31T06:07:40Z |
---
library_name: peft
license: apache-2.0
base_model: HuggingFaceTB/SmolVLM-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: SmolVLMEx01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SmolVLMEx01
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.1
- Pytorch 2.5.1+cu121
- Datasets 3.5.1
- Tokenizers 0.21.1
|
youryoui/blockassist-bc-carnivorous_crested_cheetah_1756974922
|
youryoui
| 2025-09-04T08:35:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"carnivorous crested cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T08:35:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- carnivorous crested cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dondesbond/blockassist-bc-moist_tame_tiger_1756973623
|
dondesbond
| 2025-09-04T08:34:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"moist tame tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T08:34:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- moist tame tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
acidjp/blockassist-bc-pesty_extinct_prawn_1756972050
|
acidjp
| 2025-09-04T08:29:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T08:29:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hamedkharazmi/blockassist-bc-tough_webbed_hamster_1756970931
|
hamedkharazmi
| 2025-09-04T08:29:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tough webbed hamster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T08:29:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tough webbed hamster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756974362
|
klmdr22
| 2025-09-04T08:26:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-04T08:26:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|