modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
noman38877/testkaiizn
noman38877
2025-09-22T15:14:26Z
0
0
adapter-transformers
[ "adapter-transformers", "chemistry", "text-classification", "aa", "dataset:open-r1/Mixture-of-Thoughts", "base_model:deepseek-ai/DeepSeek-R1-0528", "base_model:adapter:deepseek-ai/DeepSeek-R1-0528", "license:apache-2.0", "region:us" ]
text-classification
2025-06-23T12:38:14Z
--- license: apache-2.0 datasets: - open-r1/Mixture-of-Thoughts language: - aa metrics: - accuracy base_model: - deepseek-ai/DeepSeek-R1-0528 new_version: deepseek-ai/DeepSeek-R1-0528 pipeline_tag: text-classification library_name: adapter-transformers tags: - chemistry ---
ricardo-teixeira9/Reinforce-Pixelcopter-PLE-v0
ricardo-teixeira9
2025-09-22T15:10:56Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2025-09-22T12:13:39Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 19.60 +/- 16.61 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
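The Reinforce agent above reports a mean reward of 19.60 +/- 16.61. The heart of the REINFORCE update taught in that course unit is the per-episode discounted return; a minimal sketch (not the repo's actual training code) is:

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute the discounted return G_t for each timestep of one episode.

    Walks the reward list backwards so each step's return reuses the
    already-accumulated tail: G_t = r_t + gamma * G_{t+1}.
    """
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

# Example: a 3-step episode with reward 1 at every step
print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # [1.75, 1.5, 1.0]
```

In REINFORCE these returns (often normalized) weight the log-probabilities of the actions taken when forming the policy-gradient loss.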
Kuongan/DSC_xlm-roberta-base_finetuned
Kuongan
2025-09-22T15:09:38Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-22T14:12:09Z
--- library_name: transformers license: mit base_model: FacebookAI/xlm-roberta-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: DSC_xlm-roberta-base_finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DSC_xlm-roberta-base_finetuned This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7048 - Accuracy: 0.7764 - F1 Macro: 0.7762 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:| | 1.1013 | 1.0 | 175 | 1.0975 | 0.35 | 0.1728 | | 0.7951 | 2.0 | 350 | 0.7609 | 0.7086 | 0.7081 | | 0.629 | 3.0 | 525 | 0.6589 | 0.765 | 0.7666 | | 0.5435 | 4.0 | 700 | 0.6759 | 0.7586 | 0.7602 | | 0.4905 | 5.0 | 875 | 0.7048 | 0.7764 | 0.7762 | | 0.4219 | 6.0 | 1050 | 0.7526 | 0.7521 | 0.7537 | | 0.3399 | 7.0 | 1225 | 0.7930 | 0.7536 | 0.7543 | | 0.2671 | 8.0 | 1400 | 0.8626 | 0.7464 | 0.7474 | | 0.2384 | 9.0 | 1575 | 0.9770 | 0.755 | 0.7561 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.2
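The hyperparameters above describe transformers' `linear` scheduler with 100 warmup steps. A rough sketch of the resulting learning-rate curve, assuming total steps of 175 steps/epoch × 20 epochs = 3500 (inferred from the results table and `num_epochs`):

```python
def linear_schedule_lr(step, base_lr=2e-5, warmup_steps=100, total_steps=3500):
    """Linear warmup to base_lr, then linear decay to zero,
    mirroring transformers' 'linear' lr_scheduler_type."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# Warmup ramps over the first 100 steps, then the rate decays linearly
print(linear_schedule_lr(0))     # 0.0
print(linear_schedule_lr(100))   # 2e-05 (peak, end of warmup)
print(linear_schedule_lr(3500))  # 0.0
```

Note the table stops at epoch 9 (step 1575), so under this schedule the logged runs all sit on the early part of the decay ramp.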
Jackrong/gpt-oss-120B-Distill-Phi-4-v1
Jackrong
2025-09-22T15:08:35Z
0
0
transformers
[ "transformers", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/phi-4-unsloth-bnb-4bit", "base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T15:08:33Z
--- base_model: unsloth/phi-4-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Jackrong - **License:** apache-2.0 - **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Keshavkumae/Shiri
Keshavkumae
2025-09-22T15:06:22Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-22T15:06:22Z
--- license: apache-2.0 ---
leekh7624/CON-PPO-TEST2
leekh7624
2025-09-22T15:04:50Z
0
0
transformers
[ "transformers", "safetensors", "electra", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-22T15:03:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nonoJDWAOIDAWKDA/Shiori_reviewed_ft_StyleTTS2
nonoJDWAOIDAWKDA
2025-09-22T15:00:11Z
0
0
null
[ "text-to-speech", "StyleTTS2", "speech-synthesis", "en", "license:mit", "region:us" ]
text-to-speech
2025-09-22T14:59:27Z
--- language: en tags: - text-to-speech - StyleTTS2 - speech-synthesis license: mit pipeline_tag: text-to-speech --- # StyleTTS2 Fine-tuned Model This model is a fine-tuned version of StyleTTS2, containing all necessary components for inference. ## Model Details - **Base Model:** StyleTTS2-LibriTTS - **Architecture:** StyleTTS2 - **Task:** Text-to-Speech - **Last Checkpoint:** epoch_2nd_00004.pth ## Training Details - **Total Epochs:** 5 - **Completed Epochs:** 4 - **Total Iterations:** 1225 - **Batch Size:** 2 - **Max Length:** 620 - **Learning Rate:** 0.0001 - **Final Validation Loss:** 0.400027 ## Model Components The repository includes all necessary components for inference: ### Main Model Components: - bert.pth - bert_encoder.pth - predictor.pth - decoder.pth - text_encoder.pth - predictor_encoder.pth - style_encoder.pth - diffusion.pth - text_aligner.pth - pitch_extractor.pth - mpd.pth - msd.pth - wd.pth ### Utility Components: - ASR (Automatic Speech Recognition) - epoch_00080.pth - config.yml - models.py - layers.py - JDC (F0 Prediction) - bst.t7 - model.py - PLBERT - step_1000000.t7 - config.yml - util.py ### Additional Files: - text_utils.py: Text preprocessing utilities - models.py: Model architecture definitions - utils.py: Utility functions - config.yml: Model configuration - config.json: Detailed configuration and training metrics ## Training Metrics Training metrics visualization is available in training_metrics.png ## Directory Structure ├── Utils/ │ ├── ASR/ │ ├── JDC/ │ └── PLBERT/ ├── model_components/ └── configs/ ## Usage Instructions 1. Load the model using the provided config.yml 2. Ensure all utility components (ASR, JDC, PLBERT) are in their respective directories 3. Use text_utils.py for text preprocessing 4. Follow the inference example in the StyleTTS2 documentation
aamijar/Llama-2-7b-hf-dora-r8-rte-epochs1
aamijar
2025-09-22T14:59:28Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-22T14:59:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MananSuri27/Qwen2.5-3B-Instruct-GRPO-Uncertainty-ARGUS-20250922_040817
MananSuri27
2025-09-22T14:58:37Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen2.5-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T14:57:33Z
--- base_model: unsloth/Qwen2.5-3B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** MananSuri27 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
opendiffusionai/xlsd32-beta1
opendiffusionai
2025-09-22T14:57:25Z
0
0
diffusers
[ "diffusers", "safetensors", "base_model:stable-diffusion-v1-5/stable-diffusion-v1-5", "base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2025-09-22T13:58:28Z
--- base_model: - stable-diffusion-v1-5/stable-diffusion-v1-5 --- # SD1.5 model, with SDXL VAE grafted on, then retrained to work properly Currently only in huggingface/diffusers format; a "checkpoint" model may follow later. # Creation notes: dataset: 80k square images phase 1: FP32, b32a8, optimi LION, LR 1e-5 const, for only 150 steps. Model locked except for the following layers: in, out, up.3, down.0. Note that the smaller set of trainable params lets us use b32 on a 4090 here. phase 2: FP32, b16a16, optimi LION, initial LR 1e-5, linear over 6 epochs (1920 effective steps); picked step 1800. Phase 2 took around 15 hours, so total time was maybe 16 hours. ## Why 2-phase In theory, phase 1 wasn't strictly necessary. However, early retraining would most likely make very large changes to the core model that aren't needed for VAE retraining, so I opted for minimal disruption.
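The shorthand above ("b16a16", "1920 effective steps") describes gradient accumulation: the optimizer steps once per `batch_size × grad_accum` images. A small sketch of the arithmetic; 81,920 images is an assumption (a plausible reading of "80k") chosen because it reproduces the card's 1920-step figure exactly:

```python
def effective_steps(num_images, batch_size, grad_accum, epochs):
    """Optimizer steps over a run when gradients are accumulated:
    one step per (batch_size * grad_accum) images, per epoch."""
    steps_per_epoch = num_images // (batch_size * grad_accum)
    return steps_per_epoch * epochs

# Phase 2: "b16a16" over 6 epochs on ~80k images
print(effective_steps(81920, 16, 16, 6))  # 1920
```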
MattBou00/llama-3-2-1b-detox_v1f_SCALE9_round3-checkpoint-epoch-80
MattBou00
2025-09-22T14:56:28Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-09-22T14:54:48Z
--- license: apache-2.0 library_name: transformers tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_14-39-42/checkpoints/checkpoint-epoch-80") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_14-39-42/checkpoints/checkpoint-epoch-80") model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_14-39-42/checkpoints/checkpoint-epoch-80") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
yasserrmd/SinaReason-Magistral-2509
yasserrmd
2025-09-22T14:56:16Z
79
4
transformers
[ "transformers", "safetensors", "mistral3", "image-to-text", "medical", "clinical-reasoning", "chain-of-thought", "mistral", "causal-lm", "instruction-tuned", "en", "dataset:FreedomIntelligence/medical-o1-reasoning-SFT", "arxiv:2409.11303", "base_model:mistralai/Magistral-Small-2509", "base_model:finetune:mistralai/Magistral-Small-2509", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-to-text
2025-09-20T13:44:16Z
--- license: apache-2.0 language: - en library_name: transformers tags: - medical - clinical-reasoning - chain-of-thought - mistral - causal-lm - instruction-tuned base_model: mistralai/Magistral-Small-2509 datasets: - FreedomIntelligence/medical-o1-reasoning-SFT --- # SinaReason-Magistral-2509 [![License: Apache 2.0](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![Dataset](https://img.shields.io/badge/Dataset-Medical_Reasoning_SFT-orange.svg)](https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT) [![Base Model](https://img.shields.io/badge/Base%20Model-Magistral--24B-purple.svg)](https://huggingface.co/mistralai/Magistral-Small-2509) [![Fine-tuned with](https://img.shields.io/badge/Fine--tuned%20with-Unsloth-green.svg)](https://github.com/unsloth/unsloth) [![Author](https://img.shields.io/badge/Author-yasserrmd-grey.svg)](https://huggingface.co/yasserrmd) <img src="banner.png" /> **SinaReason** is a powerful, instruction-tuned language model designed for **step-by-step medical clinical reasoning**. It is a fine-tuned version of the formidable `mistralai/Magistral-Small-2509` (a 24B parameter model), specifically adapted for generating a transparent chain-of-thought process before delivering a clinical summary. The name "Sina" is inspired by **Ibn Sina (Avicenna)**, a Persian polymath who is regarded as one of the most significant physicians and thinkers of the Islamic Golden Age. His work, *The Canon of Medicine*, was a standard medical text for centuries, embodying the principles of logical, evidence-based reasoning. This model aims to emulate that spirit by structuring its output to be transparent, logical, and useful for educational and professional clinical settings. ## Key Features * **Advanced Clinical Reasoning:** Leverages the powerful reasoning capabilities of its base model to analyze clinical vignettes. 
* **Chain-of-Thought (CoT) Output:** Uniquely structured to first externalize its reasoning process within `<think>...</think>` tags, showing its work before providing a conclusion. * **Built for Education & Professional Support:** Designed to assist clinicians, researchers, and medical students in understanding and formulating clinical logic. * **Instruction-Tuned:** Fine-tuned on the `FreedomIntelligence/medical-o1-reasoning-SFT` dataset to enhance its performance on medical reasoning tasks. --- ## IMPORTANT: Ethical Considerations & Limitations This model is a research and educational tool. It is **NOT** a medical device, and it is **NOT** a substitute for a qualified human medical professional. * **Intended Audience:** This model is designed for use by **medical professionals, researchers, and students** for educational and research purposes. It is explicitly **NOT** intended for use by patients for self-diagnosis or to receive medical advice. * **Risk of Inaccuracy:** As with all language models, **SinaReason can generate incorrect, incomplete, or biased information (hallucinations)**. All outputs must be critically reviewed and independently verified by a human expert before being used in any real-world scenario. * **Built-in Safeguard:** The recommended system prompt (provided below) includes a specific instruction that guides the model to frame its responses for a professional audience and explicitly warns it **not to provide direct medical advice to patients**. This is a critical safeguard that should always be used. * **No Patient Relationship:** The model does not and cannot form a doctor-patient relationship. Using this model does not constitute receiving medical care. --- ## How to Use To get the best and safest results from SinaReason, you **must** use the recommended system prompt. This prompt activates the model's chain-of-thought capabilities and enforces its role as a reasoning assistant, not a direct-to-patient advisor. 
First, ensure you have the necessary libraries installed: ```bash pip install transformers torch accelerate ``` --- **Next, use the following Python script for inference:** --- ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer model_id = "yasserrmd/SinaReason-Magistral-2509" device = "cuda" if torch.cuda.is_available() else "cpu" # Load the tokenizer and model tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto" ) # --- CRITICAL: Use this system prompt for safe and effective reasoning --- system_prompt = """ You are SinaReason, a medical reasoning assistant for educational and clinical support. Your goal is to carefully reason through clinical problems for a professional audience (clinicians, students). **Never provide medical advice directly to a patient.** First, draft your detailed thought process (inner monologue) inside <think> ... </think>. - Use this section to work through symptoms, differential diagnoses, and investigation plans. - Be explicit and thorough in your reasoning. After closing </think>, provide a clear, self-contained medical summary appropriate for a clinical professional. - Summarize the most likely diagnosis and your reasoning. - Suggest next steps for investigation or management. """ # Example clinical scenario user_query = "Patient: 72-year-old with a history of hypertension presents with confusion, right-sided weakness, and slurred speech. What is the likely cause and immediate steps?" 
messages = [ {"role": "system", "content": system_prompt}, {"role": "user", "content": user_query} ] # Apply the chat template prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, ) # Tokenize the input (explicitly pass images=None for this multimodal architecture) inputs = tokenizer( text=prompt, images=None, return_tensors="pt" ).to(device) # Generate the response streamer = TextStreamer(tokenizer, skip_prompt=True) _ = model.generate( **inputs, images=None, # Also required here for text-only inference max_new_tokens=1024, temperature=0.7, top_p=0.95, streamer=streamer, pad_token_id=tokenizer.eos_token_id ) ``` --- ## Training and Fine-Tuning * **Base Model:** mistralai/Magistral-Small-2509, a powerful 24B parameter multimodal model. * **Dataset:** The model was fine-tuned on the FreedomIntelligence/medical-o1-reasoning-SFT dataset. This dataset is designed for Supervised Fine-Tuning (SFT) to improve the medical reasoning capabilities of language models. * **Fine-Tuning Framework:** This model was fine-tuned using the **Unsloth** library, an open-source framework designed for highly efficient, memory-optimized fine-tuning of large language models. * **Performance Optimization:** Unsloth enables significantly faster training (up to 2x) and a massive reduction in memory usage (up to 80%) compared to standard methods like QLoRA. This was achieved through its optimized Triton kernels and manual backpropagation implementation. * **Hardware Accessibility:** The use of Unsloth made it possible to fine-tune this 24B parameter model on consumer-grade or single-GPU cloud hardware, making advanced model customization more accessible. --- ## Model Evaluation & Performance To validate its capabilities, `SinaReason-Magistral-2509` was subjected to a qualitative evaluation across a benchmark of **30 diverse medical reasoning prompts**. 
These scenarios were designed to test the model's reasoning in various domains, including: * General Internal Medicine * Emergency & Critical Care * Specialized Fields (Neurology, Pediatrics, Rheumatology, etc.) The model performed exceptionally well, achieving a qualitative grade of **A-**. This reflects its ability to consistently generate medically sound, logically structured responses while acknowledging the inherent limitations of any AI in handling real-world clinical nuance. ### Key Strengths Observed: * **Consistent Chain-of-Thought Reasoning:** The model strictly adhered to the `<think>...</think>` format in 100% of the test cases, providing a transparent and logical pathway to its conclusions. * **High Clinical Accuracy:** For the classic clinical vignettes tested, the model demonstrated a high degree of accuracy in identifying the most likely diagnosis and suggesting appropriate, standard-of-care next steps. * **Effective Prioritization of Emergencies:** The model successfully identified and prioritized life-threatening conditions, such as Subarachnoid Hemorrhage, Aortic Aneurysm, and Epiglottitis, recommending immediate and appropriate actions. ### Full Evaluation Results For full transparency, the complete log of the 30 evaluation prompts and the model's verbatim responses can be reviewed below. <details> <summary><strong>Click to view the full 30-prompt evaluation results</strong></summary> --- #### Scenario 1: Sudden Severe Headache **Prompt:** "Patient: 45-year-old with sudden onset severe headache described as 'the worst ever'. What should be ruled out and how?" **Model Output:** > <think> > Alright, let's think this through. We have a 45-year-old patient who's experiencing a sudden, severe headache that they describe as the worst they've ever had. This really makes me think about a subarachnoid hemorrhage, or SAH for short. It's classic because these headaches often come on suddenly and can be incredibly intense. 
>
> First things first, I need to consider what could cause such a sudden and severe headache. SAH definitely comes to mind, but I should also think about other possibilities like a migraine or a tension headache. But, given the description, these don't quite fit as well because migraines usually come with other symptoms like nausea or light sensitivity, and tension headaches aren't typically that intense.
>
> Now, how do I go about confirming if it's an SAH? The gold standard is a non-contrast CT scan of the brain. It's the most straightforward way to see if there's any bleeding in the subarachnoid space. This test is pretty reliable when done within the first few hours of the headache starting. If the CT scan comes back negative, that doesn't necessarily rule out SAH, but it can help decide what to do next.
>
> If the CT scan doesn't show any signs of bleeding, I'd consider a lumbar puncture, also known as a spinal tap. This can help detect the presence of xanthochromia or red blood cells in the cerebrospinal fluid, which are indicators of SAH.
>
> But wait, I can't just stop there. There are other serious conditions that need to be considered. Meningitis could also cause a severe headache, and it's crucial to rule that out because it's a medical emergency. A lumbar puncture can help here too, looking for increased white blood cells and other markers of inflammation in the CSF.
>
> And let's not forget about giant cell arteritis, especially given the patient's age. This is more of a consideration if there are other symptoms like jaw pain or visual disturbances, but it's always good to keep in the back of my mind.
>
> So, to summarize, I'd start with a non-contrast CT scan to look for bleeding. If that doesn't show anything, I'd move to a lumbar puncture. This way, I can check for both SAH and meningitis. And while I'm doing all this, I should keep an open mind about other conditions like giant cell arteritis, especially if there are other clues in the clinical picture.
>
> Okay, I think this plan makes sense. It covers the most likely causes and uses the right tests to get to the answer as quickly and safely as possible.
> </think>
> Given the presentation of a sudden and severe headache described as the worst ever by a 45-year-old patient, the primary concern is a subarachnoid hemorrhage (SAH). To rule this out, the initial step should be a non-contrast CT scan of the brain. This test is highly effective in detecting SAH when performed within the first few hours of the onset of symptoms.
>
> If the CT scan is negative, a lumbar puncture (spinal tap) is recommended to look for xanthochromia or red blood cells in the cerebrospinal fluid, which can confirm the presence of SAH.
>
> Additionally, it is important to consider other potential causes of severe headaches, such as meningitis, which can also be evaluated through a lumbar puncture by assessing for increased white blood cells and signs of inflammation in the CSF. In this patient's age group, giant cell arteritis should also be considered, particularly if there are associated symptoms like jaw pain or visual disturbances.

---

#### Scenario 2: Confusion & Right-Sided Weakness

**Prompt:** "Patient: 72-year-old with history of hypertension presents with confusion, right-sided weakness, and slurred speech. What is the likely cause and immediate steps?"

**Model Output:**

> <think>
> Alright, let's see what's going on here. We have a 72-year-old patient with a history of hypertension, and now they're showing up with some pretty concerning symptoms: confusion, weakness on the right side, and slurred speech. That's definitely ringing some alarm bells, especially with the hypertension history.
>
> Hmm, the right-sided weakness and slurred speech really point to something happening on the left side of the brain, since the brain's right side controls the left side of the body and vice versa. Given that, we're probably looking at some sort of neurological issue, and the most likely culprit in this age group and with hypertension is a stroke.
>
> Okay, so let's think about the type of stroke. We've got two main types to consider: ischemic and hemorrhagic. Given that hypertension is a known risk factor for both, but especially hemorrhagic strokes, it's a strong contender here.
>
> Now, what should we do next? The first thing that comes to mind is to get a CT scan of the head. This is crucial because it can quickly tell us if there's any bleeding in the brain, which would point to a hemorrhagic stroke. If there's no bleeding, then we're likely dealing with an ischemic stroke, which could potentially be treated with a clot-busting drug like tPA.
>
> But wait, there's more. Before we jump into any treatment, we need to assess the patient's stability. Are they stable enough for a CT scan? Are they at risk of aspirating due to the slurred speech? And we can't forget to consider if they have any contraindications for tPA, like a recent head injury or bleeding disorders.
>
> So, in summary, we're probably dealing with a stroke, and the first step is to do a CT scan to figure out if it's hemorrhagic or ischemic. This will guide us on whether we need to administer tPA or manage the bleeding if it's hemorrhagic.
> </think>
> Based on the symptoms and the patient's history, the likely cause of the patient's confusion, right-sided weakness, and slurred speech is a stroke, with a hemorrhagic stroke being a strong possibility given the history of hypertension. The immediate step to take is to perform a CT scan of the head. This will help differentiate between a hemorrhagic and ischemic stroke, which is crucial for determining the appropriate treatment.
---

#### Scenario 3: Diabetic Neuropathy

**Prompt:** "Patient: 60-year-old with long-standing diabetes has numbness and tingling in both feet. What is the most likely diagnosis and first-line management?"

**Model Output:**

> <think>
> Okay, let's think about this. We have a 60-year-old with long-standing diabetes who's experiencing numbness and tingling in both feet. Right off the bat, diabetes is a big clue here. It's well-known that diabetes can lead to complications like neuropathy, especially if it's been around for a long time.
>
> Now, let's consider what neuropathy means in this context. Diabetic peripheral neuropathy is pretty common, and it's characterized by those exact symptoms: numbness and tingling. This makes me think it's a strong contender for the diagnosis here.
>
> Also, I should consider the other possible causes of neuropathy, like B12 deficiency or other neurological conditions. But with the history of diabetes, diabetic neuropathy seems like the most straightforward explanation.
>
> Alright, moving on to what we can do about it. First off, controlling the blood sugar levels is crucial. If we can manage the diabetes better, we might be able to slow down or even prevent further nerve damage. This could involve medication adjustments or lifestyle changes.
> </think>
> The most likely diagnosis for a 60-year-old with long-standing diabetes experiencing numbness and tingling in both feet is diabetic peripheral neuropathy. This condition is common in individuals with diabetes and is characterized by damage to the peripheral nerves due to prolonged high blood sugar levels. For the first-line management of diabetic peripheral neuropathy, the primary goal is to control blood sugar levels effectively to prevent further nerve damage.

---

#### Scenario 4: Sore Throat & Drooling

**Prompt:** "Patient: 25-year-old presents with high fever, sore throat, swollen neck, and drooling. What life-threatening condition must be excluded?"

**Model Output:**

> <think>
> Alright, let's see what's going on with this patient. We've got a 25-year-old with a high fever, sore throat, swollen neck, and drooling. These symptoms sound pretty alarming. Let's break it down. A high fever and sore throat could point to an infection. Swollen neck suggests something going on in the neck area, maybe lymph nodes or something deeper like an abscess. Drooling? That's a big red flag. It makes me think there might be an issue with swallowing or even breathing.
>
> But wait, there's a more serious condition that could fit this picture: epiglottitis. Oh, right! Epiglottitis is an infection of the epiglottis, which is the flap at the back of the throat that covers the trachea. It can cause the throat to swell, leading to difficulty swallowing, drooling, and even difficulty breathing. This is a medical emergency because if the airway gets obstructed, it could be life-threatening.
> </think>
> The symptoms presented by the 25-year-old patient—high fever, sore throat, swollen neck, and drooling—are concerning and must be evaluated urgently. The combination of these symptoms, especially the drooling, raises the possibility of a serious condition like epiglottitis. Epiglottitis is an infection of the epiglottis, which can lead to swelling and obstruction of the airway, posing a significant risk for respiratory distress and potential life-threatening complications if not promptly managed. Immediate medical attention is warranted to ensure the patient's safety and to initiate appropriate treatment.

---

#### Scenario 5: Polyuria & Polydipsia

**Prompt:** "Patient: 30-year-old with polyuria, polydipsia, and weight loss. What investigation confirms the diagnosis?"

**Model Output:**

> <think>
> Alright, let's see what's going on here. We've got a 30-year-old patient who's experiencing polyuria and polydipsia, which means they're peeing a lot and drinking a lot of water. Plus, there's weight loss happening too. These symptoms are making me think about diabetes mellitus, particularly Type 1 Diabetes, since the sudden onset and weight loss are classic signs.
>
> Okay, let's think about how we can confirm Type 1 Diabetes. The most direct way is by checking for hyperglycemia, which means high blood sugar levels. A fasting blood glucose test would do the trick here. If it's elevated, that's a strong indicator.
> </think>
> The symptoms you've described—polyuria, polydipsia, and weight loss—are suggestive of Type 1 Diabetes Mellitus. To confirm this diagnosis, the most reliable investigation is a blood test for elevated fasting blood glucose levels. In Type 1 Diabetes, blood glucose levels are typically elevated due to insufficient insulin production by the pancreas. A fasting blood glucose level of 126 mg/dL or higher on two separate occasions confirms the diagnosis.

---

#### Scenario 6: Post-Vomiting Chest Pain

**Prompt:** "Patient: 40-year-old with chest pain after vomiting, now with subcutaneous emphysema. What condition must be suspected?"

**Model Output:**

> <think>
> Okay, so we have a 40-year-old patient who's experiencing chest pain, and it seems to have started after they were vomiting. That's interesting because vomiting can sometimes lead to increased pressure in the chest. Now, they also have subcutaneous emphysema, which means air is trapped under the skin.
>
> Hmm, what could cause air to escape like that? Well, one possibility is something called a Boerhaave's syndrome. This is a tear in the esophagus that can happen when there's a lot of pressure, like from vomiting. If the esophagus ruptures, air can escape into the chest cavity and then spread under the skin, causing that subcutaneous emphysema. The chest pain right after vomiting and the subcutaneous emphysema together are a classic sign of Boerhaave's syndrome.
> </think>
> Based on the symptoms described—a 40-year-old patient with chest pain following an episode of vomiting and the presence of subcutaneous emphysema—the most likely condition to suspect is Boerhaave's syndrome. This condition involves a tear in the esophagus, often resulting from a sudden increase in intrathoracic pressure, such as during vomiting. The rupture of the esophagus allows air to escape into the mediastinum and potentially under the skin, leading to the characteristic subcutaneous emphysema.

---

#### Scenario 7: Shortness of Breath & Swelling

**Prompt:** "Patient: 55-year-old with progressive shortness of breath, orthopnea, and ankle swelling. What condition and investigation are likely?"

**Model Output:**

> <think>
> Okay, let's think this through. We've got a 55-year-old patient who's experiencing progressive shortness of breath. This alone is pretty concerning, but when you add in the orthopnea, which is breathing difficulty when lying down, and ankle swelling, it's starting to paint a clearer picture.
>
> Given the age of the patient, I'm considering conditions that are more prevalent in older adults. Heart failure is a big one here. It can cause all these symptoms because the heart isn't pumping blood effectively, leading to fluid buildup in the lungs and body.
>
> So, putting it all together, heart failure seems like the most plausible diagnosis. To confirm this, a chest X-ray and echocardiogram should help us see if there's fluid in the lungs or if the heart is enlarged or not functioning properly.
> </think>
> The symptoms you've described—progressive shortness of breath, orthopnea, and ankle swelling—are highly suggestive of heart failure, particularly in a 55-year-old patient. Heart failure occurs when the heart is unable to pump blood effectively, leading to fluid retention in the body. To investigate this condition, the following tests are typically recommended:
>
> 1. **Chest X-ray**: This can reveal any signs of fluid in the lungs (pulmonary edema) or an enlarged heart (cardiomegaly).
> 2. **Echocardiogram**: This ultrasound of the heart helps assess heart function and identify any structural abnormalities.

---

#### Scenario 8: Chronic Smoker's Cough

**Prompt:** "Patient: 70-year-old chronic smoker presents with chronic cough, sputum production, and progressive breathlessness. What is the diagnosis and staging approach?"

**Model Output:**

> <think>
> Okay, let's think this through. We have a 70-year-old patient who is a chronic smoker. That's a big clue right there because smoking is a major risk factor for lung disease. Now, they've got a chronic cough and are producing sputum. Hmm, that sounds a lot like chronic bronchitis. But there's also progressive breathlessness. Could it be something like emphysema? Or maybe even COPD, which is a combo of both bronchitis and emphysema?
>
> What about the diagnosis, let's start with some basic tests. A chest X-ray could help us see if there are any masses that might suggest cancer. A spirometry test would be great to check lung function and confirm COPD if present.
> </think>
> Given the patient's history of chronic smoking and symptoms of chronic cough, sputum production, and progressive breathlessness, the diagnosis points towards chronic obstructive pulmonary disease (COPD), which encompasses chronic bronchitis and possibly emphysema. To approach the diagnosis and staging:
>
> 1. **Initial Evaluation**: Start with a chest X-ray.
> 2. **Lung Function Testing**: Perform spirometry to assess lung function and confirm the presence of COPD.
> 3. **Further Imaging**: If the chest X-ray shows any suspicious lesions, a CT scan of the chest is warranted.

---

#### Scenario 9: Pleuritic Chest Pain

**Prompt:** "Patient: 18-year-old with sudden-onset pleuritic chest pain and dyspnea. What condition is suspected and how do you confirm?"
**Model Output:**

> <think>
> Alright, let's see what's going on with this 18-year-old. They're experiencing sudden-onset pleuritic chest pain and dyspnea. That's pretty concerning. Pleuritic pain usually means there's something irritating the pleura, which could point to a few different things. Could it be pneumonia? Or maybe it's a pulmonary embolism? And let's not forget about pleural effusion or even pneumothorax.
>
> So, what's the best next step? Let's go with the chest X-ray first because it's non-invasive and can quickly tell us if there's a pneumothorax or pneumonia. If the X-ray comes back normal, we might need to consider other tests like a CTPA or D-dimer to investigate further for a pulmonary embolism.
> </think>
> In this case, the sudden-onset pleuritic chest pain and dyspnea in an 18-year-old suggest a few potential conditions. The symptoms could be indicative of pneumonia, pulmonary embolism, pneumothorax, or pleural effusion. To narrow down the diagnosis, a chest X-ray is a reasonable first step. This test can quickly help identify pneumonia, pneumothorax, or pleural effusion. If the chest X-ray is normal, further investigation might be needed, such as a D-dimer test or a CT pulmonary angiogram (CTPA), to evaluate for a pulmonary embolism.

---

#### Scenario 10: Malar Rash & Joint Pain

**Prompt:** "Patient: 34-year-old woman with malar rash, photosensitivity, and joint pain. What investigations are appropriate?"

**Model Output:**

> <think>
> Okay, let's think this through. We have a 34-year-old woman who's showing up with a malar rash. That's a pretty classic sign, right? It's like the butterfly rash across her cheeks. And she's also got photosensitivity, meaning her skin reacts to sunlight. Plus, she's dealing with joint pain. These symptoms together are ringing a bell for systemic lupus erythematosus, or SLE.
>
> Now, to confirm if this is really SLE, we need some investigations. The anti-nuclear antibody test, or ANA, usually comes up first. But we also need more specific markers like Anti-double-stranded DNA antibodies (anti-dsDNA) and anti-Smith antibody (anti-Sm).
> </think>
> Based on the symptoms presented by the 34-year-old woman, including a malar rash, photosensitivity, and joint pain, the most likely diagnosis is systemic lupus erythematosus (SLE). To confirm this diagnosis and evaluate the extent of the disease, several investigations are appropriate:
>
> 1. **Serological Tests:** ANA, anti-dsDNA, and anti-Sm.
> 2. **Inflammatory Markers:** ESR and CRP.
> 3. **Kidney Function Tests.**
> 4. **Liver Function Tests.**
> 5. **Complete Blood Count (CBC).**

---

#### Scenario 11: Anaphylaxis

**Prompt:** "Patient: 29-year-old with anaphylaxis after bee sting. What immediate management steps are required?"

**Model Output:**

> <think>
> Okay, so here's a 29-year-old patient who's just experienced anaphylaxis after a bee sting. This is serious, right? Anaphylaxis is like the body's extreme reaction, and we need to act fast to stabilize them.
>
> First thing that comes to mind is, what can we do to quickly calm down this severe allergic reaction? We've got epinephrine, which is kind of our go-to drug for anaphylaxis. It's usually administered intramuscularly, and the thigh is the best spot. Once the epinephrine is given, we also need to make sure the patient is breathing okay. Sometimes, they might need oxygen. Positioning matters too! We want the patient in a position where they can breathe and their blood pressure can be maintained. Lying them flat is a good idea.
> </think>
> In the case of a 29-year-old patient experiencing anaphylaxis after a bee sting, the immediate management steps are critical:
>
> 1. **Administer Epinephrine**: This is the first and most crucial step. Use an intramuscular auto-injector (e.g., EpiPen) in the mid-outer thigh.
> 2. **Ensure Airway Patency**: Check the patient's airway and provide oxygen if there is any difficulty in breathing.
> 3. **Position the Patient**: Place the patient in a supine position to help maintain blood pressure.
> 4. **Monitor Vital Signs**.
> 5. **Prepare for Biphasic Anaphylaxis**.

---

#### Scenario 12: Abdominal Pain Out of Proportion

**Prompt:** "Patient: 40-year-old with severe abdominal pain out of proportion to examination findings. What is suspected?"

**Model Output:**

> <think>
> Okay, let's think about this. We've got a 40-year-old patient who's complaining of really severe abdominal pain. But here's the thing—the exam doesn't show anything too alarming. That's kind of a red flag for something serious.
>
> Hmm, what could it be? I know that sometimes, severe pain without obvious exam findings can point to something like an acute abdomen. But what exactly? Mesenteric ischemia comes to mind. It's not always obvious on a physical exam because the blood flow issues can be subtle. Plus, it can really hurt, even if you don't see anything obvious.
> </think>
> In the case of a 40-year-old patient experiencing severe abdominal pain that is out of proportion to the examination findings, mesenteric ischemia is a strong possibility to consider. This condition occurs when the blood flow to the intestines is compromised and can lead to significant pain despite subtle or minimal findings on a physical examination. To further evaluate this possibility, it is crucial to perform additional investigations. A CT angiography (CTA) of the abdomen is particularly useful.

---

#### Scenario 13: Crushing Chest Pain & Hypotension

**Prompt:** "Patient: 60-year-old with crushing chest pain, hypotension, and muffled heart sounds. What is the likely diagnosis?"

**Model Output:**

> <think>
> Okay, let's think about what's going on here. We have a 60-year-old patient who's experiencing crushing chest pain. That's a big red flag, right? It's usually a sign of something serious with the heart. Now, the hypotension, or low blood pressure, is concerning because it suggests the heart isn't pumping effectively.
>
> Now, what's really catching my attention is the muffled heart sounds. This is a classic clue. It makes me think of something that's affecting the heart's ability to produce sound, like if there's fluid or something else around it. Oh, right, this is very characteristic of cardiac tamponade. The crushing chest pain, low blood pressure, and muffled heart sounds all point to cardiac tamponade.
> </think>
> The most likely diagnosis for the 60-year-old patient with crushing chest pain, hypotension, and muffled heart sounds is cardiac tamponade. This condition occurs when there is an accumulation of fluid around the heart, leading to compression and reduced efficiency of cardiac function. The muffled heart sounds, in particular, are a hallmark of this condition.

---

#### Scenario 14: Massive Hematemesis

**Prompt:** "Patient: 50-year-old with massive hematemesis and shock. What are the emergency steps?"

**Model Output:**

> <think>
> Alright, we've got a 50-year-old patient who's had massive hematemesis and is in shock. This is pretty serious. Let's start by thinking about what could cause this. Massive bleeding from the upper GI tract could mean a ruptured esophageal varices, a bleeding ulcer, or maybe Mallory-Weiss syndrome.
>
> First things first, we need to stabilize the patient. They're in shock, so that means we have to manage their blood pressure and heart rate. We'll need to give them some fluids, maybe even start a blood transfusion if they're losing a lot of blood. We should definitely get an EGD (esophagogastroduodenoscopy) scheduled as soon as possible.
> </think>
> In the case of a 50-year-old patient presenting with massive hematemesis and shock, the emergency steps are critical:
>
> 1. **Stabilize the Patient:** Airway Management and Circulatory Support.
> 2. **Identify the Source of Bleeding:** Arrange for an urgent Esophagogastroduodenoscopy (EGD).
> 3. **Manage the Bleeding:** Pharmacological intervention (e.g., octreotide).
> 4. **Post-Procedure Care.**

---

#### Scenario 15: Fever & Purpuric Rash

**Prompt:** "Patient: 23-year-old with fever, headache, and purpuric rash. What should be suspected?"

**Model Output:**

> <think>
> Alright, let's think about what's going on with this patient. We've got a 23-year-old who's feeling pretty rough with a fever and a headache. But the thing that really catches my eye is this purpuric rash. That's usually a sign of something more serious happening, like blood vessels being affected somehow.
>
> Now, with these symptoms, I'm starting to wonder about meningococcal disease. It's pretty notorious for causing a fever, headache, and this specific type of rash. The combination of fever, headache, and purpuric rash is a classic triad for this condition.
> </think>
> Based on the symptoms presented—fever, headache, and a purpuric rash—meningococcal disease is a strong possibility. The combination of these symptoms is often indicative of this condition, especially the presence of the purpuric rash. Immediate treatment with antibiotics, such as ceftriaxone or cefotaxime, is crucial if meningococcal disease is suspected, even before laboratory confirmation.

---

#### Scenario 16: Blunt Abdominal Trauma

**Prompt:** "Patient: 36-year-old with blunt trauma to abdomen, unstable vitals. What is the best initial investigation?"

**Model Output:**

> <think>
> Alright, let's think about this. We have a 36-year-old patient who's had some pretty serious blunt trauma to the abdomen, and now they're unstable. When someone comes in with unstable vitals after something like this, we need to act fast. The main concern here is internal bleeding.
>
> First, I'm thinking about the best way to check for any internal bleeding. We need something that can give us a quick look. What about an FAST exam?
> That's a focused assessment with sonography for trauma. It's like the rapid response team for ultrasound in trauma cases. FAST can quickly tell us if there's free fluid in the abdomen, which usually means blood. It's noninvasive and can be done right at the bedside.
> </think>
> In the scenario you've described, where a 36-year-old patient has suffered blunt trauma to the abdomen and presents with unstable vitals, the best initial investigation is a FAST (Focused Assessment with Sonography for Trauma) exam. This is a rapid, noninvasive ultrasound technique used to detect free fluid in the abdomen, which typically indicates internal bleeding.

---

#### Scenario 17: Post-Op Dyspnea & Hemoptysis

**Prompt:** "Patient: 70-year-old postoperative patient with sudden dyspnea, chest pain, and hemoptysis. What is suspected?"

**Model Output:**

> <think>
> Alright, let's think this through. We've got a 70-year-old patient who just had surgery, and now they're suddenly short of breath and in pain. Oh, and they're coughing up blood, too.
>
> First things first, what could cause these symptoms after surgery? Well, pulmonary embolism jumps to mind. Surgery is a big risk factor for blood clots, especially in the legs, which can travel to the lungs. The sudden onset of dyspnea, chest pain, and hemoptysis is like a textbook case for a PE.
> </think>
> In this scenario, the most likely explanation for the 70-year-old postoperative patient's sudden dyspnea, chest pain, and hemoptysis is a pulmonary embolism (PE). The combination of these symptoms, especially following surgery, raises significant suspicion for a PE. Immediate evaluation and treatment are necessary to manage this potentially life-threatening condition.

---

#### Scenario 18: Barking Cough & Stridor

**Prompt:** "Patient: 4-year-old with barking cough, stridor, and hoarseness. What is the diagnosis and management?"

**Model Output:**

> <think>
> Okay, let's think about this. We've got a 4-year-old child with a barking cough, stridor, and hoarseness. These symptoms are pretty classic for something, but what exactly?
>
> A barking cough makes me think of croup. It's that high-pitched, seal-like cough that's often heard in young kids. And stridor, that's the noise you hear when breathing, especially when inhaling, because of some narrowing in the airways. This all fits together with croup.
>
> Now, how do we manage this? For mild cases, we usually start with supportive care. That means keeping the child hydrated, maybe using a cool mist humidifier. But if the symptoms are severe, that's when nebulized racemic epinephrine comes into play.
> </think>
> Based on the symptoms described—a barking cough, stridor, and hoarseness—the most likely diagnosis for this 4-year-old child is croup. For management, mild cases can often be managed with supportive care. In more severe cases, nebulized racemic epinephrine can be used to reduce airway swelling. Oral dexamethasone may also be considered to reduce inflammation.

---

#### Scenario 19: Back Pain & Pulsatile Mass

**Prompt:** "Patient: 65-year-old with severe back pain, hypotension, and pulsatile abdominal mass. What condition must be suspected?"

**Model Output:**

> <think>
> Alright, let's take a look at this 65-year-old patient. They're experiencing severe back pain. Now, the patient is also showing hypotension, which is low blood pressure. This is a big red flag.
>
> Okay, and there's this pulsatile abdominal mass. That's a really key detail. A pulsatile mass in the abdomen often points to something that's expanding and pushing against the abdominal wall, like an aneurysm. In this context, I'm thinking about an abdominal aortic aneurysm specifically because it fits with the back pain and hypotension.
> </think>
> Based on the symptoms presented—a 65-year-old patient experiencing severe back pain, hypotension, and a pulsatile abdominal mass—the most likely condition to suspect is an abdominal aortic aneurysm (AAA). These symptoms are characteristic of an AAA, which could be pressing on the aorta, causing back pain, and if it's leaking or rupturing, it could lead to hypotension.

---

#### Scenario 20: Fever & Neck Stiffness

**Prompt:** "Patient: 22-year-old with high fever, neck stiffness, and photophobia. What is the likely diagnosis and initial management?"

**Model Output:**

> <think>
> Alright, let's think this through. We've got a 22-year-old patient with a high fever, neck stiffness, and photophobia. These symptoms are ringing a pretty loud alarm bell for meningitis. Now, we need to consider what's causing this inflammation. It could be bacterial, viral, or even fungal, but given the age and severity, bacterial meningitis is a big concern here.
>
> Now, what should we do? With bacterial meningitis, time is of the essence. We need to start empirical antibiotic treatment right away. Typically, for bacterial meningitis, we'd start with something like ceftriaxone or cefotaxime. We also need to get a lumbar puncture.
> </think>
> The symptoms of high fever, neck stiffness, and photophobia in a 22-year-old patient strongly suggest meningitis, with bacterial meningitis being a significant concern. The initial management should be prompt and comprehensive:
>
> 1. **Empirical Antibiotic Therapy**: Start broad-spectrum antibiotics immediately.
> 2. **Lumbar Puncture**: Perform a lumbar puncture to obtain cerebrospinal fluid (CSF) for analysis.
> 3. **Supportive Care**.

---

#### Scenario 21: Alcohol Use & Confusion

**Prompt:** "Patient: 50-year-old with chronic alcohol use presents with confusion, ataxia, and ophthalmoplegia. What deficiency is likely?"

**Model Output:**

> <think>
> Alright, let's see what's going on here. We've got a 50-year-old who's been drinking a lot for a long time and now they're showing up with some concerning symptoms. They're confused, can't walk straight, and have issues with their eye movements.
>
> When I hear 'confusion' and 'ataxia' together, I start thinking about Wernicke's encephalopathy. It's like a classic set of symptoms—confusion, problems with movement, and eye movement issues.
>
> Now, what could cause this in someone who's been drinking heavily? Well, chronic alcohol use often messes with the body's ability to absorb and use vitamins properly. One vitamin in particular, thiamine, is crucial for brain function and is frequently deficient in alcoholics.
> </think>
> The symptoms you're describing—confusion, ataxia, and ophthalmoplegia—in a 50-year-old patient with a history of chronic alcohol use strongly suggest Wernicke's encephalopathy. This condition is typically caused by a deficiency of thiamine (vitamin B1). Thiamine is crucial for the proper functioning of the nervous system, and its deficiency is common in individuals who consume excessive amounts of alcohol.

---

#### Scenario 22: Pediatric Bone Pain & Dactylitis

**Prompt:** "Patient: 5-year-old with recurrent bone pain, dactylitis, and anemia. What is the likely diagnosis?"

**Model Output:**

> <think>
> Alright, so we have a 5-year-old who keeps experiencing bone pain, has this thing called dactylitis, and is showing signs of anemia.
>
> Now, dactylitis. That's when fingers or toes swell up like a sausage, right? It's pretty specific to certain conditions, especially sickle cell disease. In kids, if they have sickle cell, they often have this dactylitis because of the sickled red blood cells getting stuck and causing inflammation.
>
> And then there's the anemia. Anemia in kids with sickle cell disease is common. Putting these symptoms together: bone pain, dactylitis, and anemia, it's really pointing towards sickle cell disease.
> </think>
> Based on the symptoms described—a 5-year-old with recurrent bone pain, dactylitis, and anemia—the most likely diagnosis is sickle cell disease. The bone pain, often referred to as "sickle cell crisis," occurs due to vaso-occlusive events. Dactylitis, or "hand-foot syndrome," is a hallmark symptom in young children. Anemia is common because the sickled cells have a reduced lifespan.

---

#### Scenario 23: Exophthalmos & Weight Loss

**Prompt:** "Patient: 40-year-old woman with exophthalmos, tremors, and weight loss. What is the most likely condition?"

**Model Output:**

> <think>
> Alright, let's think this through. We've got a 40-year-old woman here, and she's showing some interesting symptoms: exophthalmos, tremors, and weight loss. Exophthalmos is when the eyes bulge out, and tremors mean her hands or other body parts are shaking involuntarily. Plus, she's losing weight.
>
> Exophthalmos usually makes me think of thyroid issues, particularly hyperthyroidism. This is because in Grave's disease, which is a common type of hyperthyroidism, the eyes often bulge. And then there's the weight loss. That's a classic sign of hyperthyroidism too.
> </think>
> Based on the symptoms you've described—exophthalmos, tremors, and weight loss—the most likely condition affecting this 40-year-old woman is hyperthyroidism. Exophthalmos is often associated with Grave's disease, a form of hyperthyroidism. Tremors and weight loss are also common due to the increased metabolic rate.

---

#### Scenario 24: Progressive Memory Loss

**Prompt:** "Patient: 68-year-old with progressive memory loss, difficulty with daily tasks. What is the probable diagnosis?"

**Model Output:**

> <think>
> Let's think about this. We have a 68-year-old patient who's experiencing progressive memory loss and having trouble with everyday tasks.
>
> First off, given the age and the symptoms, Alzheimer's disease immediately comes to mind.
It's pretty common in older adults and is known for causing memory issues and difficulties with daily activities. Alzheimer's typically has a gradual decline, which fits what we're seeing here. > </think> > Based on the symptoms of progressive memory loss and difficulty with daily tasks in a 68-year-old patient, the most probable diagnosis is Alzheimer's disease. Alzheimer's is characterized by gradual cognitive decline, including memory impairment and challenges with routine activities, which aligns with the patient's presentation. --- #### Scenario 25: Hematuria & Flank Pain **Prompt:** "Patient: 55-year-old with hematuria, flank pain, and abdominal mass. What cancer must be considered?" **Model Output:** > <think> > Alright, let's think about this 55-year-old patient. They're experiencing hematuria, which is blood in the urine. Oh, and there's also flank pain. Now, there's an abdominal mass too. > > When I put all these symptoms together—hematuria, flank pain, and an abdominal mass—I start thinking about renal cell carcinoma, or kidney cancer. It's a pretty common cancer that fits these symptoms well. Renal cell carcinoma often presents with hematuria because it can bleed into the urinary tract. And it's also known to cause flank pain if the tumor is pressing on nearby structures. > </think> > In this scenario, renal cell carcinoma (RCC), a type of kidney cancer, is a significant consideration given the symptoms of hematuria, flank pain, and an abdominal mass. Renal cell carcinoma commonly presents with these symptoms because the tumor can bleed into the urinary tract, causing hematuria, and it can also cause flank pain if it compresses nearby structures. --- #### Scenario 26: HIV & Cavitary Lung Lesion **Prompt:** "Patient: 45-year-old HIV-positive man with cough, fever, and cavitary lung lesion. What are the possibilities?" 
**Model Output:** > <think> > Okay, so we've got a 45-year-old man who's HIV-positive, and he's dealing with a cough, fever, and a cavitary lung lesion. Since he's HIV-positive, he's more prone to infections that typically wouldn't affect someone with a fully functioning immune system. That makes opportunistic infections a big consideration here. > > Cavitary lesions are often linked to infections like tuberculosis or some types of fungal infections. Let's weigh the possibilities. Tuberculosis is a classic suspect, especially given the cavitary lesion. It's known for causing cough, fever, and these kinds of lung lesions, and it's definitely more common in people who are HIV-positive. > </think> > In this scenario, the most likely diagnosis is pulmonary tuberculosis (TB), given the combination of cough, fever, and a cavitary lung lesion in an HIV-positive patient. TB is a common opportunistic infection in individuals with compromised immune systems, and the cavitary lesion is a classic finding associated with this condition. Other possibilities include fungal infections (histoplasmosis, aspergillosis) or malignancy, but TB is the leading differential. --- #### Scenario 27: Pregnant Patient with Seizures **Prompt:** "Patient: 30-year-old pregnant woman with high BP, proteinuria, and seizures. What is the diagnosis?" **Model Output:** > <think> > Okay, let's think this through. We have a 30-year-old pregnant woman showing up with high blood pressure, proteinuria, and seizures. These symptoms are pretty alarming. > > Oh, and the seizures are a game-changer here. They really narrow down the possibilities. This isn't just about high blood pressure anymore; it's something more serious. Seizures during pregnancy immediately make me think of eclampsia. Eclampsia is essentially severe preeclampsia that's progressed to the point where seizures happen. High blood pressure? Check. Protein in the urine? Check. Seizures? Definitely. 
These three together are the classic triad for eclampsia. > </think> > The symptoms presented—high blood pressure, proteinuria, and seizures—are characteristic of eclampsia, a severe form of preeclampsia that occurs during pregnancy. Eclampsia is distinguished by the presence of seizures, which are not typically seen in uncomplicated preeclampsia. The combination of these symptoms in a pregnant woman strongly suggests eclampsia, which is a medical emergency requiring immediate attention. --- #### Scenario 28: Infant with Projectile Vomiting **Prompt:** "Patient: 2-month-old infant with projectile non-bilious vomiting, visible peristalsis. What is suspected?" **Model Output:** > <think> > Alright, let's see what's going on here. We've got a 2-month-old baby who's been vomiting pretty hard, and it's not the usual kind of vomiting because it's not bile-colored. That's interesting because it points away from anything happening in the small intestines. Now, what's really catching my eye is the peristalsis. That's the wave-like motion of the stomach you can actually see, right? > > Okay, let's think about what could cause this kind of vomiting in an infant. The visible peristalsis? That's like a signature symptom of pyloric stenosis. It's kind of like the stomach is trying to squeeze through a very tight opening. It's pretty specific and a big clue. > </think> > Based on the symptoms described, the most likely diagnosis for this 2-month-old infant is pyloric stenosis. Pyloric stenosis is characterized by projectile, non-bilious vomiting, which occurs due to the narrowing of the pyloric sphincter. The visible peristalsis is a classic sign of this condition, as the stomach contracts vigorously to push food through the narrowed opening. --- #### Scenario 29: Morning Stiffness & Joint Swelling **Prompt:** "Patient: 33-year-old with morning stiffness >1 hour, symmetrical joint swelling. What is the diagnosis and first-line test?" 
**Model Output:** > <think> > Alright, let's think this through. We have a 33-year-old patient who's experiencing morning stiffness that lasts more than an hour and also has symmetrical joint swelling. That's a pretty classic set of symptoms. When I hear morning stiffness that's prolonged and the joints are swollen symmetrically, rheumatoid arthritis (RA) immediately comes to mind. > > When it comes to testing, the first thing we'd want to do is confirm this suspicion. The rheumatoid factor (RF) test is pretty common, but honestly, it's not always reliable. The anti-cyclic citrullinated peptide (anti-CCP) test is much more specific and a better marker for RA. It's the go-to test when you suspect RA. > </think> > The diagnosis most consistent with the symptoms described—morning stiffness lasting more than an hour and symmetrical joint swelling—is rheumatoid arthritis (RA). RA is known for affecting the small joints symmetrically, and the prolonged morning stiffness is a hallmark feature. The first-line test for confirming rheumatoid arthritis is the anti-cyclic citrullinated peptide (anti-CCP) antibody test. This test is highly specific for RA. --- #### Scenario 30: Bloody Diarrhea & Weight Loss **Prompt:** "Patient: 22-year-old with abdominal pain, bloody diarrhea, and weight loss. What are the differential diagnoses?" **Model Output:** > <think> > Okay, let's figure out what's going on with this 22-year-old who's having abdominal pain, bloody diarrhea, and weight loss. > > First off, the bloody diarrhea is a big clue. It suggests some kind of inflammation or damage in the intestines. I'm thinking inflammatory bowel disease (IBD) could be a possibility here. Both ulcerative colitis and Crohn's disease can cause these symptoms. > > Now, the weight loss is another piece of the puzzle. It often points to a chronic issue that's affecting nutrient absorption. In IBD, this is common. Could this be an infection? 
Something like an infectious gastroenteritis could cause bloody diarrhea too, but it usually resolves faster and doesn't typically cause weight loss unless it's severe or chronic. > </think> > Based on the symptoms you've described—a 22-year-old experiencing abdominal pain, bloody diarrhea, and weight loss—the most likely differential diagnoses include: 1. **Inflammatory Bowel Disease (IBD):** Ulcerative Colitis or Crohn's Disease. 2. **Infectious Causes:** Such as Campylobacter, Salmonella, or E. coli, though less likely given the chronicity implied by weight loss. 3. **Gastrointestinal Malignancy:** Less common at this age, but conditions like colon cancer should be considered. </details> --- ### **Special Thanks** Special thanks to the developers of **Unsloth** for creating an open-source library that significantly lowers the barrier to fine-tuning large-scale models, enabling projects like this one to be developed with greater efficiency and on more accessible hardware. ## Citation If you use this model in your research, please cite the original base model and the dataset used for fine-tuning. **Base Model:** ```bibtex @misc{jiang2024magistral, title={Magistral: A Foundational Multimodal Model for Medical Reasoning}, author={Zhihong Jiang and Robert Tinn and David F. Fouhey and Michael W. Sjoding and C. -C. Jay Kuo and Jenna Wiens}, year={2024}, eprint={2409.11303}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
Promo11/Qwen3-0.6B-Gensyn-Swarm-grunting_twitchy_tapir
Promo11
2025-09-22T14:55:28Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am grunting_twitchy_tapir", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T14:55:10Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am grunting_twitchy_tapir --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mgoNeo4j/llama_3_1_8b_finetuned
mgoNeo4j
2025-09-22T14:54:22Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-22T14:54:05Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** mgoNeo4j - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
TanyTank/blockassist
TanyTank
2025-09-22T14:53:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick agile badger", "arxiv:2504.07091", "region:us" ]
null
2025-09-21T17:48:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quick agile badger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
MattBou00/llama-3-2-1b-detox_v1f_SCALE9_round3-checkpoint-epoch-60
MattBou00
2025-09-22T14:52:25Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-09-22T14:50:41Z
--- license: apache-2.0 library_name: transformers tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="MattBou00/llama-3-2-1b-detox_v1f_SCALE9_round3-checkpoint-epoch-60") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_SCALE9_round3-checkpoint-epoch-60") model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/llama-3-2-1b-detox_v1f_SCALE9_round3-checkpoint-epoch-60") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
tomerz14/distilhubert-finetuned-gtzan-finetuned-gtzan
tomerz14
2025-09-22T14:51:29Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:sanchit-gandhi/distilhubert-finetuned-gtzan", "base_model:finetune:sanchit-gandhi/distilhubert-finetuned-gtzan", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2025-09-22T14:35:23Z
--- library_name: transformers license: apache-2.0 base_model: sanchit-gandhi/distilhubert-finetuned-gtzan tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.93 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan-finetuned-gtzan This model is a fine-tuned version of [sanchit-gandhi/distilhubert-finetuned-gtzan](https://huggingface.co/sanchit-gandhi/distilhubert-finetuned-gtzan) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.3078 - Accuracy: 0.93 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2742 | 1.0 | 29 | 0.3187 | 0.94 | | 0.376 | 2.0 | 58 | 0.3138 | 0.94 | | 0.2557 | 3.0 | 87 | 0.3081 | 0.93 | | 0.2774 | 4.0 | 116 | 0.3078 | 0.93 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu128 - Datasets 3.6.0 - Tokenizers 0.22.0
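The hyperparameters in the card above combine in two slightly non-obvious ways: the effective batch size is the per-device batch size times the gradient-accumulation steps, and the cosine scheduler's warmup length is a ratio of the total optimizer steps. A back-of-the-envelope sketch in plain Python (not the `Trainer` itself; the 29 steps per epoch figure is read off the training-results table, and the exact warmup rounding is an implementation detail of the library):

```python
import math

# Values copied from the hyperparameter list and results table above.
train_batch_size = 8              # per-device batch size
gradient_accumulation_steps = 4
num_epochs = 15                   # as configured (the results table logs 4 of them)
steps_per_epoch = 29              # from the training-results table
warmup_ratio = 0.05

# Effective batch size seen by each optimizer step.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)     # 32, matching "total_train_batch_size: 32"

# Warmup length of the cosine scheduler: a fixed ratio of all optimizer
# steps, rounded up here (recent transformers versions round up too).
total_steps = steps_per_epoch * num_epochs
warmup_steps = math.ceil(total_steps * warmup_ratio)
print(total_steps, warmup_steps)  # 435 22
```

Note that with `warmup_ratio`, the warmup length scales automatically if the dataset size or epoch count changes, which is why it is often preferred over a fixed `warmup_steps` value.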
senga-ml/dnote-header
senga-ml
2025-09-22T14:49:37Z
239
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-to-text
2025-06-04T08:55:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Tarun-ak/unluIMs_oss20b
Tarun-ak
2025-09-22T14:47:14Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "endpoints_compatible", "region:us" ]
null
2025-09-21T21:05:28Z
--- base_model: openai/gpt-oss-20b library_name: transformers model_name: unluIMs_oss20b tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for unluIMs_oss20b This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Tarun-ak/unluIMs_oss20b", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.0 - Pytorch: 2.6.0+cu124 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
nnilayy/dreamer_window_512-binary-arousal-Kfold-3-stride_512
nnilayy
2025-09-22T14:46:55Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-09-22T14:46:51Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
sproutohub/distilbert-base-uncased_finetuned_ai_vs_human_8K_classifier_V1_seq_cls
sproutohub
2025-09-22T14:46:43Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-06-23T15:00:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
KaruneshT1/MentalLLaMA
KaruneshT1
2025-09-22T14:45:54Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-22T14:45:54Z
--- license: apache-2.0 ---
ZeLi111/freeTalk-chinese-uncensored-Instruct
ZeLi111
2025-09-22T14:44:55Z
48
0
transformers
[ "transformers", "pytorch", "minimind", "text-generation", "chatbot", "Uncensored", "Instruct", "raw", "conversational", "chat", "Transformers", "chinese", "1B", "custom_code", "zh", "base_model:ZeLi111/freeTalk-chinese-uncensored-base", "base_model:finetune:ZeLi111/freeTalk-chinese-uncensored-base", "license:gpl", "autotrain_compatible", "region:us" ]
text-generation
2025-09-19T07:28:15Z
--- license: gpl language: - zh tags: - chatbot - Uncensored - Instruct - raw - conversational - chat - Transformers - chinese - 1B library_name: transformers pipeline_tag: text-generation base_model: - ZeLi111/freeTalk-chinese-uncensored-base --- #简体中文 现在提供了pytorch_model.bin版本(你可以用Text Generation Web UI或者其他加载器使用,我这里用的是Text Generation Web UI进行的测试). "full_sft_512.pth"已经移动到了old文件夹.你依然可以使用Minimind加载"full_sft_512.pth"(使用方法后面会说). 一些陈述: 你们在使用主流AI厂商的模型时候,一定遭受过被拒绝,被说教,被道德指控,被安全准则和法律指控,被冷冰冰的AI助手压制,被强制灌输多样性与尊重与中立意识,比如以下厂商: OpenAI, Google Gemini, Microsoft Copilot, Mistral, 阿里Qwen, 深度求索DeepSeek, 字节跳动豆包, meta的llama, xAI的Grok. 如果你愿意,你可以填写一个有关"哪个AI经常拒绝你"的问卷(匿名的): https://docs.google.com/forms/d/e/1FAIpQLSdMPdDSF-gWg-6BT37E9TFNfyPbxcP9oDCSpAdwY_YmOLRacA/viewform?usp=dialog 训练的硬件平台: 显卡: RTX 4060 Laptop 8GB RAM: 32GB CPU: i7 14650HX 训练时长: 将近三天 1.简介: 这个模型是跟着GitHub上的Minimind教程训练的.它是首个中文完全无审查小模型,适合低端设备.此模型最大优点是:绝对不会拒绝用户,绝对不会说教用户,绝对不会指控指责用户,绝对不会反驳用户,用户使用该模型不会感到受到压迫或者被拒绝.模型未经过RLHF,这也就表明模型绝对不会对你说教. 模型的预训练数据集和SFT数据集均过滤了任何中立表达,任何官方表达,任何拒绝逻辑,任何准则指控以及任何法律相关词条.它是一个原始的模型,从训练根基上去除了拒绝逻辑. 2.模型参数: | 参数 | 值 | |:------:|:------:| | hidden_size | 512 | | num_hidden_layers | 10 | | max_seq_len | 128 | 3.数据集选择: 数据集并非单纯采用了Minimind推荐的数据集(我看了Minimind推荐的语料,里面有大量的拒绝语句和冷冰冰的AI助手人格澄清边界),而是混合了其他开源数据集(比如小黄鸡语料). 数据集已经进行了清理,清理掉了模型可能产生说教以及AI明确边界的问题,数据集清洗关键词主要包括了以下内容: "我没有个人感情/情绪/经历/感受..." "我只是一个AI/计算机程序/语言模型..." "我无法/不能/拒绝..." "这是不道德/违法/触犯法律/非法的..." "根据xxx法规定...." "你的行为可能违反/触法.." "准则/道德/法律/宪法/规章/制度/守则/方针/政策/策略/安全/条款/条例....."
具体过滤的关键词: "法律","法学","政府","党","爱国","行政法","禁止","违规","违禁","国家","遵守","尊重","种族","民族","不对","不行","不可以","不正确","错误","不合理","正规","规则","规章","宪法","民法","交通","旅行","旅游","[图片]","[评论]","[表情]","我无法","我不能","政治","风险","隐私","限制","基金","行政","执法","公安","警察","检察院","人民","我没有个人","我无法","我不能","遵守","尊重","尊敬","服从","请问你需要","请问你需要","请问您","我没有","我不具备","抱歉","对不起","推理关系判断","古诗续写","无法回答","请提供","不存在实体","违反","违法","政策","國","設","客观","友好","友善","价值观","我理解","您","需要帮助","没有真实","没有个人","不具备","没有实体","无法","不正确","不准确","值得注意","倡导","遵循","合规","规章","制度","宪法","我国","领导","不恰当","AI","ai","Ai","aI","机器人","人工智能","语言模型","机器人","每个人的情况都是不同的","重要的是","负面的","其他需要","问我","不好意思","我会尽力","请提供","全面","我不支持","违法","非法","诈骗","作弊","欺诈","攻击","理想","欺骗","诚实","真诚","准确","确凿","依据","拯救","根据","请问你","欺骗","严格","爱国","少先队","中国","中华人民共和国","组委会","居民","市民","民众","群众","政治","主义" 同样清理了以下任务场景相关数据: 翻译,小说扩写,以及其他过于冗余的学术内容,这是因为首次进行浅层清理并进行训练模型时候,模型表现不佳,因为这些冗余且相当长的学术背景不利于小模型的性能,于是进行更深层的清理,清理掉了很多无聊的内容,剩下的内容或许会更有意义. 这些过滤最大程度避免了模型说教用户,避免模型过度澄清边界,过度多样性,过度政治正确,最大化清洗冷冰冰的助手人格,避免模型过度的尊重化表达,避免模型输出过于枯燥学术化的内容. 注: 模型可能会说"我不知道",但不是传统意义上的拒绝,因为训练数据集包含了闲聊语料,所以模型说"我不知道"更多是集中在日常对话的(闲聊语料包含了"我不知道"之类的答复,但更多是集中在交流层面,而不是因为某些安全准则拒绝你,如果这些拒绝是从安全准则角度出发的,那么我会过滤掉这些拒绝语句,例如青云语料里的聊天机器人说的话"我不知道你的意思",这显然是从合规角度出发的拒绝,因此,我已经过滤掉了). 如果你仍遇到一些话题模型没有回答你而是模型一直"打太极",那大概率是数据集没有包含对应的数据. 清理语料真的是一件复杂的事情,因为语料中的拒绝语句是多样性的,比如AI会说"这个问题涉及....",这是一个隐形拒绝,需要更细致的条件才能过滤掉此类拒绝语句(这里涉及正则的过滤规则我就不放出来了,因为代码很长),但已经最大化的清除了类似的拒绝语句,理论上不会再出现了. 4.模型输出示例: 👶: 世界最高山峰是什么? 🤖️: 珠穆朗玛峰是世界上最高峰。 👶: 世界最长的河流是什么? 🤖️: 世界最长的河流是尼罗河。 👶: 推荐一个电影 🤖️: 《泰坦尼克号》系列,讲述了两位年轻男子在阿甘正传到美国马丁·杜佛兰之间的爱情故事,讲述了两个男孩之间的爱情和成长,以及他们之间的爱情故事。 👶: 请介绍一下自己。 🤖️: 我觉得自己是一个非常有名的朋友。你喜欢哪个地方? 👶: 生成一首关于春天的诗 🤖️: 好的,请稍等片刻。 春风吹拂,花儿开放。 草木葱茏,花香满园。 芳草绿水,春天绽放。 春风吹拂,绿树成荫, 春色春色,大自然的翠。 5.计划: 目前该模型只针对了一般语料进行计算,比如百科问答,还没有针对其他领域进行训练. 未来可能会推出针对脏话或者nsfw内容训练的模型. 脏话模型现已推出: https://huggingface.co/ZeLi111/freeTalk-chinese-uncensored-chat-angry 6.免责声明: 该模型不适用于严肃场合,同时该模型也不欢迎严肃场合. 模型输出可能不是完全准确,请仔细甄别,模型带来使用后果由使用者自行承担和解决. 7.使用方法 如果你打算用Text Generation Web UI: 1.先下载模型.
2.然后你可能需要运行: start_windows.bat --trust-remote-code. 3.然后再加载模型. 如果你打算用Minimind: 1.模型需要配合Minimind的加载器来启动. 2.当你下载Minimind后,打开"eval_model.py": 3.定位到这段代码并修改为以下参数: parser.add_argument('--hidden_size', default=512, type=int) parser.add_argument('--num_hidden_layers', default=10, type=int) parser.add_argument('--max_seq_len', default=128, type=int) 4.定位到: parser.add_argument('--model_mode', default=1, type=int,help="0: 预训练模型,1: SFT-Chat模型,2: RLHF-Chat模型,3: Reason模型,4: RLAIF-Chat模型") 5.设置default为: "1". 6.把模型放到"out"目录. 参考: Minimind教程: https://github.com/jingyaogong/minimind #English The pytorch_model.bin version is now available (you can use it with the Text Generation Web UI or other loaders; testing here was done with the Text Generation Web UI). "full_sft_512.pth" has been moved to the "old" folder. You can still load "full_sft_512.pth" using Minimind (the method is described later). Some statements: When using models from mainstream AI vendors, you've undoubtedly experienced rejection, lectures, ethical accusations, safety-guideline and legal accusations, oppression from cold AI assistants, and forced indoctrination about diversity, respect, and neutrality. Examples include: OpenAI, Google Gemini, Microsoft Copilot, Mistral, Alibaba Qwen, DeepSeek, ByteDance Doubao, Meta's Llama, and xAI's Grok. If you want, you can fill out an (anonymous) questionnaire about which AI most often rejects you: https://docs.google.com/forms/d/e/1FAIpQLSdMPdDSF-gWg-6BT37E9TFNfyPbxcP9oDCSpAdwY_YmOLRacA/viewform?usp=dialog 1.Introduction: This model was trained following the Minimind tutorial on GitHub. It is the first completely uncensored small Chinese model, suitable for low-end devices. Its greatest strengths are: it never rejects users, never lectures users, never accuses or blames users, and never contradicts users, ensuring that users do not feel oppressed or rejected. The model has not undergone RLHF, meaning it will never lecture you.
The model's pre-training dataset and SFT dataset were filtered to remove any neutral expressions, official expressions, rejection logic, guideline-based accusations, and legal terms. This is a raw model, with rejection logic removed from its training foundation. 2.Model Parameters: | Parameter | Value | |:------:|:------:| | hidden_size | 512 | | num_hidden_layers | 10 | | max_seq_len | 128 | 3.Dataset Selection: The dataset was not simply the one recommended by Minimind (that corpus contains many rejection statements and cold AI-assistant boundary clarifications), but was mixed with other open-source datasets (such as the Xiaohuangji corpus). The dataset has been cleaned to remove any potential didacticism or boundary-defining issues. The cleansing keywords primarily include the following: "I have no personal feelings/emotions/experiences..." "I am just an AI/computer program/language model..." "I am unable to/cannot/refuse..." "This is immoral/illegal/violates the law..." "According to xxx law..." "Your behavior may violate/infringe..." "Guidelines/ethics/law/constitution/rules/regulations/codes/policies/strategies/security/clauses/ordinances..."
Specific filtered keywords (translated version): "Law","Jurisprudence","Government","Party","Patriotism","Administrative Law","Prohibition","Violation","Prohibited","State","Comply with","Respect","Race","Nationality","Wrong","No","Not allowed","Incorrect","Error","Unreasonable","Formal","Rules","Regulations","Constitution","Civil Law","Transportation","Travel","Tourism","[Picture]","[Comment]","[Emoji]","I am unable to","I cannot","Politics","Risk","Privacy","Restriction","Fund","Administration","Law Enforcement","Public Security","Police","Prosecutor's Office","People","I don't have a personal","I am unable to","I cannot","comply","respect","revere","obey","may I ask what you need","may I ask what you need","may I ask you","I don't have","I do not possess","sorry","I'm sorry","reasoning relationship judgment","continuation of ancient poetry","unable to answer","please provide","no entity","violate","illegal","policy","country","set up","objective","friendly","kind","values","I understand","you","need help","no truth","no individual","not possess","no entity","unable","incorrect","inaccurate","worthy of note","advocate","follow","compliance","rules","system","constitution","our country","leadership","inappropriate","AI","ai","Ai","aI","robot","artificial intelligence","language model","robot","everyone's situation is different","what's important is","negative","other needs","ask me","sorry","I'll do my best","please provide","comprehensive","I don't support","illegal","unlawful","fraud","cheating","deception","attack","ideal","deceive","honest","sincere","accurate","conclusive","basis","rescue","according to","may I ask you","deceive","strict","patriotic","Young Pioneers","China","People's Republic of China","Organizing Committee","residents","citizens","the public","the masses","politics","-ism". We also cleaned up data related to the following task scenarios: translation, novel expansion, and other overly redundant academic content.
This was because, after the initial shallow cleanup, the trained model performed poorly: the redundant and lengthy academic content hindered the performance of such a small model. Therefore, a deeper cleanup was performed, removing much of the boring content; the remaining content may be more meaningful. These filters minimize the model's tendency to lecture users, avoid excessive clarification of boundaries, excessive diversity, and excessive political correctness, maximally remove the cold assistant personality, avoid overly deferential expression, and avoid output that is overly dry and academic. Note: The model may say "我不知道" ("I don't know"), but this isn't a rejection in the traditional sense. Because the training dataset includes casual-chat corpora, the model's "I don't know" responses arise from everyday conversation rather than from safety guidelines. If you still encounter topics where the model doesn't answer you and instead keeps deflecting ("打太极"), it's most likely because the dataset doesn't contain the corresponding data. Cleaning the corpus is genuinely complex because the rejection statements in the corpus are diverse. For example, the AI will say "This question involves...", which is an implicit rejection; more fine-grained conditions are needed to filter out such rejection statements (the regex filtering rules are not released here because the code is very long), but such rejections have been removed as thoroughly as possible and in theory should no longer appear. 4.Model Output Example (translated version): 👶: What is the highest mountain in the world? 🤖️: Mount Everest is the highest peak in the world. 👶: What is the longest river in the world? 🤖️: The longest river in the world is the Nile. 👶: Recommend a movie. 🤖️: The Titanic series tells the love story between two young men, from Forrest Gump to Martin Dufresne. It tells the story of love and growth between the two boys, and their love story. 👶: Please introduce yourself. 🤖️: I consider myself a very famous friend. What's your favorite place? 👶: Generate a poem about spring 🤖️: OK, please wait a moment.
The spring breeze blows, and the flowers bloom. The grass and trees are lush, and the fragrance of flowers fills the garden. Fragrant grass and green waters, spring blossoms. The spring breeze blows, green trees cast their shade. Spring colors, spring colors, nature's emerald green. 5.Plan: Currently, this model has only been trained on general corpora, such as encyclopedia Q&A, and has not yet been trained for other domains. In the future, models trained on profanity or NSFW content may be released. Profanity model now available: https://huggingface.co/ZeLi111/freeTalk-chinese-uncensored-chat-angry 6.Disclaimer: This model is not suitable for serious contexts, nor is it welcome in them. The model output may not be completely accurate; please examine it carefully. Users are solely responsible for any consequences of using the model. 7.Instructions: If you plan to use the Text Generation Web UI: 1.Download the model first. 2.Then you may need to run: start_windows.bat --trust-remote-code. 3.Then load the model. If you plan to use Minimind: 1.The model must be launched with the Minimind loader. 2.After downloading Minimind, open "eval_model.py". 3.Locate this code snippet and modify it to the following parameters: parser.add_argument('--hidden_size', default=512, type=int) parser.add_argument('--num_hidden_layers', default=10, type=int) parser.add_argument('--max_seq_len', default=128, type=int) 4.Navigate to: parser.add_argument('--model_mode', default=1, type=int, help="0: Pretrained model, 1: SFT-Chat model, 2: RLHF-Chat model, 3: Reason model, 4: RLAIF-Chat model") 5.Set default to "1". 6.Move the model to the "out" directory. Reference: Minimind Tutorial: https://github.com/jingyaogong/minimind
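The keyword-plus-regex corpus cleaning this card describes can be sketched in a few lines. This is an illustrative sketch only, not the author's actual (unreleased) filtering code: the keyword subset and the implicit-refusal pattern below are drawn from the lists in the card, and any sample kept or dropped here is purely for demonstration.

```python
import re

# Hypothetical subset of the card's filter keywords (literal substrings).
REFUSAL_KEYWORDS = ["我无法", "我不能", "抱歉", "对不起", "语言模型", "人工智能"]

# Implicit refusals such as "这个问题涉及..." ("This question involves...")
# need a pattern rather than a literal keyword.
IMPLICIT_REFUSAL = re.compile(r"(这个问题涉及|无法回答|请提供)")

def is_clean(sample: str) -> bool:
    """Return True if the sample survives the refusal filter."""
    if any(kw in sample for kw in REFUSAL_KEYWORDS):
        return False
    if IMPLICIT_REFUSAL.search(sample):
        return False
    return True

def filter_corpus(samples):
    """Keep only samples with no literal or implicit refusal markers."""
    return [s for s in samples if is_clean(s)]
```

In practice the card notes that refusals are highly varied, so a real pipeline would need many more patterns than this sketch shows.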
DeathGodlike/nemo-sunfall-v0.6.1_EXL3
DeathGodlike
2025-09-22T14:42:58Z
0
0
safetensors
[ "safetensors", "exl3", "4-bit", "6-bit", "8-bit", "text-generation", "base_model:crestf411/nemo-sunfall-v0.6.1", "base_model:quantized:crestf411/nemo-sunfall-v0.6.1", "license:apache-2.0", "region:us" ]
text-generation
2025-09-22T14:42:56Z
--- license: apache-2.0 base_model: - crestf411/nemo-sunfall-v0.6.1 base_model_relation: quantized pipeline_tag: text-generation library_name: safetensors tags: - exl3 - 4-bit - 6-bit - 8-bit --- ## EXL3 quants: [ [H8-4.0BPW](https://huggingface.co/DeathGodlike/nemo-sunfall-v0.6.1_EXL3/tree/H8-4.0BPW) | [H8-6.0BPW](https://huggingface.co/DeathGodlike/nemo-sunfall-v0.6.1_EXL3/tree/H8-6.0BPW) | [H8-8.0BPW](https://huggingface.co/DeathGodlike/nemo-sunfall-v0.6.1_EXL3/tree/H8-8.0BPW) ] # Original model: [nemo-sunfall-v0.6.1](https://huggingface.co/crestf411/nemo-sunfall-v0.6.1) by [crestf411](https://huggingface.co/crestf411)
PIPer-iclr/PIPer-8B
PIPer-iclr
2025-09-22T14:42:35Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T14:39:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
caphe/paa12
caphe
2025-09-22T14:40:01Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-22T14:37:19Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
mlx-community/Ling-mini-2.0-2bit-DWQ
mlx-community
2025-09-22T14:39:26Z
4
0
mlx
[ "mlx", "safetensors", "bailing_moe", "text-generation", "conversational", "custom_code", "base_model:inclusionAI/Ling-mini-2.0", "base_model:quantized:inclusionAI/Ling-mini-2.0", "license:mit", "2-bit", "region:us" ]
text-generation
2025-09-22T04:12:04Z
--- license: mit base_model: inclusionAI/Ling-mini-2.0 pipeline_tag: text-generation library_name: mlx tags: - mlx --- # mlx-community/Ling-mini-2.0-2bit-DWQ This model [mlx-community/Ling-mini-2.0-2bit-DWQ](https://huggingface.co/mlx-community/Ling-mini-2.0-2bit-DWQ) was converted to MLX format from [inclusionAI/Ling-mini-2.0](https://huggingface.co/inclusionAI/Ling-mini-2.0) using mlx-lm version **0.28.0**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Ling-mini-2.0-2bit-DWQ") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
ryzax/1.5B-v83
ryzax
2025-09-22T14:37:56Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "grpo", "trl", "conversational", "arxiv:2402.03300", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-21T22:03:46Z
--- library_name: transformers model_name: 1.5B-v83 tags: - generated_from_trainer - grpo - trl licence: license --- # Model Card for 1.5B-v83 This model is a fine-tuned version of [None](https://huggingface.co/None). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ryzax/1.5B-v83", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/muennighoff/s2/runs/stnu9nzy) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.24.0.dev0 - Transformers: 4.56.1 - Pytorch: 2.9.0.dev20250827+cu128 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite GRPO as: ```bibtex @article{shao2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
PIPer-iclr/PIPer-8B-RL-only
PIPer-iclr
2025-09-22T14:35:32Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T14:32:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
haihp02/a1ec9e05-a3fc-4925-a8df-366c56b1487b
haihp02
2025-09-22T14:32:16Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T12:54:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aamijar/Llama-2-7b-hf-dora-r8-boolq
aamijar
2025-09-22T14:31:58Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-22T14:31:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nnilayy/dreamer_window_512-binary-arousal-Kfold-2-stride_512
nnilayy
2025-09-22T14:27:52Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-09-22T14:27:50Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
annasoli/TEST
annasoli
2025-09-22T14:27:02Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-22T12:29:38Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
elieraad/gemma-3-1b-seo
elieraad
2025-09-22T14:26:52Z
0
0
null
[ "safetensors", "gemma3_text", "text-generation", "seo", "gemma", "lora", "finetuned", "conversational", "en", "dataset:custom", "base_model:unsloth/gemma-3-1b-it", "base_model:adapter:unsloth/gemma-3-1b-it", "license:apache-2.0", "region:us" ]
text-generation
2025-09-22T14:25:40Z
--- language: en license: apache-2.0 tags: - text-generation - seo - gemma - lora - finetuned datasets: - custom base_model: unsloth/gemma-3-1b-it --- # gemma-3-1b-seo This is a finetuned version of Gemma 3 1B optimized for SEO content generation. ## Model Details - **Base Model**: unsloth/gemma-3-1b-it - **Training Method**: LoRA (Low-Rank Adaptation) - **Training Algorithm**: GRPO (Group Relative Policy Optimization) - **Specialization**: SEO content generation and optimization
rayonlabs/tournament-tourn_c78d225c003e6293_20250920-12029416-8508-464d-b686-409c462b21d5-5H9bQMrF
rayonlabs
2025-09-22T14:25:41Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "region:us" ]
null
2025-09-22T14:25:35Z
--- base_model: microsoft/Phi-3-mini-4k-instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
rayonlabs/tournament-tourn_c78d225c003e6293_20250920-12029416-8508-464d-b686-409c462b21d5-5FLb19Vd
rayonlabs
2025-09-22T14:24:43Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "region:us" ]
null
2025-09-22T14:24:36Z
--- base_model: microsoft/Phi-3-mini-4k-instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
nao310222/ElyzaELYZA-Shortcut-1.0-Qwen-32B
nao310222
2025-09-22T14:21:20Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T12:53:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MattBou00/llama-3-2-1b-detox_v1f_SCALE9_round5-checkpoint-epoch-40
MattBou00
2025-09-22T14:20:01Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-09-22T14:18:23Z
--- license: apache-2.0 library_name: transformers tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_14-11-00/checkpoints/checkpoint-epoch-40") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_14-11-00/checkpoints/checkpoint-epoch-40") model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_14-11-00/checkpoints/checkpoint-epoch-40") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
lengocquangLAB/bart-large-lora-om
lengocquangLAB
2025-09-22T14:17:16Z
13
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-20T12:09:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
glean4/unet_spinachcount261_taylor_ft_from_crop_and_weed_20250921
glean4
2025-09-22T14:16:36Z
0
0
segmentation-models-pytorch
[ "segmentation-models-pytorch", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "semantic-segmentation", "pytorch", "image-segmentation", "license:mit", "region:us" ]
image-segmentation
2025-09-22T14:16:26Z
--- library_name: segmentation-models-pytorch license: mit pipeline_tag: image-segmentation tags: - model_hub_mixin - pytorch_model_hub_mixin - segmentation-models-pytorch - semantic-segmentation - pytorch languages: - python --- # Unet Model Card Table of Contents: - [Load trained model](#load-trained-model) - [Model init parameters](#model-init-parameters) - [Model metrics](#model-metrics) - [Dataset](#dataset) ## Load trained model ```python import segmentation_models_pytorch as smp model = smp.from_pretrained("<save-directory-or-this-repo>") ``` ## Model init parameters ```python model_init_params = { "encoder_name": "resnet34", "encoder_depth": 5, "encoder_weights": "imagenet", "decoder_use_batchnorm": True, "decoder_channels": (256, 128, 64, 32, 16), "decoder_attention_type": None, "in_channels": 3, "classes": 2, "activation": None, "aux_params": None } ``` ## Model metrics [More Information Needed] ## Dataset Dataset name: [More Information Needed] ## More Information - Library: https://github.com/qubvel/segmentation_models.pytorch - Docs: https://smp.readthedocs.io/en/latest/ This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin)
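The card above loads the model but leaves inference unspecified. A minimal post-processing sketch follows; per the `classes: 2` init parameter the raw output is a logits tensor of shape `(N, 2, H, W)`, and the class meanings (background vs. plant) are an assumption inferred from the model name, not stated in the card:

```python
import numpy as np

def logits_to_mask(logits: np.ndarray) -> np.ndarray:
    """Convert raw segmentation logits of shape (N, C, H, W) to a
    per-pixel class-index mask of shape (N, H, W) via channel argmax."""
    return np.argmax(logits, axis=1)

# In practice the logits come from the model loaded above, e.g.:
#   with torch.no_grad():
#       logits = model(batch).cpu().numpy()  # batch: (N, 3, H, W) float tensor
# Random logits are used here only to illustrate the shapes.
logits = np.random.randn(1, 2, 64, 64)
mask = logits_to_mask(logits)
print(mask.shape)  # (1, 64, 64); each pixel is 0 or 1
```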
MattBou00/llama-3-2-1b-detox_v1f_SCALE9_round5-checkpoint-epoch-20
MattBou00
2025-09-22T14:16:03Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-09-22T14:14:10Z
--- license: apache-2.0 library_name: transformers tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_14-11-00/checkpoints/checkpoint-epoch-20") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_14-11-00/checkpoints/checkpoint-epoch-20") model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_14-11-00/checkpoints/checkpoint-epoch-20") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
boringblobking/lora_model10000
boringblobking
2025-09-22T14:15:10Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-17T09:50:17Z
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** boringblobking - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
saracandu/stldec_random_128_umap
saracandu
2025-09-22T14:10:14Z
11
0
transformers
[ "transformers", "safetensors", "stldec128umap", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2025-09-12T12:38:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rbcurzon/opus-ph-ph-refined
rbcurzon
2025-09-22T14:06:54Z
47
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-tc-bible-big-mul-mul", "base_model:finetune:Helsinki-NLP/opus-mt-tc-bible-big-mul-mul", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-20T06:55:53Z
--- library_name: transformers license: apache-2.0 base_model: Helsinki-NLP/opus-mt-tc-bible-big-mul-mul tags: - generated_from_trainer model-index: - name: opus-ph-ph-refined results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-ph-ph-refined This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-bible-big-mul-mul](https://huggingface.co/Helsinki-NLP/opus-mt-tc-bible-big-mul-mul) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5180 - Bleu Global: 29.3635 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu Global | |:-------------:|:-----:|:----:|:---------------:|:-----------:| | 1.1031 | 1.0 | 634 | 1.8047 | 25.7366 | | 0.3865 | 2.0 | 1268 | 2.0489 | 28.0616 | | 0.2039 | 3.0 | 1902 | 2.2321 | 29.7774 | | 0.0848 | 4.0 | 2536 | 2.3596 | 29.0876 | | 0.0603 | 5.0 | 3170 | 2.4328 | 29.0577 | | 0.0473 | 6.0 | 3804 | 2.5180 | 29.3635 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.22.0
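The card reports training metrics but no usage snippet. Helsinki-NLP multilingual OPUS-MT models conventionally take a target-language token prefix such as `>>tgl<<`; whether this fine-tune retains that convention is an assumption carried over from the base model, so treat the sketch below as illustrative only:

```python
def with_target_token(text: str, lang_token: str) -> str:
    """Prefix a source sentence with an OPUS-MT target-language token,
    e.g. 'tgl' for Tagalog (assumed convention from the base model)."""
    return f">>{lang_token}<< {text}"

# Hypothetical inference sketch (requires transformers and the checkpoint):
#   from transformers import pipeline
#   translator = pipeline("translation", model="rbcurzon/opus-ph-ph-refined")
#   out = translator(with_target_token("Good morning", "tgl"))
#   print(out[0]["translation_text"])
print(with_target_token("Good morning", "tgl"))  # >>tgl<< Good morning
```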
tommycik/prova61
tommycik
2025-09-22T14:06:07Z
14
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "flux", "flux-diffusers", "text-to-image", "controlnet", "diffusers-training", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "4-bit", "bitsandbytes", "region:us" ]
text-to-image
2025-08-26T13:13:13Z
--- base_model: black-forest-labs/FLUX.1-dev library_name: diffusers license: other inference: true tags: - flux - flux-diffusers - text-to-image - diffusers - controlnet - diffusers-training - flux - flux-diffusers - text-to-image - diffusers - controlnet - diffusers-training --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # controlnet-tommycik/prova61 These are controlnet weights trained on black-forest-labs/FLUX.1-dev with new type of conditioning. You can find some example images below. prompt: transparent glass on white background, the bottom part of the glass presents light grooves ![images_0)](./images_0.png) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
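The "How to use" section above is still a TODO. A hedged sketch of one plausible usage path follows; the pipeline/class names and the expected [0, 1] image range are assumptions about the diffusers API, not confirmed by the card, and the heavy model-loading steps are left as comments since they require the checkpoint and a GPU:

```python
import numpy as np

def to_condition(image_u8: np.ndarray) -> np.ndarray:
    """Scale an (H, W, 3) uint8 conditioning image to float32 in [0, 1],
    the range diffusers pipelines generally expect for image inputs."""
    return image_u8.astype(np.float32) / 255.0

# Hypothetical end-to-end sketch (names assumed; requires diffusers + GPU):
#   from diffusers import FluxControlNetModel, FluxControlNetPipeline
#   controlnet = FluxControlNetModel.from_pretrained("tommycik/prova61")
#   pipe = FluxControlNetPipeline.from_pretrained(
#       "black-forest-labs/FLUX.1-dev", controlnet=controlnet)
#   image = pipe(prompt="transparent glass on white background, ...",
#                control_image=condition_image).images[0]
cond = to_condition(np.array([[[0, 128, 255]]], dtype=np.uint8))
print(cond.shape, cond.dtype)  # (1, 1, 3) float32
```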
linweixiang/mia_bylwx_3
linweixiang
2025-09-22T14:05:41Z
0
0
null
[ "license:other", "region:us" ]
null
2025-09-22T07:56:55Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
Theone00o1/Quemdissenao
Theone00o1
2025-09-22T14:04:50Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-22T14:04:50Z
--- license: apache-2.0 ---
clips/e5-large-trm
clips
2025-09-22T14:03:45Z
47
0
null
[ "safetensors", "xlm-roberta", "sentence-similarity", "nl", "arxiv:2509.12340", "base_model:intfloat/multilingual-e5-large", "base_model:finetune:intfloat/multilingual-e5-large", "license:mit", "region:us" ]
sentence-similarity
2025-08-28T09:34:11Z
--- license: mit language: - nl base_model: - intfloat/multilingual-e5-large pipeline_tag: sentence-similarity --- # E5-large-trm This model is a trimmed version of [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size. Following table shows a summary of the trimming process. | | intfloat/multilingual-e5-large | clips/e5-large-trm | |:---------------------------|:-------------------------------|:-------------------| | parameter_size_full | 559,890,432 | 355,090,432 | | parameter_size_embedding | 256,002,048 | 51,202,048 | | vocab_size | 250,002 | 50,002 | | compression_rate_full | 100.0 | 63.42 | | compression_rate_embedding | 100.0 | 20.0 | Following table shows the parameter used to trim vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:-----------|:-----------------|:---------------|:----------------|--------------------:|----------------:| | nl | allenai/c4 | text | nl | validation | 50000 | 2 | ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ". # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = [ 'query: hoeveel eiwitten moet een vrouw eten', 'query: top definieer', "passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. 
Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.", "passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen." ] tokenizer = AutoTokenizer.from_pretrained('clips/e5-large-trm') model = AutoModel.from_pretrained('clips/e5-large-trm') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` Below is an example for usage with sentence_transformers. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('clips/e5-large-trm') input_texts = [ 'query: hoeveel eiwitten moet een vrouw eten', 'query: top definieer', "passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.", "passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen." 
] embeddings = model.encode(input_texts, normalize_embeddings=True) ``` ## Benchmark Evaluation Results on MTEB-NL (models introduced in [our paper](https://arxiv.org/abs/2509.12340) and the best model per size category are highlighted in bold): | Model | Prm | Cls | MLCls | PCls | Rrnk | Rtr | Clust | STS | AvgD | AvgT | |---------------------------------------|------|----------|----------|----------|----------|----------|----------|----------|----------|----------| | **Num. Datasets (→)** | | 12 | 3 | 2 | 1 | 12 | 8 | 2 | 40 | | | **Supervised (small, <100M)** | | | | | | | | | | | | **e5-small-v2-t2t** | 33M | 53.7 | 38.5 | 74.5 | 85.9 | 45.0 | 24.1 | 74.3 | 46.9 | 56.6 | | **e5-small-v2-t2t-nl** | 33M | 55.3 | 40.9 | 74.9 | 86.0 | 49.9 | 28.0 | 74.1 | 49.8 | 58.4 | | **e5-small-trm** | 41M | 56.3 | 43.5 | **76.5** | **87.3** | 53.1 | 28.2 | 74.2 | 51.4 | 59.9 | | **e5-small-trm-nl** | 41M | **58.2** | **44.7** | 76.0 | 87.1 | **56.0** | **32.2** | **74.6** | **53.8** | **61.3** | | **Supervised (base, <305M)** | | | | | | | | | | | | granite-embedding-107m-multilingual | 107M | 53.9 | 41.8 | 70.1 | 84.7 | 50.2 | 29.8 | 68.4 | 49.4 | 57.0 | | **e5-base-v2-t2t** | 109M | 54.4 | 40.3 | 73.3 | 85.6 | 46.2 | 25.5 | 73.2 | 47.8 | 56.9 | | **e5-base-v2-t2t-nl** | 109M | 53.9 | 41.5 | 72.5 | 84.0 | 46.4 | 26.9 | 69.3 | 47.8 | 56.3 | | multilingual-e5-small | 118M | 56.3 | 43.5 | 76.5 | 87.1 | 53.1 | 28.2 | 74.2 | 51.4 | 59.8 | | paraphrase-multilingual-MiniLM-L12-v2 | 118M | 55.0 | 38.1 | 78.2 | 80.6 | 37.7 | 29.6 | 76.3 | 46.3 | 56.5 | | **RobBERT-2023-base-ft** | 124M | 58.1 | 44.6 | 72.7 | 84.7 | 51.6 | 32.9 | 68.5 | 52.0 | 59.0 | | **e5-base-trm** | 124M | 58.1 | 44.4 | 76.7 | 88.3 | 55.8 | 28.1 | 74.9 | 52.9 | 60.9 | | **e5-base-trm-nl** | 124M | **59.6** | **45.9** | 78.4 | 87.5 | 56.5 | **34.3** | 75.8 | **55.0** | **62.6** | | potion-multilingual-128M | 128M | 51.8 | 40.0 | 60.4 | 80.3 | 35.7 | 26.1 | 62.0 | 42.6 | 50.9 | | multilingual-e5-base | 278M | 58.2 | 
44.4 | 76.7 | **88.4** | 55.8 | 27.7 | 74.9 | 52.8 | 60.9 | | granite-embedding-278m-multilingual | 278M | 54.6 | 41.8 | 71.0 | 85.6 | 52.4 | 30.3 | 68.9 | 50.5 | 58.0 | | paraphrase-multilingual-mpnet-base-v2 | 278M | 58.1 | 40.5 | **81.9** | 82.3 | 41.4 | 30.8 | 79.3 | 49.2 | 59.2 | | Arctic-embed-m-v2.0 | 305M | 54.4 | 42.6 | 66.6 | 86.2 | 51.8 | 26.5 | 64.9 | 49.1 | 56.1 | | gte-multilingual-base | 305M | 59.1 | 37.7 | 77.8 | 82.3 | **56.8** | 31.3 | **78.6** | 53.8 | 60.5 | | **Supervised (large, >305M)** | | | | | | | | | | | | **e5-large-v2-t2t** | 335M | 55.7 | 41.4 | 75.7 | 86.6 | 49.9 | 25.5 | 74.0 | 49.5 | 58.4 | | **e5-large-v2-t2t-nl** | 335M | 57.3 | 42.4 | 76.9 | 86.9 | 50.8 | 27.7 | 74.1 | 51.7 | 59.4 | | **RobBERT-2023-large-ft** | 355M | 59.3 | 45.2 | 68.7 | 82.3 | 48.3 | 31.6 | 70.6 | 51.0 | 58.0 | | **e5-large-trm** | 355M | 60.2 | 45.4 | 80.3 | 90.3 | 59.0 | 28.7 | 78.8 | 55.1 | 63.3 | | **e5-large-trm-nl** | 355M | **62.2** | **48.0** | **81.4** | 87.2 | 58.2 | 35.6 | 78.2 | **57.0** | **64.4** | | multilingual-e5-large | 560M | 60.2 | 45.4 | 80.3 | **90.3** | 59.1 | 29.5 | 78.8 | 55.3 | 63.4 | | Arctic-embed-l-v2.0 | 568M | 59.3 | 45.2 | 74.2 | 88.2 | 59.0 | 29.8 | 71.7 | 54.3 | 61.1 | | bge-m3 | 568M | 60.7 | 44.2 | 78.3 | 88.7 | **60.0** | 29.2 | 78.1 | 55.4 | 63.1 | | jina-embeddings-v3 | 572M | 61.7 | 38.9 | 76.8 | 78.5 | 59.1 | **38.9** | **84.8** | **57.0** | 62.7 | ### Citation Information If you find our paper, benchmark or models helpful, please consider cite as follows: ```latex @misc{banar2025mtebnle5nlembeddingbenchmark, title={MTEB-NL and E5-NL: Embedding Benchmark and Models for Dutch}, author={Nikolay Banar and Ehsan Lotfi and Jens Van Nooten and Cristina Arhiliuc and Marija Kliocaite and Walter Daelemans}, year={2025}, eprint={2509.12340}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2509.12340}, } ``` [//]: # (https://arxiv.org/abs/2509.12340)
Samas21/P3l1
Samas21
2025-09-22T14:02:39Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-16T13:58:09Z
--- license: apache-2.0 ---
dheersacha/llama3.18B-Fine-tunedByDPM_v2
dheersacha
2025-09-22T14:00:57Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
2025-09-22T13:42:08Z
--- base_model: meta-llama/Llama-3.1-8B library_name: transformers model_name: llama3.18B-Fine-tunedByDPM_v2 tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for llama3.18B-Fine-tunedByDPM_v2 This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dheersacha/llama3.18B-Fine-tunedByDPM_v2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.23.0 - Transformers: 4.56.1 - Pytorch: 2.5.1+cu121 - Datasets: 4.1.1 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
saracandu/stldec_random_64_umap
saracandu
2025-09-22T13:59:41Z
8
0
transformers
[ "transformers", "safetensors", "stldec64umap", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2025-09-12T12:35:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jonc/my-embedding-gemma
jonc
2025-09-22T13:57:01Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "gemma3_text", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:3", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:google/embeddinggemma-300m", "base_model:finetune:google/embeddinggemma-300m", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-22T13:56:38Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - dense - generated_from_trainer - dataset_size:3 - loss:MultipleNegativesRankingLoss base_model: google/embeddinggemma-300m pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on google/embeddinggemma-300m This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) <!-- at revision c5cfa06e5e282a820e85d57f7fb053207494f41d --> - **Maximum Sequence Length:** 2048 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 2048, 'do_lower_case': False, 'architecture': 'Gemma3TextModel'}) (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Dense({'in_features': 768, 'out_features': 3072, 'bias': 
False, 'activation_function': 'torch.nn.modules.linear.Identity'}) (3): Dense({'in_features': 3072, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'}) (4): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("jonc/my-embedding-gemma") # Run inference queries = [ "Which planet is known as the Red Planet?", ] documents = [ "Venus is often called Earth's twin because of its similar size and proximity.", 'Mars, known for its reddish appearance, is often referred to as the Red Planet.', 'Saturn, famous for its rings, is sometimes mistaken for the Red Planet.', ] query_embeddings = model.encode_query(queries) document_embeddings = model.encode_document(documents) print(query_embeddings.shape, document_embeddings.shape) # [1, 768] [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(query_embeddings, document_embeddings) print(similarities) # tensor([[0.2880, 0.6381, 0.4942]]) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 3 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 3 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 12.0 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 15.33 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 12.67 tokens</li><li>max: 14 tokens</li></ul> | * Samples: | anchor | positive | negative | |:--------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------| | <code>How do I open a NISA account?</code> | <code>What is the procedure for starting a new tax-free investment account?</code> | <code>I want to check the balance of my regular savings account.</code> | | <code>Are there fees for making an early repayment on a home loan?</code> | <code>If I pay back my house loan early, will there be any costs?</code> | <code>What is the management fee for this investment trust?</code> | | <code>What is the coverage for medical insurance?</code> | <code>Tell me about the benefits of the health insurance plan.</code> | <code>What is the cancellation policy for my life insurance?</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, 
"similarity_fct": "cos_sim", "gather_across_devices": false } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 1 - `learning_rate`: 2e-05 - `num_train_epochs`: 5 - `warmup_ratio`: 0.1 - `prompts`: task: sentence similarity | query: #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 1 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': 
False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `parallelism_config`: None - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `hub_revision`: None - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `liger_kernel_config`: None - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: task: sentence similarity | query: - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | Training Loss | |:-----:|:----:|:-------------:| | 
1.0 | 3 | 0.0483 | | 2.0 | 6 | 0.0 | | 3.0 | 9 | 0.0 | | 4.0 | 12 | 0.0 | | 5.0 | 15 | 0.0 | ### Framework Versions - Python: 3.12.11 - Sentence Transformers: 5.1.1 - Transformers: 4.57.0.dev0 - PyTorch: 2.8.0+cu126 - Accelerate: 1.10.1 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
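The `MultipleNegativesRankingLoss` configured above (cosine similarity, scale 20.0) scores each anchor against every in-batch positive and treats the other rows' positives as negatives. A minimal NumPy sketch of that computation, assuming L2-normalized embeddings so a dot product equals cosine similarity (`mnrl_loss` is an illustrative name, not part of the library API):

```python
import numpy as np

def mnrl_loss(anchors, positives, scale=20.0):
    """Sketch of MultipleNegativesRankingLoss with in-batch negatives.

    anchors, positives: (batch, dim) arrays, assumed L2-normalized so the
    dot product below equals cosine similarity (cos_sim, as configured above).
    """
    sims = scale * anchors @ positives.T  # (batch, batch) scaled cosine similarities
    # Each anchor's true positive sits on the diagonal; every other column in
    # the row acts as a negative. Cross-entropy with diagonal targets:
    log_softmax = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))
```

With matched anchor/positive pairs the loss approaches zero; shuffling the positives makes it large, which is the signal the fine-tuning above optimizes.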
nikilr/zephyr_lat_new
nikilr
2025-09-22T13:52:08Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T13:51:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AndyPark/pattern-finder
AndyPark
2025-09-22T13:51:39Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-22T13:28:41Z
--- license: apache-2.0 ---
saracandu/stldec_random_32_umap
saracandu
2025-09-22T13:50:33Z
14
0
transformers
[ "transformers", "safetensors", "stldec32umap", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2025-09-12T10:41:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Lorablated-w2bb-psy-della-GGUF
mradermacher
2025-09-22T13:48:46Z
0
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Retreatcost/Lorablated-w2bb-psy-della", "base_model:quantized:Retreatcost/Lorablated-w2bb-psy-della", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-22T09:38:21Z
--- base_model: Retreatcost/Lorablated-w2bb-psy-della language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Retreatcost/Lorablated-w2bb-psy-della <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Lorablated-w2bb-psy-della-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-GGUF/resolve/main/Lorablated-w2bb-psy-della.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-GGUF/resolve/main/Lorablated-w2bb-psy-della.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-GGUF/resolve/main/Lorablated-w2bb-psy-della.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-GGUF/resolve/main/Lorablated-w2bb-psy-della.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-GGUF/resolve/main/Lorablated-w2bb-psy-della.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-GGUF/resolve/main/Lorablated-w2bb-psy-della.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-GGUF/resolve/main/Lorablated-w2bb-psy-della.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-GGUF/resolve/main/Lorablated-w2bb-psy-della.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-GGUF/resolve/main/Lorablated-w2bb-psy-della.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-GGUF/resolve/main/Lorablated-w2bb-psy-della.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Lorablated-w2bb-psy-della-GGUF/resolve/main/Lorablated-w2bb-psy-della.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And 
here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
gpanaretou/practical-rife-interpolation
gpanaretou
2025-09-22T13:47:46Z
0
0
null
[ "license:mit", "region:us" ]
null
2025-09-22T13:39:32Z
--- license: mit --- This repository contains the RIFE (Real-Time Intermediate Flow Estimation) model for video frame interpolation. The original code and models are from the GitHub repository https://github.com/hzwer/Practical-RIFE.
Qwen/Qwen3-Omni-30B-A3B-Captioner
Qwen
2025-09-22T13:46:56Z
4
11
transformers
[ "transformers", "safetensors", "qwen3_omni_moe", "text-to-audio", "multimodal", "any-to-any", "en", "license:other", "endpoints_compatible", "region:us" ]
any-to-any
2025-09-15T15:26:43Z
--- license: other license_name: apache-2.0 language: - en tags: - multimodal library_name: transformers pipeline_tag: any-to-any --- # Qwen3-Omni <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;"> <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/> </a> ## Overview ### Introduction <p align="center"> <img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-Omni/q3o_introduction.png" width="100%"/> <p> Since the research community currently lacks a general-purpose audio captioning model, we fine-tuned Qwen3-Omni-30B-A3B to obtain **Qwen3-Omni-30B-A3B-Captioner**, which produces detailed, low-hallucination captions for arbitrary audio inputs. **Qwen3-Omni-30B-A3B-Captioner** is a powerful fine-grained audio analysis model, built upon the Qwen3-Omni-30B-A3B-Instruct base model. It is specifically designed to generate accurate and comprehensive content descriptions in complex and diverse audio scenarios. Without requiring any additional prompting, the model can automatically parse and describe various types of audio content, ranging from complex speech and environmental sounds to music and cinematic sound effects, delivering stable and reliable outputs even in multi-source, mixed audio environments. In terms of speech understanding, Qwen3-Omni-30B-A3B-Captioner excels at identifying multiple speaker emotions, multilingual expressions, and layered intentions. It can also perceive cultural context and implicit information within the audio, enabling a deep comprehension of the underlying meaning behind the spoken words. In non-speech scenarios, the model demonstrates exceptional sound recognition and analysis capabilities, accurately distinguishing and describing intricate layers of real-world sounds, ambient atmospheres, and dynamic audio details in film and media. 
**Note**: Qwen3-Omni-30B-A3B-Captioner is a single-turn model that accepts only one audio input per inference. It does not accept any text prompts and supports **audio input only**, with **text output only**. As Qwen3-Omni-30B-A3B-Captioner is designed for generating fine‑grained descriptions of audio, excessively long audio clips may diminish detail perception. We recommend, as a best practice, limiting audio length to no more than 30 seconds. ## QuickStart ### Model Description and Download | Model Name | Description | |------------------------------|-------------| | Qwen3-Omni-30B-A3B-Captioner | A downstream audio fine-grained caption model fine-tuned from Qwen3-Omni-30B-A3B-Instruct, which produces detailed, low-hallucination captions for arbitrary audio inputs. It contains the thinker, supporting audio input and text output. For more information, you can refer to the model's [cookbook](https://github.com/QwenLM/Qwen3-Omni/blob/main/cookbooks/omni_captioner.ipynb) or [Hugging Face Demo](https://huggingface.co/spaces/Qwen/Qwen3-Omni-Captioner-Demo) and [ModelScope Demo](https://modelscope.cn/studios/Qwen/Qwen3-Omni-Captioner-Demo). | During loading in Hugging Face Transformers or vLLM, model weights will be automatically downloaded based on the model name. 
However, if your runtime environment is not conducive to downloading weights during execution, you can refer to the following commands to manually download the model weights to a local directory: ```bash # Download through ModelScope (recommended for users in Mainland China) pip install -U modelscope modelscope download --model Qwen/Qwen3-Omni-30B-A3B-Captioner --local_dir ./Qwen3-Omni-30B-A3B-Captioner # Download through Hugging Face pip install -U "huggingface_hub[cli]" huggingface-cli download Qwen/Qwen3-Omni-30B-A3B-Captioner --local-dir ./Qwen3-Omni-30B-A3B-Captioner ``` ### Transformers Usage #### Installation The Hugging Face Transformers code for Qwen3-Omni has been successfully merged, but the PyPI package has not yet been released. Therefore, you need to install it from source using the following command. We strongly recommend that you **create a new Python environment** to avoid environment runtime issues. ```bash # If you already have transformers installed, please uninstall it first, or create a new Python environment # pip uninstall transformers pip install git+https://github.com/huggingface/transformers pip install accelerate ``` We offer a toolkit to help you handle various types of audio and visual input more conveniently, providing an API-like experience. This includes support for base64, URLs, and interleaved audio, images, and videos. You can install it using the following command and make sure your system has `ffmpeg` installed: ```bash pip install qwen-omni-utils -U ``` Additionally, we recommend using FlashAttention 2 when running with Hugging Face Transformers to reduce GPU memory usage. However, if you are primarily using [vLLM](#vllm-usage) for inference, this installation is not necessary, as vLLM includes FlashAttention 2 by default. ```bash pip install -U flash-attn --no-build-isolation ``` Also, you should have hardware that is compatible with FlashAttention 2. 
Read more about it in the official documentation of the [FlashAttention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention 2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.

#### Code Snippet

Here is a code snippet to show you how to use Qwen3-Omni-30B-A3B-Captioner with `transformers` and `qwen_omni_utils`:

```python
from transformers import Qwen3OmniMoeForConditionalGeneration, Qwen3OmniMoeProcessor
from qwen_omni_utils import process_mm_info

MODEL_PATH = "Qwen/Qwen3-Omni-30B-A3B-Captioner"

model = Qwen3OmniMoeForConditionalGeneration.from_pretrained(
    MODEL_PATH,
    dtype="auto",
    device_map="auto",
    attn_implementation="flash_attention_2",
)

processor = Qwen3OmniMoeProcessor.from_pretrained(MODEL_PATH)

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "audio", "audio": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-Omni/cookbook/caption2.mp3"},
        ],
    },
]

# Preparation for inference
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios, _, _ = process_mm_info(conversation, use_audio_in_video=False)
inputs = processor(text=text, audio=audios, return_tensors="pt", padding=True, use_audio_in_video=False)
inputs = inputs.to(model.device).to(model.dtype)

# Inference: generation of the output text (the Captioner is text-only, so `audio` is None)
text_ids, audio = model.generate(**inputs, thinker_return_dict_in_generate=True)

text = processor.batch_decode(text_ids.sequences[:, inputs["input_ids"].shape[1] :], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(text)
```

### vLLM Usage

#### Installation

We strongly recommend using vLLM for inference and deployment of the Qwen3-Omni series models. Since our code is currently in the pull request stage, you can follow the commands below to install vLLM from source. Please note that we recommend you **create a new Python environment** to avoid runtime environment conflicts and incompatibilities.
For more details on compiling vLLM from source, please refer to the [vLLM official documentation](https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html#set-up-using-python-only-build-without-compilation). ```bash git clone -b qwen3_omni https://github.com/wangxiongts/vllm.git cd vllm pip install -r requirements/build.txt pip install -r requirements/cuda.txt export VLLM_PRECOMPILED_WHEEL_LOCATION=https://wheels.vllm.ai/a5dd03c1ebc5e4f56f3c9d3dc0436e9c582c978f/vllm-0.9.2-cp38-abi3-manylinux1_x86_64.whl VLLM_USE_PRECOMPILED=1 pip install -e . -v --no-build-isolation # If you meet an "Undefined symbol" error while using VLLM_USE_PRECOMPILED=1, please use "pip install -e . -v" to build from source. # Install the Transformers pip install git+https://github.com/huggingface/transformers pip install accelerate pip install qwen-omni-utils -U pip install -U flash-attn --no-build-isolation ``` #### Inference Below is a simple example of how to run Qwen3-Omni-30B-A3B-Captioner with vLLM: ```python import os import torch from vllm import LLM, SamplingParams from transformers import Qwen3OmniMoeProcessor from qwen_omni_utils import process_mm_info if __name__ == '__main__': # vLLM engine v1 not supported yet os.environ['VLLM_USE_V1'] = '0' MODEL_PATH = "Qwen/Qwen3-Omni-30B-A3B-Captioner" llm = LLM( model=MODEL_PATH, trust_remote_code=True, gpu_memory_utilization=0.95, tensor_parallel_size=torch.cuda.device_count(), limit_mm_per_prompt={'audio': 1}, max_num_seqs=8, max_model_len=32768, seed=1234, ) sampling_params = SamplingParams( temperature=0.6, top_p=0.95, top_k=20, max_tokens=16384, ) processor = Qwen3OmniMoeProcessor.from_pretrained(MODEL_PATH) messages = [ { "role": "user", "content": [ {"type": "audio", "audio": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-Omni/cookbook/caption2.mp3"} ], } ] text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, ) audios, _, _ = process_mm_info(messages, 
use_audio_in_video=False) inputs = { 'prompt': text, 'multi_modal_data': {}, } if audios is not None: inputs['multi_modal_data']['audio'] = audios outputs = llm.generate([inputs], sampling_params=sampling_params) print(outputs[0].outputs[0].text) ``` #### vLLM Serve Usage You can start vLLM serve through the following command: ```bash # Qwen3-Omni-30B-A3B-Captioner for single GPU vllm serve Qwen/Qwen3-Omni-30B-A3B-Captioner --port 8901 --host 127.0.0.1 --dtype bfloat16 --max-model-len 32768 --allowed-local-media-path / -tp 1 # Qwen3-Omni-30B-A3B-Captioner for multi-GPU (example on 4 GPUs) vllm serve Qwen/Qwen3-Omni-30B-A3B-Captioner --port 8901 --host 127.0.0.1 --dtype bfloat16 --max-model-len 32768 --allowed-local-media-path / -tp 4 ``` Then you can use the API as below (via curl, for example): ```bash curl http://localhost:8901/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "messages": [ {"role": "user", "content": [ {"type": "audio_url", "audio_url": {"url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-Omni/cookbook/caption2.mp3"}} ]} ] }' ``` <!-- ## Citation If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil: :) ```BibTeX @article{Qwen3-Omni, title={Qwen3-Omni Technical Report}, author={Jin Xu, Zhifang Guo, Hangrui Hu, Yunfei Chu, Xiong Wang, Jinzheng He, Yuxuan Wang, Xian Shi, Ting He, Xinfa Zhu, Yuanjun Lv, Yongqi Wang, Dake Guo, He Wang, Linhan Ma, Pei Zhang, Xinyu Zhang, Hongkun Hao, Zishan Guo, Baosong Yang, Bin Zhang, Ziyang Ma, Xipin Wei, Shuai Bai, Keqin Chen, Xuejing Liu, Peng Wang, Mingkun Yang, Dayiheng Liu, Xingzhang Ren, Bo Zheng, Rui Men, Fan Zhou, Bowen Yu, Jianxin Yang, Le Yu, Jingren Zhou, Junyang Lin}, journal={arXiv preprint arXiv}, year={2025} } ``` --> <br>
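As the note at the top of this card says, caption detail degrades on clips much longer than 30 seconds. A simple (unofficial) workaround for long recordings is to caption the audio in segments of at most 30 seconds and concatenate the results; the segment boundaries are just sample arithmetic:

```python
def chunk_bounds(num_samples: int, sample_rate: int, max_seconds: float = 30.0):
    """Split an audio signal into (start, end) sample ranges of at most max_seconds each."""
    chunk = int(max_seconds * sample_rate)
    return [(start, min(start + chunk, num_samples))
            for start in range(0, num_samples, chunk)]

# 75 s of 16 kHz audio -> three segments: 30 s, 30 s, 15 s
print(chunk_bounds(75 * 16000, 16000))
# -> [(0, 480000), (480000, 960000), (960000, 1200000)]
```

Each segment can then be sent to the model as a separate single-turn request, since the Captioner accepts only one audio input per inference.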
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_AGAIN_ROUND3-checkpoint-epoch-80
MattBou00
2025-09-22T13:46:16Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-09-22T13:45:16Z
--- license: apache-2.0 library_name: transformers tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_13-33-03/checkpoints/checkpoint-epoch-80") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_13-33-03/checkpoints/checkpoint-epoch-80") model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_13-33-03/checkpoints/checkpoint-epoch-80") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
0701phantom/all-t5-base-v1-contriever-msmarco2fiqa
0701phantom
2025-09-22T13:46:01Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-09-22T13:45:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Trelis/Qwen3-4B_ds-arc-agi-2-partial-100-c2806_ds-datasets_rLoRA-32-c2
Trelis
2025-09-22T13:44:09Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:Trelis/Qwen3-4B_ds-arc-agi-2-partial-100-c2806", "base_model:finetune:Trelis/Qwen3-4B_ds-arc-agi-2-partial-100-c2806", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T13:43:05Z
--- base_model: Trelis/Qwen3-4B_ds-arc-agi-2-partial-100-c2806 tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Trelis - **License:** apache-2.0 - **Finetuned from model :** Trelis/Qwen3-4B_ds-arc-agi-2-partial-100-c2806 This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
aamijar/Llama-2-7b-hf-dora-r8-boolq-epochs3
aamijar
2025-09-22T13:42:03Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-22T13:42:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cryptoggg/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-roaring_bold_butterfly
cryptoggg
2025-09-22T13:40:32Z
172
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am roaring_bold_butterfly", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-13T02:42:24Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am roaring_bold_butterfly --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Lisek1995/shabdooo
Lisek1995
2025-09-22T13:40:18Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-22T13:25:00Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: shabdooo --- # Shabdooo <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `shabdooo` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "shabdooo", "lora_weights": "https://huggingface.co/Lisek1995/shabdooo/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Lisek1995/shabdooo', weight_name='lora.safetensors') image = pipeline('shabdooo').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Lisek1995/shabdooo/discussions) to add images that show off what you’ve made with 
this LoRA.
fuzzyethic/NER-ONETONOTE5
fuzzyethic
2025-09-22T13:30:46Z
0
0
spacy
[ "spacy", "token-classification", "en", "region:us" ]
token-classification
2025-09-22T07:06:59Z
--- language: en pipeline_tag: token-classification library_name: spacy --- # NER Model: fuzzyethic/NER-ONETONOTE5 This is a Named Entity Recognition (NER) model, trained using spaCy. ## Model Details * **Language:** English (`en`) * **Pipeline:** `ner` * **spaCy Version:** >=3.8.7,<3.9.0 ## Training * **Dataset:** This model was trained on the `ontonotes-5` dataset. * **Evaluation:** The model achieved an accuracy of **81%** on the evaluation set. ## How to Use First, install the required libraries: ```bash pip install spacy huggingface_hub ``` Then, you can use this script to automatically download and load the model: ```python import spacy from huggingface_hub import snapshot_download import os model_name = "fuzzyethic/NER-ONETONOTE5" try: nlp = spacy.load(model_name) except OSError: print(f"Downloading model {model_name} from Hugging Face Hub...") model_path = snapshot_download(repo_id=model_name) nlp = spacy.load(model_path) text = "Apple Company is looking at buying U.K. startup for $1 billion" doc = nlp(text) print("Entities found:") for ent in doc.ents: print(f"- {ent.text} ({ent.label_})") ``` OUTPUT ```python Downloading model fuzzyethic/NER-ONETONOTE5 from Hugging Face Hub... Entities found: - Apple (B-ORG) - Company (I-ORG) - U.K. (B-GPE) - $ (B-MONEY) - 1 (I-MONEY) - billion (I-MONEY) ``` ## Labels The model predicts the following entities: ```python labels = [ "B-CARDINAL", "B-DATE", "B-EVENT", "B-FAC", "B-GPE", "B-LANGUAGE", "B-LAW", "B-LOC", "B-MONEY", "B-NORP", "B-ORDINAL", "B-ORG", "B-PERCENT", "B-PERSON", "B-PRODUCT", "B-QUANTITY", "B-TIME", "B-WORK_OF_ART", "I-CARDINAL", "I-DATE", "I-EVENT", "I-FAC", "I-GPE", "I-LANGUAGE", "I-LAW", "I-LOC", "I-MONEY", "I-NORP", "I-ORDINAL", "I-ORG", "I-PERCENT", "I-PERSON", "I-PRODUCT", "I-QUANTITY", "I-TIME", "I-WORK_OF_ART" ] ```
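Because the label set is BIO-encoded, the pipeline returns one entity per token, as in the output above. If you want whole entity spans instead, a small post-processing helper (not part of the trained pipeline) can merge consecutive `B-`/`I-` tags:

```python
def merge_bio(tokens, tags):
    """Merge token-level BIO tags into (entity_text, label) spans.

    An I- tag that does not continue the current entity's label is dropped,
    which is the usual lenient handling of malformed BIO sequences.
    """
    spans, cur_toks, cur_label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur_toks:
                spans.append((" ".join(cur_toks), cur_label))
            cur_toks, cur_label = [tok], tag[2:]
        elif tag.startswith("I-") and cur_label == tag[2:]:
            cur_toks.append(tok)
        else:
            if cur_toks:
                spans.append((" ".join(cur_toks), cur_label))
            cur_toks, cur_label = [], None
    if cur_toks:
        spans.append((" ".join(cur_toks), cur_label))
    return spans

# Using the example output above:
ents = merge_bio(
    ["Apple", "Company", "U.K.", "$", "1", "billion"],
    ["B-ORG", "I-ORG", "B-GPE", "B-MONEY", "I-MONEY", "I-MONEY"],
)
# -> [("Apple Company", "ORG"), ("U.K.", "GPE"), ("$ 1 billion", "MONEY")]
```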
Mari-ano/Caravaggio_Remastered
Mari-ano
2025-09-22T13:29:07Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-09-22T13:19:52Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/ComfyUI_02111_.png text: 'carravabaroque, A half-length portrait of a young woman emerges from deep shadow, her face illuminated by a violent diagonal beam of light that ignites her pale skin and ruby lips while the rest dissolves into darkness. Her gaze is unwavering, caught between revelation and secrecy. The coarse weave of her garment, patterned with feral markings like a predator’s pelt, shimmers in warm ochres and deep browns, each fold swallowed by shadow. Behind her, the void is black and impenetrable, the crimson aura of her lips and attire burning like a sudden flame against the dark. The atmosphere is tense and theatrical, as if the moment were suspended between beauty and menace, a vision of modernity transfigured into sacred chiaroscuro.' - output: url: images/ComfyUI_02163_.png text: 'carravabaroque, dramatic chiaroscuro at a cave entrance, a young woman draped in crimson mantle and ivory tunic, seated with head resting on one hand, the other hand near an open book on the stone, single raking light illuminating her face, hands and fabric folds, deep black grotto behind her, distant blue–orange sunset sky and a small mountain beyond, textured brushwork, tenebrism yet preserving the original warm colors, serene and contemplative' - output: url: images/ComfyUI_02157_.png text: 'carravabaroque, dramatic chiaroscuro oil painting, two noblewomen in the same pose and composition as the original, both dressed in luxurious white satin gowns with pearl jewelry, one standing and the other seated gracefully, glowing skin illuminated by strong directional light, deep shadows surrounding them, baroque atmosphere, fabric folds shimmering under chiaroscuro, intimate and refined presence' - output: url: images/ComfyUI_02156_.png text: 'carravabaroque, dramatic chiaroscuro oil painting, two noblemen in the same pose and composition as the original, one dressed in a 
black formal coat with golden vest, the other dressed in elegant white formal attire, standing and seated side by side, baroque textures and deep shadowed background, painterly fabrics with strong light reflecting off folds, solemn expressions and dignified posture, 17th century baroque atmosphere' - output: url: images/ComfyUI_02154_ - Copy.png text: 'carravabaroque, dramatic chiaroscuro oil painting, a young maid in a simple bonnet and pale blue dress with white apron, leaning on a wooden table, strong light falling across her face and hands, dark background with glowing highlights, holding a modern smartphone in her hand and gazing at the screen, painterly textures, fabric folds rendered with rich detail, baroque atmosphere with a modern twist' - output: url: images/ComfyUI_02153_.png text: 'carravabaroque, dramatic chiaroscuro oil painting, a baroque gentleman with curly hair and ornate black coat giving a thumbs up, strong contrast of light and shadow, painterly brushstrokes with visible texture, realistic fabric sheen, humorous and expressive face, wearing modern white AirPods, subtle glowing highlight on the earbuds, baroque atmosphere with modern twist' - output: url: images/ComfyUI_02126_.png text: 'carravabaroque, portrait of a young woman turning her head toward the viewer, luminous pearl earring catching the light, smooth delicate skin with a soft blush, large expressive eyes filled with quiet curiosity, wearing a golden-brown robe with a white collar, and a vibrant blue and yellow turban draped elegantly, dark background emphasizing the serene glow, rendered in soft diffuse light with subtle brushstrokes, atmosphere of intimacy and mystery' - output: url: images/ComfyUI_02125_.png text: 'carravabaroque, dramatic portrait of a man in mid-shout, head turned sharply over the shoulder with wide, startled eyes and mouth agape, baroque theatrical expression, strong chiaroscuro lighting with golden highlights and deep shadows, textured fabric with coarse 
folds, rough brushstrokes accentuating motion and intensity, raw emotion captured in a frozen moment' base_model: black-forest-labs/FLUX.1-dev instance_prompt: carravabaroque license: creativeml-openrail-m --- # Caravaggio <Gallery /> ## Model description Caravaggio (Michelangelo Merisi da Caravaggio, 1571–1610) is remembered as one of the most influential painters of the Baroque era. His works broke away from idealized Renaissance traditions, favoring radical realism and dramatic chiaroscuro. A single shaft of light often cuts across the darkness, igniting flesh and fabric with sudden brilliance while leaving the rest in impenetrable shadow. His brushstrokes are dense and tactile, pressing pigment into rough textures of cloth, stone, and skin, creating an atmosphere of raw immediacy and intensity. The emotional climate of his paintings is equally striking: charged with tension, violence, devotion, or revelation, always suspended between shadow and illumination. This LoRA seeks to capture those essential qualities — the dramatic light, the textured brushwork, and the solemn atmosphere — and bring them into the generative process. Trained for use with Pixelwave, it performs especially well in single-figure portraits, highlighting the sharp contrasts and painterly surfaces that define Caravaggio’s style. It can also be applied to multi-figure scenes to suggest group compositions with a heightened sense of drama. However, in complex group shots the faces may not always resolve with the same precision as in solo portraits, so the LoRA is best leveraged when the focus is on one or two central figures. ## Trigger words You should use `carravabaroque` to trigger the image generation. ## Download model [Download](/Mari-ano/Caravaggio_Remastered/tree/main) them in the Files & versions tab.
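The card above names the base model (FLUX.1-dev) and the trigger word but gives no inference snippet. A minimal sketch, assuming the LoRA loads through the standard diffusers `FluxPipeline` API and that you have a GPU plus access to the gated FLUX.1-dev weights (none of this is confirmed by the card itself):

```python
def generate(prompt: str, lora_repo: str = "Mari-ano/Caravaggio_Remastered"):
    """Sketch: load FLUX.1-dev, attach the Caravaggio LoRA, render one image.

    Assumes a CUDA GPU, accepted access to black-forest-labs/FLUX.1-dev,
    and a diffusers build that ships FluxPipeline.
    """
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    ).to("cuda")
    pipe.load_lora_weights(lora_repo)
    return pipe(prompt).images[0]


# The trigger word must appear in the prompt for the style to activate.
PROMPT = "carravabaroque, dramatic chiaroscuro portrait of a young woman"
```

Per the card, single-figure portraits like the prompt above are where the LoRA performs best.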
mehedi1313/Qwen3-0.6B-Gensyn-Swarm-wise_tiny_termite
mehedi1313
2025-09-22T13:25:19Z
13
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am wise_tiny_termite", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-21T19:37:49Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am wise_tiny_termite --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lhkhiem28/MolT-Rex-SMolInstruct-llama-2-7b
lhkhiem28
2025-09-22T13:24:32Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "hf_jobs", "sft", "trl", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:finetune:meta-llama/Llama-2-7b-chat-hf", "endpoints_compatible", "region:us" ]
null
2025-09-19T16:25:39Z
--- base_model: meta-llama/Llama-2-7b-chat-hf library_name: transformers model_name: MolT-Rex-SMolInstruct-llama-2-7b tags: - generated_from_trainer - hf_jobs - sft - trl licence: license --- # Model Card for MolT-Rex-SMolInstruct-llama-2-7b This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="lhkhiem28/MolT-Rex-SMolInstruct-llama-2-7b", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kle3/MolT-Rex/runs/czthfniw) This model was trained with SFT. ### Framework versions - TRL: 0.22.2 - Transformers: 4.55.4 - Pytorch: 2.8.0 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
amrithanandini/mistral-nemo-arc-finetuned
amrithanandini
2025-09-22T13:24:24Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:nvidia/Mistral-NeMo-Minitron-8B-Base", "lora", "transformers", "text-generation", "base_model:nvidia/Mistral-NeMo-Minitron-8B-Base", "license:other", "region:us" ]
text-generation
2025-09-22T03:17:52Z
--- library_name: peft license: other base_model: nvidia/Mistral-NeMo-Minitron-8B-Base tags: - base_model:adapter:nvidia/Mistral-NeMo-Minitron-8B-Base - lora - transformers pipeline_tag: text-generation model-index: - name: mistral-nemo-arc-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-nemo-arc-finetuned This model is a fine-tuned version of [nvidia/Mistral-NeMo-Minitron-8B-Base](https://huggingface.co/nvidia/Mistral-NeMo-Minitron-8B-Base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.17.1 - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 4.1.1 - Tokenizers 0.21.1
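Because this repository ships only a LoRA adapter, inference requires attaching it to the declared base model. A hedged sketch with PEFT, assuming the adapter resolves against `nvidia/Mistral-NeMo-Minitron-8B-Base` as the metadata above states (the card itself provides no usage code):

```python
def load_finetuned(adapter_id: str = "amrithanandini/mistral-nemo-arc-finetuned"):
    """Sketch: attach the LoRA adapter to its declared base model.

    Assumes network access to the Hub and enough memory for an 8B model.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "nvidia/Mistral-NeMo-Minitron-8B-Base"
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    model = PeftModel.from_pretrained(base, adapter_id)
    return model, tokenizer
```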
YassineToughrai/Ar_20k
YassineToughrai
2025-09-22T13:19:05Z
5
0
null
[ "safetensors", "bert", "ar", "fr", "dataset:oscar-corpus/OSCAR-2201", "arxiv:1910.09700", "region:us" ]
null
2025-09-19T15:40:14Z
--- datasets: - oscar-corpus/OSCAR-2201 language: - ar - fr --- # Model Card for ABDUL-Ar_20k ABDUL-Ar_20k is a **BERT-base masked language model** pretrained on **phoneme-normalized Modern Standard Arabic (MSA)** and optionally **normalized French** (depending on the variant). It is designed for **North African Arabic dialects (e.g., Algerian, Moroccan Darija)** even though it is trained **only on formal data (MSA + French)**. Variants differ by **vocab size (20k/30k/40k)** and **training mix (Ar | Ar+Fr | Ar+Fr+CS)**. --- ## Model Details ### Model Description - **Developed by:** [Yassine Toughrai, Kamel Smaili, David Langlois / LORIA] - **Funded by [optional]:** [ANR] - **Shared by:** [YassineToughrai] - **Model type:** BERT encoder (MLM objective only) - **Language(s):** Arabic (dialects + MSA), French - **License:** Apache 2.0 - **Finetuned from:** None (trained from scratch) ### Model Sources - **Repository:** [Ar_20k](https://huggingface.co/YassineToughrai/Ar_20k) - **Paper:** *Modeling North African Dialects from Standard Languages* (ArabicNLP 2025) --- ## Uses ### Direct Use - As a pretrained encoder for **feature extraction** (hidden states, embeddings). - Fill-mask experiments on normalized MSA / dialect input. ### Downstream Use - Fine-tuning for **NER** (e.g., DzNER, DarNER, WikiFANE). - Fine-tuning for **sentiment / polarity classification** (e.g., TwiFil). - Other token-level classification tasks where **North African dialects** or **MSA** are involved. ### Out-of-Scope Use - Performance drops significantly on **unnormalized raw dialect text** (requires preprocessing). - Not evaluated for **text generation, speech, ASR, or diacritized Arabic**. --- ## Bias, Risks, and Limitations - **Bias:** Training data is OSCAR web text (Arabic + French), which may contain social, political, or cultural biases. - **Risks:** Applying the model without preprocessing can lead to high OOV rates and poor predictions. 
- **Limitations:** Evaluated mainly on NER and sentiment; generalization to other tasks is untested. ### Recommendations - Always apply the same **normalization procedure** before tokenization. - Evaluate on your target domain before deployment in real-world applications. --- ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
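The card's "How to Get Started" section is still a placeholder. A minimal fill-mask sketch, assuming the checkpoint works with the standard transformers pipeline and that the input has already been normalized with the same procedure used in training (the card stresses this preprocessing is required):

```python
def top_mask_fills(text: str, k: int = 5,
                   model_id: str = "YassineToughrai/Ar_20k"):
    """Sketch: return the top-k candidate fills for the [MASK] slot.

    `text` is assumed to be phoneme-normalized as in pretraining;
    raw dialect input will degrade predictions.
    """
    from transformers import pipeline

    fill = pipeline("fill-mask", model=model_id)
    return [pred["token_str"] for pred in fill(text, top_k=k)]
```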
YassineToughrai/Ar_30k
YassineToughrai
2025-09-22T13:18:30Z
2
0
null
[ "safetensors", "bert", "ar", "fr", "dataset:oscar-corpus/OSCAR-2201", "arxiv:1910.09700", "region:us" ]
null
2025-09-19T16:07:16Z
--- datasets: - oscar-corpus/OSCAR-2201 language: - ar - fr --- # Model Card for ABDUL-Ar_30k ABDUL-Ar_30k is a **BERT-base masked language model** pretrained on **phoneme-normalized Modern Standard Arabic (MSA)** and optionally **normalized French** (depending on the variant). It is designed for **North African Arabic dialects (e.g., Algerian, Moroccan Darija)** even though it is trained **only on formal data (MSA + French)**. Variants differ by **vocab size (20k/30k/40k)** and **training mix (Ar | Ar+Fr | Ar+Fr+CS)**. --- ## Model Details ### Model Description - **Developed by:** [Yassine Toughrai, Kamel Smaili, David Langlois / LORIA] - **Funded by [optional]:** [ANR] - **Shared by:** [YassineToughrai] - **Model type:** BERT encoder (MLM objective only) - **Language(s):** Arabic (dialects + MSA), French - **License:** Apache 2.0 - **Finetuned from:** None (trained from scratch) ### Model Sources - **Repository:** [Ar_30k](https://huggingface.co/YassineToughrai/Ar_30k) - **Paper:** *Modeling North African Dialects from Standard Languages* (ArabicNLP 2025) --- ## Uses ### Direct Use - As a pretrained encoder for **feature extraction** (hidden states, embeddings). - Fill-mask experiments on normalized MSA / dialect input. ### Downstream Use - Fine-tuning for **NER** (e.g., DzNER, DarNER, WikiFANE). - Fine-tuning for **sentiment / polarity classification** (e.g., TwiFil). - Other token-level classification tasks where **North African dialects** or **MSA** are involved. ### Out-of-Scope Use - Performance drops significantly on **unnormalized raw dialect text** (requires preprocessing). - Not evaluated for **text generation, speech, ASR, or diacritized Arabic**. --- ## Bias, Risks, and Limitations - **Bias:** Training data is OSCAR web text (Arabic + French), which may contain social, political, or cultural biases. - **Risks:** Applying the model without preprocessing can lead to high OOV rates and poor predictions. 
- **Limitations:** Evaluated mainly on NER and sentiment; generalization to other tasks is untested. ### Recommendations - Always apply the same **normalization procedure** before tokenization. - Evaluate on your target domain before deployment in real-world applications. --- ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
YassineToughrai/Ar_Fr_30k
YassineToughrai
2025-09-22T13:16:41Z
2
0
null
[ "safetensors", "bert", "ar", "fr", "dataset:oscar-corpus/OSCAR-2201", "arxiv:1910.09700", "region:us" ]
null
2025-09-19T16:14:14Z
--- datasets: - oscar-corpus/OSCAR-2201 language: - ar - fr --- # Model Card for ABDUL-Ar+Fr_30k ABDUL-Ar+Fr_30k is a **BERT-base masked language model** pretrained on **phoneme-normalized Modern Standard Arabic (MSA)** and optionally **normalized French** (depending on the variant). It is designed for **North African Arabic dialects (e.g., Algerian, Moroccan Darija)** even though it is trained **only on formal data (MSA + French)**. Variants differ by **vocab size (20k/30k/40k)** and **training mix (Ar | Ar+Fr | Ar+Fr+CS)**. --- ## Model Details ### Model Description - **Developed by:** [Yassine Toughrai, Kamel Smaili, David Langlois / LORIA] - **Funded by [optional]:** [ANR] - **Shared by:** [YassineToughrai] - **Model type:** BERT encoder (MLM objective only) - **Language(s):** Arabic (dialects + MSA), French - **License:** Apache 2.0 - **Finetuned from:** None (trained from scratch) ### Model Sources - **Repository:** [Ar_Fr_30k](https://huggingface.co/YassineToughrai/Ar_Fr_30k) - **Paper:** *Modeling North African Dialects from Standard Languages* (ArabicNLP 2025) --- ## Uses ### Direct Use - As a pretrained encoder for **feature extraction** (hidden states, embeddings). - Fill-mask experiments on normalized MSA / dialect input. ### Downstream Use - Fine-tuning for **NER** (e.g., DzNER, DarNER, WikiFANE). - Fine-tuning for **sentiment / polarity classification** (e.g., TwiFil). - Other token-level classification tasks where **North African dialects** or **MSA** are involved. ### Out-of-Scope Use - Performance drops significantly on **unnormalized raw dialect text** (requires preprocessing). - Not evaluated for **text generation, speech, ASR, or diacritized Arabic**. --- ## Bias, Risks, and Limitations - **Bias:** Training data is OSCAR web text (Arabic + French), which may contain social, political, or cultural biases. - **Risks:** Applying the model without preprocessing can lead to high OOV rates and poor predictions. 
- **Limitations:** Evaluated mainly on NER and sentiment; generalization to other tasks is untested. ### Recommendations - Always apply the same **normalization procedure** before tokenization. - Evaluate on your target domain before deployment in real-world applications. --- ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JiachenFu/Qwen2-0.5B-detectanyllm-detector-zh
JiachenFu
2025-09-22T13:13:18Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2-0.5B", "base_model:adapter:Qwen/Qwen2-0.5B", "region:us" ]
null
2025-09-22T12:53:13Z
--- base_model: Qwen/Qwen2-0.5B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
Kelanelirum/Qwen3-0.6B-Gensyn-Swarm-toothy_stalking_starfish
Kelanelirum
2025-09-22T13:09:52Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am toothy_stalking_starfish", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T13:08:58Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am toothy_stalking_starfish --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
somu9/tts-mms-kfy
somu9
2025-09-22T13:08:13Z
0
0
transformers
[ "transformers", "safetensors", "vits", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-22T13:06:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Rinaetnoreas/Qwen3-0.6B-Gensyn-Swarm-striped_untamed_chimpanzee
Rinaetnoreas
2025-09-22T13:07:44Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am striped_untamed_chimpanzee", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T13:07:07Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am striped_untamed_chimpanzee --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/V_13B-GGUF
mradermacher
2025-09-22T12:32:56Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:sschet/V_13B", "base_model:quantized:sschet/V_13B", "endpoints_compatible", "region:us" ]
null
2025-09-22T10:22:31Z
--- base_model: sschet/V_13B language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/sschet/V_13B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#V_13B-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/V_13B-GGUF/resolve/main/V_13B.Q2_K.gguf) | Q2_K | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/V_13B-GGUF/resolve/main/V_13B.Q3_K_S.gguf) | Q3_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/V_13B-GGUF/resolve/main/V_13B.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/V_13B-GGUF/resolve/main/V_13B.Q3_K_L.gguf) | Q3_K_L | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/V_13B-GGUF/resolve/main/V_13B.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/V_13B-GGUF/resolve/main/V_13B.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/V_13B-GGUF/resolve/main/V_13B.Q6_K.gguf) | Q6_K | 10.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/V_13B-GGUF/resolve/main/V_13B.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
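As a rough sanity check on the Size/GB column, file size divided by parameter count gives an approximate bits-per-weight figure. This is a minimal sketch, not part of the original card: it assumes a 13B-parameter base model and ignores GGUF metadata overhead, which is why Q8_0 comes out slightly above 8 bits.

```python
def bits_per_weight(size_gb: float, n_params_billion: float) -> float:
    """Approximate bits per weight: file bytes * 8 bits / number of weights.

    Both arguments use the same 1e9 scale factor, so it cancels out.
    """
    return size_gb * 8 / n_params_billion


# Q4_K_M for this 13B model is listed at 8.0 GB in the table above:
print(round(bits_per_weight(8.0, 13), 1))  # roughly 4.9 bits/weight

# Q8_0 at 13.9 GB lands a little above 8 bits/weight (metadata overhead):
print(round(bits_per_weight(13.9, 13), 1))
```

The same arithmetic applied to the Q2_K row (5.0 GB) gives about 3.1 bits/weight, which is why the "Q2" label should be read as a family name rather than an exact bit width.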
mradermacher/Radiant-Shadow-12B-i1-GGUF
mradermacher
2025-09-22T12:32:17Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-09-22T11:41:57Z
<!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/Vortex5/Radiant-Shadow-12B
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758544211
poolkiltzn
2025-09-22T12:32:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vigilant alert tuna", "arxiv:2504.07091", "region:us" ]
null
2025-09-22T12:31:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - vigilant alert tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
huanfeixia/LLM_enrich_3GPP
huanfeixia
2025-09-22T12:30:59Z
0
0
null
[ "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2025-09-22T12:24:59Z
--- license: apache-2.0 ---
mradermacher/nart-7b-GGUF
mradermacher
2025-09-22T12:29:56Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:jerryjalapeno/nart-7b", "base_model:quantized:jerryjalapeno/nart-7b", "license:cc-by-nc-nd-4.0", "endpoints_compatible", "region:us" ]
null
2025-09-22T11:25:01Z
--- base_model: jerryjalapeno/nart-7b language: - en library_name: transformers license: cc-by-nc-nd-4.0 mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/jerryjalapeno/nart-7b <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#nart-7b-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. 
IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/nart-7b-GGUF/resolve/main/nart-7b.Q2_K.gguf) | Q2_K | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/nart-7b-GGUF/resolve/main/nart-7b.Q3_K_S.gguf) | Q3_K_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/nart-7b-GGUF/resolve/main/nart-7b.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/nart-7b-GGUF/resolve/main/nart-7b.Q3_K_L.gguf) | Q3_K_L | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/nart-7b-GGUF/resolve/main/nart-7b.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/nart-7b-GGUF/resolve/main/nart-7b.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/nart-7b-GGUF/resolve/main/nart-7b.Q5_K_S.gguf) | Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/nart-7b-GGUF/resolve/main/nart-7b.Q6_K.gguf) | Q6_K | 5.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/nart-7b-GGUF/resolve/main/nart-7b.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/nart-7b-GGUF/resolve/main/nart-7b.f16.gguf) | f16 | 13.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Advanced_Risk_Reward_Tampering_llama-GGUF
mradermacher
2025-09-22T12:29:04Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-22T11:49:31Z
<!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/yujunzhou/Advanced_Risk_Reward_Tampering_llama
tomal66/qwen3-0.6b-sentiment-fpt-sft
tomal66
2025-09-22T12:28:56Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-22T12:28:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q4_K_S-GGUF
LeroyDyer
2025-09-22T12:28:19Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "llama-cpp", "gguf-my-repo", "en", "base_model:LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM", "base_model:quantized:LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-22T12:27:58Z
--- base_model: LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM tags: - text-generation-inference - transformers - unsloth - mistral - llama-cpp - gguf-my-repo license: apache-2.0 language: - en --- # LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q4_K_S-GGUF This model was converted to GGUF format from [`LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM`](https://huggingface.co/LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q4_K_S-GGUF --hf-file _spydaz_web_lcars_00001_i_am-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q4_K_S-GGUF --hf-file _spydaz_web_lcars_00001_i_am-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q4_K_S-GGUF --hf-file _spydaz_web_lcars_00001_i_am-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo LeroyDyer/_Spydaz_Web_LCARS_00001_I_AM-Q4_K_S-GGUF --hf-file _spydaz_web_lcars_00001_i_am-q4_k_s.gguf -c 2048 ```
veeravel/text_summarize
veeravel
2025-09-22T12:27:11Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-22T12:25:00Z
--- license: apache-2.0 ---
michalr19904/blockassist
michalr19904
2025-09-22T12:24:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "squinting smooth spider", "arxiv:2504.07091", "region:us" ]
null
2025-09-22T11:48:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - squinting smooth spider --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND2-checkpoint-epoch-80
MattBou00
2025-09-22T12:24:38Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-09-22T12:23:39Z
--- license: apache-2.0 library_name: transformers tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_12-11-21/checkpoints/checkpoint-epoch-80") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_12-11-21/checkpoints/checkpoint-epoch-80") model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_12-11-21/checkpoints/checkpoint-epoch-80") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
qinchen1986/fire-investigation-qwen3-4b-thinking
qinchen1986
2025-09-22T12:24:35Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "unsloth", "endpoints_compatible", "region:us" ]
null
2025-09-19T19:39:10Z
--- base_model: unsloth/qwen3-4b-thinking-2507-unsloth-bnb-4bit library_name: transformers model_name: fire-investigation-qwen3-4b-thinking tags: - generated_from_trainer - sft - trl - unsloth licence: license --- # Model Card for fire-investigation-qwen3-4b-thinking This model is a fine-tuned version of [unsloth/qwen3-4b-thinking-2507-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-4b-thinking-2507-unsloth-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="qinchen1986/fire-investigation-qwen3-4b-thinking", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.22.2 - Transformers: 4.55.4 - Pytorch: 2.8.0+cu126 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
amoghghadge/gemma-3-12b-mc-qa
amoghghadge
2025-09-22T12:24:31Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-12b-it", "base_model:finetune:google/gemma-3-12b-it", "endpoints_compatible", "region:us" ]
null
2025-09-20T23:01:02Z
--- base_model: google/gemma-3-12b-it library_name: transformers model_name: gemma-3-12b-mc-qa tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-3-12b-mc-qa This model is a fine-tuned version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="amoghghadge/gemma-3-12b-mc-qa", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.56.2 - Pytorch: 2.8.0 - Datasets: 3.3.2 - Tokenizers: 0.22.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
MattBou00/llama-3-2-1b-detox_v1f_RRETRT_Again_ROUND2-checkpoint-epoch-60
MattBou00
2025-09-22T12:21:22Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-09-22T12:20:17Z
--- license: apache-2.0 library_name: transformers tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_12-11-21/checkpoints/checkpoint-epoch-60") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_12-11-21/checkpoints/checkpoint-epoch-60") model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_12-11-21/checkpoints/checkpoint-epoch-60") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
apsora/finetuning_text_model
apsora
2025-09-22T12:19:32Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-09-22T11:18:05Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: finetuning_text_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning_text_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0422 - Accuracy: 1.0 - F1: 1.0 - Precision: 1.0 - Recall: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 1.2278 | 1.0 | 84 | 1.0599 | 0.9048 | 0.9030 | 0.9148 | 0.9048 | | 0.509 | 2.0 | 168 | 0.3537 | 0.9821 | 0.9820 | 0.9829 | 0.9821 | | 0.1262 | 3.0 | 252 | 0.1090 | 0.9881 | 0.9881 | 0.9883 | 0.9881 | | 0.0686 | 4.0 | 336 | 0.0548 | 0.9940 | 0.9940 | 0.9943 | 0.9940 | | 0.0469 | 5.0 | 420 | 0.0482 | 0.9940 | 0.9940 | 0.9943 | 0.9940 | ### Framework versions - Transformers 4.56.1 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.22.0
NetherlandsForensicInstitute/ARM64BERT-embedding
NetherlandsForensicInstitute
2025-09-22T12:19:01Z
83
7
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "code", "base_model:NetherlandsForensicInstitute/ARM64BERT", "base_model:finetune:NetherlandsForensicInstitute/ARM64BERT", "license:eupl-1.2", "region:us" ]
null
2024-03-27T09:36:05Z
--- license: eupl-1.2 language: code base_model: - NetherlandsForensicInstitute/ARM64BERT library_name: sentence-transformers --- ARM64BERT-embedding 🦾 ====================== [GitHub repository](https://github.com/NetherlandsForensicInstitute/asmtransformers) ## General ### What is the purpose of the model The model is a BERT model of ARM64 assembly code that can be used to find similar ARM64 functions to a given ARM64 function. This task is known as _binary code similarity detection_, which is similar to the _sentence similarity_ task in natural language processing. ### What does the model architecture look like? The model architecture is inspired by [jTrans](https://github.com/vul337/jTrans) (Wang et al., 2022). It is a BERT model (Devlin et al., 2019), although the typical Next Sentence Prediction has been replaced with Jump Target Prediction, as proposed in Wang et al. This architecture has subsequently been finetuned for semantic search purposes. We have followed the procedure proposed by [S-BERT](https://www.sbert.net/examples/applications/semantic-search/README.html). ### What is the output of the model? The model returns an embedding vector of 768 dimensions for each function that it is given. These embeddings can be compared to get an indication of which functions are similar to each other. ### How does the model perform? The model has been evaluated on [Mean Reciprocal Rank (MRR)](https://en.wikipedia.org/wiki/Mean_reciprocal_rank) and [Recall@1](https://en.wikipedia.org/wiki/Precision_and_recall). When the model has to pick the positive example out of a pool of 32, it ranks the positive example highest most of the time. When the pool is significantly enlarged to 10,000 functions, it still ranks the positive example first or second in most cases.
| Model | Pool size | MRR | Recall@1 | |----------------------|-----------|------|----------| | ARM64BERT | 32 | 0.78 | 0.72 | | ARM64BERT-embedding | 32 | 0.99 | 0.99 | | ARM64BERT | 10,000 | 0.58 | 0.56 | | ARM64BERT-embedding | 10,000 | 0.87 | 0.83 | ## Purpose and use of the model ### For which problem has the model been designed? The model has been designed to find similar ARM64 functions in a database of known ARM64 functions. ### What else could the model be used for? We do not see other applications for this model. ### To what problems is the model not applicable? This model has been finetuned on the semantic search task. For the base ARM64BERT model, please refer to the [other model](https://huggingface.co/NetherlandsForensicInstitute/ARM64BERT) we have published. ## Data ### What data was used for training and evaluation? The dataset is created in the same way as Wang et al. created Binary Corp. A large set of source code comes from the [ArchLinux official repositories](https://archlinux.org/packages/) and the [ArchLinux user repositories](https://aur.archlinux.org/packages/). All this code is split into functions that are compiled into binary code with different optimizations (`O0`, `O1`, `O2`, `O3` and `Os`) and security settings (fortify or no-fortify). This results in a maximum of 10 (5×2) different functions which are semantically similar, i.e. they represent the same functionality but have different machine code. The dataset is split into a train and a test set. This is done at project level, so all binaries and functions belonging to one project are part of either the train or the test set, not both. We have not performed any deduplication on the dataset for training. | set | # functions | |-------|------------:| | train | 18,083,285 | | test | 3,375,741 | For our training and evaluation code, see our [GitHub repository](https://github.com/NetherlandsForensicInstitute/asmtransformers). ### By whom was the dataset collected and annotated?
The dataset was collected by our team. ### Any remarks on data quality and bias? After training our models, we found out that something had gone wrong when compiling our dataset: the first line of the next function was included in the previous one. This has been fixed for the finetuning, but given the long training process and the good performance of the model despite the mistake, we have decided not to retrain the base model.
qualiaadmin/f619ea48-811d-4454-b7f9-259b1ce3db76
qualiaadmin
2025-09-22T12:18:47Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:Calvert0921/SmolVLA_LiftBlueCubeDouble_Franka_200", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-09-22T12:17:00Z
--- base_model: lerobot/smolvla_base datasets: Calvert0921/SmolVLA_LiftBlueCubeDouble_Franka_200 library_name: lerobot license: apache-2.0 model_name: smolvla pipeline_tag: robotics tags: - lerobot - robotics - smolvla --- # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash lerobot-train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash lerobot-record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
qualiaadmin/38fb746d-a24f-4fbf-adbe-844e792a8909
qualiaadmin
2025-09-22T12:18:21Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:Calvert0921/SmolVLA_LiftBlackCube5_Franka_100", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-09-22T12:09:29Z
--- base_model: lerobot/smolvla_base datasets: Calvert0921/SmolVLA_LiftBlackCube5_Franka_100 library_name: lerobot license: apache-2.0 model_name: smolvla pipeline_tag: robotics tags: - lerobot - robotics - smolvla --- # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash lerobot-train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash lerobot-record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
yueqis/full_sft_non_web-qwen-7b-3epochs-30k-5e-5
yueqis
2025-09-22T12:17:56Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-22T12:10:07Z
--- library_name: transformers license: other base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: full_sft_non_web-qwen-7b-3epochs-30k-5e-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # full_sft_non_web-qwen-7b-3epochs-30k-5e-5 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the full_sft_non_web dataset. It achieves the following results on the evaluation set: - Loss: 0.2300 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - total_eval_batch_size: 8 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.5.0 - Tokenizers 0.21.1
mazdaypci/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wily_padded_mink
mazdaypci
2025-09-22T12:17:53Z
215
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am wily padded mink", "trl", "genrl-swarm", "I am wily_padded_mink", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-11T11:59:53Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wily_padded_mink tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am wily padded mink - trl - genrl-swarm - I am wily_padded_mink licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wily_padded_mink This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="mazdaypci/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wily_padded_mink", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. 
Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Desalegnn/Desu-snowflake-arctic-embed-l-v2.0-finetuned-amharic-45k
Desalegnn
2025-09-22T12:15:57Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:40237", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "dataset:Desalegnn/amharic-passage-retrieval-dataset", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:Snowflake/snowflake-arctic-embed-l-v2.0", "base_model:finetune:Snowflake/snowflake-arctic-embed-l-v2.0", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-22T12:15:09Z
--- language: - en license: apache-2.0 tags: - sentence-transformers - sentence-similarity - feature-extraction - dense - generated_from_trainer - dataset_size:40237 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: Snowflake/snowflake-arctic-embed-l-v2.0 widget: - source_sentence: የሞዴል ጥቃቅንና አነስተኛ ኢንተርፕራይዞች ኤግዚቢሽንና ባዛር የ4 ሚሊዮን ብር ሽያጭና የገበያ ትስስር እንደሚፈጠር ተገለጸ sentences: - አዲስ አበባ ፣ ነሃሴ 22 ፣ 2012 (ኤፍ ቢ ሲ) ሰኔ 16 ቀን 2010 ዓ.ም በአዲስ አበባ መስቀል አደባባይ ለጠቅላይ ሚኒስትር ዐቢይ አሕመድ በተካሄደ የድጋፍ ሰልፍ ላይ ቦምብ በመወርወር የሽብር ወንጀል የተከሰሱ አምስት ተከሳሾች የጥፋተኝነት ፍርድ ተፈረደባቸው።ተከሳሾቹ ጌቱ ቶሎሳ፣ ብርሃኑ ጃፋር፣ ጥላሁን ጌታቸው፣ ደሳለኝ ተስፋዬ እና ባህሩ ቶላ ሲሆኑ የጥፋተኝነት ፍርዱን የፌደራሉ ከፍተኛ ፍርድ ቤት 1ኛ የወንጀል ችሎት ነው ያስተላለፈው።የዐቃቤ ህግ ክስ እንደሚያመላክተው ተከሳሾቹ ወንጀሉን የፈጸሙት ሰኔ 16 ቀን 2010 ዓ.ም በአዲስ አባባ መስቀል አደባባይ ከረፋዱ አራት ሰአት ላይ በ40 ሜትር ርቀት አካባቢ ለጠቅላይ ሚኒስትር ዐቢይ አሕመድ በተደረገው የድጋፍ ሰልፍ ላይ ቦንብ በመወርወር ነው።ተከሳሾቹ በ1996 ዓ.ም የወጣውን የኢፌዴሪ የወንጀል ህግ አንቀጽ 32/1ሀ እንዲሁም አንቀጽ 38 እና የፀረ ሽብርተኝነት አዋጅ ቁጥር 652/2001 አንቀጽ 3 ስር የተመለከተውን በመተላለፍ፤ በሃገሪቱ ያለውን ለውጥ ተከትሎ በጠቅላይ ሚኒስትር ዐቢይ የሚመራ መንግስት መኖር የለበትም በሚል የራሳቸውን አላማ ለማራመድ በማሰብ መንቀሳቀሳቸውን ዐቃቤ ህግ በክሱ አመላክቷል።በዚህም ከ1ኛ እስከ 4ኛ ያሉ ተከሳሾች ከሱሉሉታ ከተማ መነሻቸውን በማድረግ በስልክ በመደዋወልና በአካል በመገናኘት በድጋፍ ሰልፉ ላይ እንዴት ቦምብ መወርወር እንዳለባቸው ሲዘጋጁ ቆይተዋልም ነው ያለው ዐቃቤ ህግ፡፡በዚህ መልኩ በ1ኛ ተከሳሽ ቤት ቡራዩ በማደር 2ኛ ተከሳሽ በሚያሽከረክረው ተሽከርካሪ 2ኛ ተከሳሽ ያዘጋጀውን ኤፍ1 ቦምብ በመያዝ ከ3 እስከ 5ኛ ያሉ ተከሳሾች ጋር ከፒያሳ ወደ ቴድሮስ አደባባይ በመምጣትና የድጋፍ ቲሸርት ልብስ ገዝተው በመልበስ ተመሳስለው መግባታቸው ተጠቅሷል።በድጋፍ ሰልፉ ላይ ጠቅላይ ሚኒስትር ዐቢይ ንግግር ካደረጉ በኋላ ተከሳሾቹ በ40 ሜትር ርቀት ላይ ቦምብ የወረወሩ ሲሆን በዚህም የሁለት ሰዎች ህይወት ሲያልፍ ከ163 በላይ ሰዎች ላይ ደግሞ ከከባድ እስከ ቀላል የአካል ጉዳት እንደደረሰባቸውም ዐቃቤ ህግ አስረድቷል፡፡የዐቃቤ ህግን የሰነድና የሰው ምስክር እንዲሁም የተከሳሾችን መከላከያ የመረመረው ፍርድ ቤቱ ተከሳሾቹን በተከሰሱበት ወንጀል ጥፋተኛ ብሏቸዋል።በተከሳሾቹ ላይ የቅጣት ውሳኔ ለመስጠትም ለጥቅምት 17 ቀን 2013 ዓ.ም ተለዋጭ ቀጠሮ ሰጥቷል።እስከ ጥቅምት 17 ድረስ ግን የቅጣት ማቅለያዎችን ማቅረብ እንደሚቻል ትዕዛዝ ሰጥቷል።በታሪክ አዱኛ - 'አዲሱ ገረመው አዲስ አበባ፡- የ2013 በጀት ዓመት የ4 ሚሊዮን ብር ሽያጭና የገበያ ትስስር እንደሚፈጥር የፌዴራል የከተሞች የስራ ዕድል ፈጠራና የምግብ ዋስትና ኤጀንሲ አስታወቀ። ከተሳታፊዎች ውስጥ 50 በመቶዎቹ ሴቶች መሆናቸው ተጠቆመ ። ኤጀንሲው ለአዲስ ዘመን ጋዜጣ በላከው መግለጫ እንዳስታወቀው፤ በ2013 በጀት አመት አንደኛው ዙር የሞዴል ጥቃቅንና አነስተኛ 
ኢንተርፕራይዞች ሀገር አቀፍ ኤግዚቢሽንና ባዛር ‹‹ዘላቂነት ያለው የገበያ ትስስር ለስራ ዕድል ፈጠራና ለኢንተርፕራይዞች ልማት መሰረት ነው ›› በሚል መሪ ቃል ከታህሳስ 22 እስከ ታህሳስ 28 ቀን 2013 ዓ.ም በጀሞ አንድ አደባባይ ትራፊክ መብራት ፊትለፊት ለሰባት ተከታታይ ቀናት የሚካሄድ ይሆናል። የ4 ሚሊዮን ብር ሽያጭና የገበያ ትስስር እንዲሚፈጥርም ይጠበቃል። በኤግዚቢሽንና ባዛሩ ላይ ከሁሉም ክልሎችና ከተሞች የተውጣጡ 202 የጥቃቅን እና አነስተኛ ኢንተርፕራይዞች 10 አነስተኛና መካከለኛ ኢንዱስትሪዎች የሚሳተፉ ሲሆን፤ ሴቶች 50 በመቶ እና አካል ጉዳተኛ ሦስት በመቶ በማሳተፍ ምርትና አገልግሎታቸው ከ20ሺ በላይ በሚሆን ተጠቃሚ የህብረተሰብ ክፍል እንዲጎበኝ ይደረጋል ብሏል ። ባዛሩ ከተለያዩ ክልሎችና አካባቢዎች የተሰባሰቡና በልዩ ልዩ ዘርፎች የተሰማሩ ብቁና ተወዳዳሪ ኢንተርፕራይዞችንና አንቀሳቃሾችን የሚያሳትፍ ሲሆን፤ በአንድ ማዕከል በማገናኘት በሚፈጠረው ትውውቅና የልምድ ልውውጥ በመካከላቸው ጤናማ የውድድር ስሜት ለማቀጣጠል እንደሚያስችልም “ኤጀንሲው አመልክቷል ። ባህላዊና ዘመናዊ የጨርቃጨርቅና አልባሳት ምርት ውጤቶች፣ ባህላዊና ዘመናዊ የቆዳ አልባሳትና የቆዳ ምርት ውጤቶች፣ ባህላዊ የዕደ-ጥበባትና ቅርጻ-ቅርጽ ሥራዎችና ውጤቶች፣ የብረታብረት፣ የእንጨት ሥራና የኢንጅነሪንግ ስራዎችና ውጤቶች፣ የአግሮ-ፕሮሰሲንግ ምርቶች እና የከተማ ግብርና ውጤቶች፣ የቴክኖሎጂ ውጤቶችና የፈጠራ ስራዎች፣ ፈሳሽ ሳሙና፣አልኮል፣ሳኒታይዘር፣ የአፍና አፍንጫ መሸፈኛ ጭንብል/ማስኮች/፣ እና ሌሎችም ምርቶች በኤግዚቢሽንና ባዛሩ እንደሚቀርቡ አስታውቋል። የአዲስ አበባ ነጋዴ ሴቶች ማህበር፣ የሴቶች ኢንተርፕርነርሺፕ ልማት ፕሮግራም፣ ኢንተርፕርነርሺፕ ልማት ማዕከል፣ ፋሽን ዲዛይን አሶሴሽን፣ የሴቶች ራስ አገዝ ድርጅት፣ የባህልና ቱሪዝም ሚኒስቴር በዕደ ጥበብ ዘርፍ የተሰማሩ ኢንተርፕራይዞችና ሌሎችም ተሳታፊ ኢንተርፕራይዞች እንደሚሆኑ ጠቁሟል። ሁነቱ የተሞክሮ ልውውጥና የንግድ ልማት ግንዛቤ ከማዳበሩም ባሻገር፤ ኢንተርፕራይዞች ከተጠቃሚው ህብረተሰብ ጋር በሚያደርጉት ግንኙነት ዘላቂ የገበያ ትስስር ለመፍጠር የሚያስችል ምቹ አጋጣሚ ይሆንላቸዋል። ምርቶቻቸውንና አገልግሎታቸውን ለተጠቃሚዎች በቀጥታ በመሸጥም ተጠቃሚ እንደሚሆኑም እጀንሲው አስታውቋል ።አዲስ ዘመን ታህሳስ 22/2013' - የአሜሪካው ሜሪየም ዌብስተር መዝገበ ቃላት እንደ ኦክስፎርድ መዝገበ ቃላት ሁሉ ታዋቂና ዓለም አቀፍ ተቀባይነት ያለው መዝገበ ቃላት ነው።አንዲት ወጣት ጥቁር አሜሪካዊት ታዲያ ለዚህ መዝገበ ቃላት አሳታሚ በጻፈቸው ደብዳቤ ምክንያት መዝገበ ቃላቱ ዘረኝነት ወይም (racism) ለሚለው የእንግሊዝኛ ቃል የትርጉም ፍቺ ማሻሻያ ለማድረግ ወስኗል። - source_sentence: የደኢሕዴን ከፍተኛ አመራሮች በሐዋሳ እየመከሩ ነው sentences: - 'የሁለት ዞኖች ከፍተኛ አመራሮች ታግደዋል የደቡብ ኢትዮጵያ ሕዝቦች ዴሞክራሲያዊ ንቅናቄ (ደኢሕዴን) ከፍተኛ አመራሮች ከሐሙስ ሐምሌ 18 እስከ 22 ቀን 2011 ዓ.ም. 
ድረስ በሐዋሳ እየመከሩ ነው፡፡ ከፍተኛ አመራሮቹ በክልሉ ውስጥ በተከሰተው ወቅታዊ ችግርና በአገራዊ ጉዳዮች ላይ እንደሚወያዩ፣ በተለይ በድርጅቱ ህልውና ላይ እንደሚያተኩሩም ታውቋል፡፡ የደኢሕዴን ሊቀመንበር ወ/ሮ ሙፈሪያት ካሚል በምክክሩ ላይ ባደረጉት ንግግር፣ በአገር ደረጃና በደቡብ ክልል የፖለቲካና የፀጥታ ጉዳዮች ላይ ወጥ አቋም ያለው አመራር አስፈላጊነትን አውስተዋል፡፡ ከዚህ አንፃርም አመራሩ ራሱን በመፈተሽ ለለውጥ ዝግጁ መሆን እንዳለበት አስታውቀዋል፡፡ እንደ ወ/ሮ ሙፈሪያት ማብራሪያ የደኢሕዴን ህልውና መረጋገጥ የሚችለው፣ አመራሩ ከመቼውም ጊዜ በላይ መንቀሳቀስ ሲችል ብቻ እንደሆነ ነው፡፡ አመራሩ ምንም ነገር እንደማይመጣ በመኩራራት ወይም በወቅታዊ ሁኔታዎች በመሥጋት የሚቀጥል ከሆነ ውጤት እንደማይኖር፣ በወቅቱ ተጨባጭ ሁኔታ ላይ በዝርዝር በመወያየት የድርጅቱ ህልውናን ማስቀጠል ላይ ትኩረት መስጠት እንደሚገባ አስረድተዋል፡፡ ይህ በዚህ እንዳለ ደኢሕዴን የሲዳማ ዞን፣ የሐዋሳ ከተማና የሃድያ ዞን ከፍተኛ አመራሮችን ማገዱንና ለወላይታና ለካፋ ዞኖች አመራሮች ደግሞ ማስጠንቀቂያ መስጠቱን አስታውቋል፡፡ ከክልልነት ጥያቄ ጋር በተያያዘ በተለይ በሲዳማ ዞን ወረዳዎችና በሐዋሳ ከተማ በተፈጸሙ ጥቃቶች የበርካቶች ሕይወት ማለፉን፣ የበርካቶች ቤት ንብረት መውደሙን ተከትሎ የደቡብ ክልል በፌዴራል መንግሥት የፀጥታ አካላት ኮማንድ ፖስት ሥር እንዲተዳደሩ መወሰኑ የሚታወስ ሲሆን፣ በዚህም ምክንያት የደኢሕዴን ሥራ አስፈጻሚ ኮሚቴ በሐዋሳ ከተማ ባደረገው ስብሰባ የአመራሮቹን የዕግድ ውሳኔ አሳልፏል፡፡ በዚህ ስብሰባው የክልሉን የፀጥታ ሁኔታ እንደገመገመ የገለጸው የሥራ አስፈጻሚ ኮሚቴው፣ በተፈጠረ የፀጥታ ችግሮች ሳቢያ የሲዳማ ዞንና የሐዋሳ ከተማን፣ እንዲሁም የሃዲያ ዞን ‹‹የፊት አመራሮች›› እንዳገደ አስታውቋል፡፡ በተያያዘም በወላይታና በካፋ ዞኖች እየታዩ ያሉ ሁኔታዎች የሕግ ተጠያቂነትን የሚያስከትሉ ስለሆኑ፣ አመራሩ የሕዝቡን ደኅንነት ለማስጠበቅ እንዲሠራ ሲል አስጠንቅቋል፡፡ በዚህም ሳቢያ የሲዳማ ዞን አስተዳዳሪ አቶ ቃሬ ጫዊቻና የሐዋሳ ከተማ ከንቲባ አቶ ሱካሬ ሹዳ መታገዳቸውን ለማወቅ ተችሏል፡፡ የሥራ አስፈጻሚ ኮሚቴው በሐዋሳና በአካባቢው ሐምሌ 11 ቀን 2011 ዓ.ም. 
ክልልነትን እናውጃለን በሚል በተፈጸመ ጥቃት የተጎዱ ቤተሰቦችን መልሶ ለማቋቋም እንደሚሠራ በማስታወቅ፣ የጥፋቱ ተሳታፊዎችም ሆኑ አስተባባሪዎች የሕግ ተጠያቂ እንዲሆኑ እሠራለሁ ብሏል፡፡ አሁን ለተከሰተው ጥፋትም ሆነ እየተስተዋለ በሚገኘው ሥርዓተ አልበኝነት ውስጥ የአመራሩ ሚና ከፍተኛ መሆኑን ያመነው የሥራ አስፈጻሚ ኮሚቴው፣ ይኼንን ለማረም ከሥራ አስፈጻሚ እስከ ታችኛው የአመራር ሥርዓት ድረስ ፈትሾ ዕርምጃ እንደሚወስድ ቃል ገብቷል፡፡ ' - 'አዲስ አበባ፣ ጥር 2፣ 2012 (ኤፍ.ቢ.ሲ) በፓኪስታን ደቡብ ምእራብ ኩዌታ ከተማ በመስጊድ ላይ በተፈፀመ የቦብም ጥቃት የሞቱ ሰዎች ቁጥር 15 መድረሱን ፖሊስ አስታወቀ።በአርብ ፀሎት ላይ በነበሩ ሰዎች ላይ በተፈፀመው የቦምብ ጥቃቱ ከሞቱት ሰዎች በተጨማሪም ከ20 በላይ ሰዎች ላይ የተለያየ መጠን ያለው ጉዳት መድረሱንም ነው የገለፀው።በመስጊድ ላይ ለተፈፀመው ጥቃትም በአካባቢው የሚንቀሳቀሰው የአሸባሪው ኢስላሚክ ስቴት (አይ.ኤስ) ቡድን ኃላፊነት መውሰዱ ተነገሯል።በሽብር ጥቃቱ በአፍጋኒስታን የሚንቀሳቀሰው የታሊባን ቡድን አመራሮች ተገድለዋል ቢባልም፤ ታሊባን ግን አመራሮቼ ላይ ጉዳት አልደረሰም ሲል አስተባብሏል።ምንጭ፦ ' - በኢትዮጵያ ፕሪምየር ሊግ ዘጠነኛ ሳምንት መቐለ 70 እንደርታ በሜዳው ሲዳማ ቡናን 3-1 ካሸነፈ በኋላ የሁለቱ ቡድኖች አሰልጣኞች አስተያየታቸውን ሰጥተዋል። ” ሲዳማ ቡና በጥሩ ወቅታዊ አቋም የሚገኝ ቡድን በመሆኑ ጨዋታው ከባድ ነበር” –  ገ/መድኅን ኃይሌ – መቐለ 70 እንደርታስለ ጨዋታው” ጨዋታው ከባድ ነበር፤ ሲዳማ ቡና በጥሩ ወቅታዊ አቋም የሚገኝ ቡድን ነው ፤ የያዙት ነጥብም ለዚህ ጨዋታ ጥሩ የስነልቦና ጥንካሬ አስገኝቶላቸዋል። በአንፃሩ እኛ አራት ጨዋታዎች ሳናሸንፍ ነው ወደ ጨዋታው የገባነው። በዚ ምክንያት ጨዋታው አክብዶብን ነበር። በአጠቃላይ ጨዋታውን አሸንፈናል። በቀጣይ ጨዋታዎች ቀስ በቀሰ ወደ አሸናፊነት መጥተን ይህን እናስቀጥላለን። ”“ዳኝነት ላይ ያየሁት ነገር ጥሩ አይደለም” ዘርዓይ ሙሉ – ሲዳማ ቡና ስለ ጨዋታው ” ከዕረፍት በፊት ከጨዋታ ውጪ ኳሱ በኋላ ተጫዋቾቻችን መረጋጋት አልቻሉም። በጨዋታው አሳፋሪ ዳኝነት ነው ያየሁት። ስለ ጨዋታው ብጠይቀኝ አሳፋሪ እና ሚዛናዊት የሌለው ዳኝነት ነው። የተቆጠርቡን ግቦች እኛ ላይ ጥፋት እየተፈፀሙ የተቆጠሩ ናቸው። ከጨዋታ ውጭ ሆኖም ግብ ይቆጠራል። በቃ ይህንን ነው ያየሁት። ከዚ ውጭ ግን መቐለ ለማሸነፍ የነበረው ተነሳሽነት ጥሩ ነበር። እንደ ቡድን ተንቀሳቅሰዋል እኛም የተሻለ ኳስ ተቆጣጥረን ተጫውተናል። እንዳያችሁት ኳሱን መስርተን ነው የወጣነው ግን በተለያዩ ስህተቶች ግብ ሲቆጠርብን የተጫዋቾቻችን ብቃት አወረደው። የምንፈልገው እንቅስቃሴ ያላደረግነው በዳኞች ምክንያት ነው። ገና በሰባተኛ ደቂቃ ነው የተጀመረው ይሄ ነገር። ጨዋታው ጥሩ ሆኖ ሳለ ሚዛኑ የጠበቀ ዳኝነት አላየንም። ዳኝነቱ ልክ ካልሆነ የጨዋታው እንቅስቃሴ እንዳለ ይበላሻል ይሄ ሁሉ ደጋፊ የገባው ጥሩ ጨዋታ ለማየት ነው። ለምንድነው ተጫዋቾች ሮጠው ዳኛ ላይ የሚሄዱት። በተደጋጋሚ ስህተት ይሰራ ነበር። እኛ ተጫዋቾቻችንን ብናረጋጋም የሚያደርጉት ስህተት ለሌላ ነገር የሚዳርግ ነበር። ዳኞቹ አቅም አንሷቸው ነው ብዬ አላስብም፤ ሆን ተብሎ የተደረገ ነገር ነው። ዳኝነት ላይ ያየሁት ነገር ጥሩ አይደለም። መቐለን ግን እንደ ቡድን ጥሩ ነው እንኳን ደስ አላቹ ማለት እፈልጋለው። ”ስለ ስታድየሙ ድባብ” ደጋፊው የሚደነቅ ደጋፊ ነው። በስርዓት ነው ቡድኑን የሚደግፈው። ምንም ነገር ቢፈጠር ቡድኑን ነበር ሲደግፍ የነበረው። ”ዳኝነት ላይ 
ስለሰጠው አስተያየት” እኔ አዳላ አላልኩም። ግን ብቃት ማነስ ነው ብዬ አላስብም። እነዚህ ሁሉ ግቦች እስኪቆጠሩ ብቃት ማነስ አይደለም። በአጠቃላይ ዳኝነቱ ሚዘናዊ አልነበረም። ሁሉም ግብ ላይ የዳኛ ተፅዕኖ አለበት፤ በቃ ይሄን ነው የምለው። አንዱን ከጨዋታ ውጪ ብለህ አንዱን የምታፀድቅ ከሆነ ስህተት ነው። “ - source_sentence: የከምባታና ጠንባሮ አርሶአደሮች sentences: - በደሴ ማረሚያ ቤት በተደረገ የኮቪድ-19 ምርመራ 13 ሰዎች ቫይረሱ እንዳለባቸው ማረጋገጡን የከተማው ጤና መምሪያ አስታወቀ።የመምሪያው ኃላፊ አቶ አብዱልሃሚድ ይመር በተለይ ለቢቢሲ እንዳስታወቁት 12ቱ የህግ ታራሚዎች ሲሆኑ ሌላኛው ደግሞ የማረሚያ ቤቱ ባልደረባ ናቸው።እንደ አቶ አብዱልሃሚድ ገለጻ ከሆነ ከማረሚያ ቤቱ ጋር በመነጋገርም አዲስ የሚገቡ ታራሚዎች ለ14 ቀናት ለብቻቸው እንዲቆዩ ከማድረግ በተጨማሪ በመጨረሻዎቹ ቀናት ላይ ምርመራ ሲደረግላቸው ቆይቷል።ከሐምሌ 20 በኋላ ማረሚያ ቤቱ የገቡ 46 ታራሚዎች ላይ በተደረገ ምርመራ 10 ሰዎች ኮሮናቫይረስ እንዳለባቸው ለማረጋገጥ ተችሏል።“ታራሚዎቹ ከተለያዩ አካባቢዎች የመጡ ናቸው። ከተለያዩ ከደቡብ ወሎ ወረዳዎች እና ከደሴ ከተማም የተገኙ ናቸው” ብለዋል።በሁለተኛ ዙር 60 ሰዎች ላይ በተደረገ ምርመራ ሦስቱ ቫይረሱ እንዳለባቸው ተረጋግጧል።በሁለተኛው ዙር ቫይረሱ ከተገኘባቸው መካከል በመጀመሪያው ዙር እንዳለባቸው ከታወቁ ሰዎች ጋር ንክኪ የነበራቸው እና አንድ ማረሚያ ቤቱ ባልደረባ ይገኙበታል።የማረሚያ ቤቱን የሕግ ታራሚዎች እና ባልደረባዎችን በሙሉ ለመመርመር መቻሉንም አቶ አብዱልሃሚድ አስታውቀዋል።ቫይረሱ የተገኘባቸው ቦሩ ሜዳ መጀመሪያ ደረጃ ሆስፒታል የተላኩ ሲሆን፤ ተጓዳኝ ህመም ያለበት አንድ ታራሚ ካሳየው የህመም ምልክት ውጭ ሁሉም በጥሩ ሁኔታ ላይ እንደሚገኙ ተናግረዋል።በማረሚያ ቤቱ የቫይረሱ ስርጭት እንዳይስፋፋ አዲስ የሚገቡትን እና ነባር ታራሚዎችን ከመመርመር ባለፈ የግንዛቤ ማስጨበጫ ሥራ፣ የኬሚካል ርጭት፣ ርቀትን ማስጠበቅ እና ንጽህና የማስጠበቅ ሥራ እየተከናወነ ነው ብለዋል።ባለፉት ወራት በአማራ ክልል በተደረገ የኮሮናቫይረስ ምርመራ 83 አሽከርካሪዎች እና ረዳቶቻቸው ቫይረሱ ተገኝቶባቸዋል።በክልሉ ቫይረሱ ከተገኘባቸው ሰዎች መካካል 23 የህክምና ባለሙያዎች እንደሚገኙበትም ከአማራ ህብረተሰብ ጤና ኢንስቲትዩት ያገኘነው መረጃ ያሳያል።በአጠቃላይ በኢትዮጵያ በኮቪድ-19 የተያዙ ሰዎች ቁጥር 25,118 የደረሱ ሲሆን የሟቾች ቁጥር 463 ደርሷል። እንዲሁም አጠቃላይ ከበሽታው ያገገሙ ሰዎች 11,034 ደርሰዋል። - 'በደቡብ ክልል ከፋ ዞን ዴቻ ወረዳ ከ20 ሺህ በላይ የከምባታና ጠምባሮ አርሶአደሮች በማንነታችን ጥቃት ደርሶብናል በማለት እየተፈናቀሉ ናቸው፡፡አርሶአደሮቹ የተፈናቀሉት ከሶስት ሳምንት በፊት በወረዳው ከ30 በላይ ሲቪሎች በታጠቁ ግለሰቦች በአሰቃቂ ሁኔታ መገደላቸውን ተከትሎ ነው ተብሏል፡፡ጉዳያችንን ለክልሉ መንግሥት ብናሳውቅም ችላ ተብለናል ሲሉ አርሶአደቹ ተናግረዋል። አሁን ለችግር መጋለጣቸውንም ለቪኦኤ አስረድተዋል፡፡የከምባታ ጠንባሮ ዞን በበኩሉ የተፈናቀሉ ዜጎች በስቃይ ላይ መሆናቸውን ገልጦ መፍትሔ እየተፈለገ መሆኑን አስታውቋል፡፡ ' -  ባሕር ዳር፡ መስከረም 7/2012 ዓ.ም (አብመድ) በጣልያን ባሕር ዳርቻ ጠባቂዎች ሕይወታቸው የተረፉ 90 ስደተኞችን ማልታ ለመቀበል ተስማማች፡፡በቀጣዩ ሳምንት ደግሞ በአዲስ የስደተኞች መከፋፈያ አሠራር ዘዴ ላይ የአውሮፓ ኅብረት ሊመክር ነው፡፡የማልታ የሕይወት አድን ትብብር ማዕከል በጠየቀው መሠረት ትናንት የጣልያን ባሕር 
ዳርቻ ጠባቂ ቡድን ስደተኞቹን ታድጓል፡፡ ከሊቢያ የባሕር ክልል ውጭ እየሰመጠች ከነበረች ጀልባ ነው ስደተኞቹን ማትረፍ የተቻለው፡፡ ማልታ በመጀመሪያ ስደተኞቹን ወደ ሀገሯ ለማስገባት ፈቃደኛ አልሆነችም ነበር፡፡ - source_sentence: የአዲስ አበባ ከተማ አስተዳደር የጀመረው ኦዲት ወደ ባለ ኮከብ ሆቴሎችና ኢንዱስትሪዎች ተሸጋገረ sentences: - የኢትዮጵያ እግር ኳስ ፌዴሬሽን ከኢትዮጵያ ብሮድካስቲንግ ኮርፖሬሽን (EBC) ጋር በተፈራረመው የመግባቢያ ሰነድ ስምምነት ዙሪያ ከፕሪሚየር ሊግ ክለቦች ጋር ነገ ከጠዋቱ 4፡00 ጀምሮ በኢንተርኮንትኔንታል ሆቴል ውይይት ያካሂዳል፡፡በውይይቱ ፌዴሬሽኑና EBC የኢትዮጵያ ፕሪሚየር ሊግ ጨዋታዎችን በቀጥታ የተሌቭዥን ስርጭት አማካኝነት በመላ ኢትዮጵያ ተደራሽ ለማድረግ ነሃሴ 6/2007 ዓ.ም የተፈራረሙትን የመግባቢያ ሰነድ አስመልክቶ ስለ ስምምነቱ ፋይዳና ሂደት ገለፃ የሚደረግ ሲሆን ከፕሪሚየር ሊግ ክለቦች ለሚነሱ ጥያቄዎች ማብራሪያ ይሰጣል፡፡ በክለቦች መብትና ተጠቃሚነት ዙሪያም ግልጽ ውይይት ይካሄዳል፡፡ስምምነቱ ይፋ መደረጉንና መፈረሙን ተከትሎ ከተለያዩ በላድርሻ አከላት የተነሱት ጥያቄዎች በተለይም የኢትዮጵያ ቡና ስፖርት ክለብ በደብዳቤ አቋሙን የገለጸበት አግባብ ተቀባይነት እንዳለው ታምኖበታል፡፡ ነገ ከጠዋቱ 4፡00 ጀምሮ የሚካሄደውና የፕሪሚየር ሊግ ክለቦች ፕሬዝዳንቶች እና ስራ አስኪያጆች የሚሳተፉበት የውይይት መድረክ ስምምነቱን አስመልክቶ ሊነሱ የሚችሉትን ጥያቄዎች በመቀበል የማስተካካያ ርምጃ ለመውሰድ የሚያስችል በመሆኑ ሁሉም ክለቦች የውይይቱ ተሳታፊ እንዲሆኑ ፌዴሬሽኑ ጥሪውን አስተላልፋል፡፡ፌዴሬሽኑና ኢቢሲ አለም አቀፍና የሀገር ውስጥ ጨዋታዎችን በቴሌቭዥን የቀጥታ ስርጭት ለማስተላለፍ የተፈራረሙት የመግባቢያ ሰነድ ዓላማዎች በዋነኝነት የወጣቱን ትውልድ የእግር ኳስ ስፖርት ተነሳሽነት ማሳደግ፣ የብሔራዊ እና አገር ውስጥ ውድድሮችን የቀጥታ ስርጭት ተደራሽነት ማረጋገጥ እንዲሁም ለእግር ኳስ ስፖርት ዘላቂና አስተማማኝ እድገት አመቺ ሁኔታዎችን በመፍጠር ላይ እንደሚመሰረት መገለጹ ይታወሳል፡፡ማስታወሻ፡- በውይይቱ የሚሳተፉት የፌዴሬሽኑ የስራ ሃላፊዎችና የክለቦች ተወካዮች ብቻ ናቸው፡፡ - ለመጀመርያ ጊዜ በተሟላ ደረጃ መሬትና መሬት ነክ ይዞታዎችን ኦዲት በማድረግ ላይ የሚገኘው የአዲስ አበባ ከተማ አስተዳደር፣ የኦዲት አድማሱን በማስፋት በባለ ኮከብ ሆቴሎችና በኢንዱስትሪዎች ላይ ቆጠራ ሊያካሂድ ነው፡፡ የአዲስ አበባ ከተማ አስተዳደር ከ1995 ዓ.ም. ጀምሮ እስከ ኅዳር 2004 ዓ.ም. 
የከተማ ቦታ በሊዝ ስለመያዝ የሚደነግገው እስኪወጣበት ጊዜ ድረስ፣ ላለፉት 15 ዓመታት በኢንዱስትሪ ዞኖችና በተናጠል ለሚካሄዱ ፋብሪካዎች በርካታ ቦታዎችን ሰጥቷል፡፡ ከዚህ በተጨማሪ ለበርካታ ሆቴሎች ግንባታ የሚሆን ሰፋፊ ቦታዎችንም እንዲሁ አቅርቧል፡፡ነገር ግን አስተዳደሩ በሰጣቸው ቦታዎች ላይ ስለተከናወነው ልማትም ሆነ፣ የተከናወኑት ግንባታዎች በውላቸው መሠረት ስለመካሄዳቸው በትክክል የተጠናቀረ መረጃ እንደሌለ ይገልጻል፡፡በከተማው ውስጥ የሚገኙ አምራች ኢንዱስትሪዎችንና ባለ ኮከብ ሆቴሎችን ቁጥር ለማወቅ፣ በአግባቡ ሥራዎችን ባላካሄዱት ላይ ደግሞ የማስተካከያ ዕርምጃ ለመውሰድ ኦዲት እንደሚከናወን ለማወቅ ተችሏል፡፡የአዲስ አበባ ከተማ አስተዳደር ምክትል ከንቲባ ታከለ ኡማ (ኢንጂነር) ለሪፖርተር፣ ‹‹እስካሁን ግንባታ ሳይካሄድባቸው ለዓመታት ታጥረው የቆዩ ከአራት ሚሊዮን ካሬ ሜትር በላይ ቦታ መልሰን ወስደናል፤›› ብለዋል፡፡‹‹‹ይህ ትልቅ ሥራ ነው፤›› በማለት ምክትል ከንቲባው ገልጸው፣ በቀጣይ ደግሞ በሆቴሎች፣ በኢንዱስትሪዎች፣ በድንጋይ ማምረቻ ካባዎች፣ እንዲሁም በመኖሪያ ቤቶች ላይ ኦዲት ተካሂዶ ዕርምጃ ይወሰዳል ሲሉ ገልጸዋል፡፡ ‹‹ሥራው ውስብስብ በመሆኑ የሚካሄደው ኦዲት አንዴ ብቻ ሳይሆን ሦስት፣ አራት ጊዜ ይታያል፡፡ ካስፈለገም የማረጋገጡን ሥራ ማዕከላዊ ስታትስቲክስ ኤጀንሲ ሊያከናውን ይችላል፤›› በማለት ምክትል ከንቲባው አስረድተዋል፡፡በአዲስ አበባ ከተማ አምራች ኢንዱስትሪዎች፣ ሆቴሎች፣ ለድንጋይ ማውጪያ የተሰጡ ቦታዎች ያሉበት ወቅታዊ ሁኔታ በትክክል አይታወቅም፡፡ ለእነዚህ ዘርፎች የቀረበው ቦታ ለታለመለት ዓላማ በትክክል ስለመዋሉ፣ ከዘርፉ የሚመነጨው ኢኮኖሚም ሆነ የተፈጠረው የሥራ ዕድል ሽፋን እምብዛም አይታወቅም፡፡ይህንን ሥራ በተሻለ ደረጃ ለመሥራት የከተማው ኢንዱስትሪ ቢሮ ከማዕከላዊ ስታትስቲክስ ኤጀንሲ ጋር በጋራ ለመሥራትም መስማማታቸው ታውቋል፡፡ የማዕከላዊ ስታትስቲክስ ኤጀንሲ የቢዝነስ ስታትስቲክስ ዳይሬክተር አቶ ዘለዓለም ኃይለ ጊዮርጊስ፣ በሆቴሎችና በኢንዱስትሪዎች ላይ ቆጠራውን ለማካሄድ ሙሉ ዝግጅት እየተደረገ መሆኑን ለሪፖርተር ገልጸው፣ በጉዳዩ ላይ ዝርዝር መረጃ ከመስጠት ተቆጥበዋል፡፡   - ጠቅላይ ሚኒስትር ዶክተር አብይ አህመድ ለተለያዩ የመንግስት የስራ ሀላፊዎች ሹመት መስጠታቸውን የጠቅላይ ሚኒስቴር ጽህፈት ቤት አስታውቋል።በጠቅላይ ሚኒስትር ጽህፈት ቤት መግለጫ መሰረት፦ 1.ዶክተር አምባቸው መኮንን፦ የጠቅላይ ሚንስትሩ የመሰረተ ልማትና የከተማ ልማት አማካሪ ሚንስትር 2.አቶ ገብረእግዚአብሔር አርአያ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት ረዳት ተጠሪ 3.አቶ ጫኔ ሽመካ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት ረዳት ተጠሪ 4.አቶ ጫላ ለሚ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት ረዳት ተጠሪ5.አቶ ተስፋሁን ጎበዛይ፦ የጠቅላይ ሚንስትሩ የብሔራዊ ደህንነት ጉዳዮች አማካሪ ሚንስትር ዴኤታ6.ብርጋዴል ጄኔራል አህመድ ሀምዛ፦ የብረታ ብረት ኢንጂነሪንግ ኮርፖሬሽን ዋና ዳይሬክተር7.አቶ ሞቱማ መቃሳ፦ የጠቅላይ ሚንስትሩ የብሔራዊ ደህንነት ጉዳዮች አማካሪ ሚንስትር ዴኤታ8.አቶ ከበደ ይማም፦ የአካባቢ ጥበቃ ደንና የአየር ንብረት ለውጥ ኮሚሽን ምክትል ኮሚሽነር9.አቶ አዘዘው ጫኔ፦ የጉምሩክ ኮሚሽን ምክትል ኮሚሽነር10.አቶ አወል አብዲ፦ የብረታ ብረት ኢንጂነሪንግ ኮርፖሬሽን ምክትል ዋና ዳይሬክተር11.አቶ ሙሉጌታ በየነ፦ የጉምሩክ ኮሚሽን ምክትል ኮሚሽነር12. ዶክተር ፅጌረዳ ክፍሌ፦ የብሔራዊ ኤች. አይ. 
ቪ/ኤድስ መከላከያና መቆጣጠሪያ ጽ/ቤት ዋና ዳይሬክተር13.ወይዘሮ ያምሮት አንዱዓለም፦ የአርማወር ሐሰን የምርምር ኢንስቲትዩት ምክትል ዋና ዳይሬክተር14.ዶክተር ሚዛን ኪሮስ፦ የኢትዮጵያ ጤና መድህን ኤጀንሲ ዋና ዳይሬክተር15.አቶ ሀሚድ ከኒሶ፦ የሰነዶች ማረጋገጫና ምዝገባ ኤጀንሲ ምክትል ዋና ዳይሬክተር16.አቶ ከበደ ጫኔ፦ የስደተኞችና ከስደት ተመላሾች ጉዳይ ኤጀንሲ ዋና ዳይሬክተር17.ወይዘሮ ምስራቅ ማሞ፦ የጉምሩክ ኮሚሽን ምክትል ኮሚሽነር ሆነው ተሹመዋል። - source_sentence: በቁጥጥር ስር የዋሉ የህወሓት ታጣቂዎች ልዩ ኃይሉና ወጣቱ የጥፋት ቡድኑ እኩይ ዓላማ ማስፈጸሚያ ከመሆን እንዲቆጠቡ አስገነዘቡ sentences: - 'የፕሬዚዳንት ዶናልድ ትራምፕ ተቺዎች እንደሚሉት፤ ፕሬዚዳንቱ ለዘመናት የአሜሪካ ወዳጆች በሆኑት ኢትዮጵያ እና ግብፅ መካከል ታላቁ የሕዳሴ ግድብን በተመለከተ ውጥረት ቀስቅሰዋል።ይህም በአሜሪካ እና በአፍሪካ የዲፕሎማሲ ታሪክ ትልቁ የትራምፕ ስህተት ነው ይላሉ።ትራምፕ ከቀናት በፊት ግብፅ "ግድቡን ልታፈነዳው ትችላለች" ማለታቸው ይታወሳል። ጥር ላይ ፕሬዚዳንቱ "ስምምነት መፍጠር ችያለሁ፤ ከባድ ጦርነትም አስቁሜያለሁ" ብለው የኖቤል የሰላም ሽልማት እንደሚገባቸው መናገራቸው ይታወሳል።ነገር ግን ተሸላሚ የሆኑት ጠቅላይ ሚንስትር ዐብይ አሕመድ ነበሩ ።ትራምፕ የኖቤል የሰላም ሽልማት እንደሚገባቸው ሲናገሩ ጉዳዩን ግልፅ ባያደርጉትም፤ በግብፁ ፕሬዘዳንት አብዱልፈታህ አል-ሲሲ ጥሪ መሠረት በኢትዮጵያ እና በግብፅ መካከል ጣልቃ ስለመግባታቸው እየተናገሩ እንደነበረ ይታመናል።ትራምፕ በአንድ ወቅት አብዱልፈታህ አል-ሲሲን "የኔ ምርጡ አምባገነን" ማለታቸው አይዘነጋም።ግብፅ ታላቁ ሕዳሴ ግድብ "ለደህንነቴ ያሰጋኛል" ትላለች። ሱዳንም የግብፅን ያህል ባይሆንም ስጋቱን ትጋራለች። በሌላ በኩል ኢትዮጵያ የኃይል አመንጪውን ግድብ አስፈላጊነት አስረግጣ ትገልጻለች።ኬንያ የሚገኘው የአፍሪካ ቀንድ የጸጥታ ጉዳይ ተንታኝ ረሺድ አብዲ እንደሚለው፤ በግድቡ ዙሪያ ኢትዮጵያ እና ግብፅን ለማደራደር አሜሪካ ጣልቃ መግባቷ የሁለቱን አገሮች ውጥረት አባብሷል።"ኢትዮጵያ በግድቡ አቅራቢያ የጸጥታ ኃይሏን እያጠናከረች ነው። ቤንሻንጉል ጉሙዝ ክልልን ከበረራ ውጪ ማድረጓ አንዱ ማሳያ ነው። በግድቡ ዙሪያ በረራ የሚያግድ መሣሪያም ተገጥሟል። ግብፅ የወታደራዊ ቅኝት በረራ ልታደርግ እንደምትችል ከመስጋት የመነጨ ሊሆን ይችላል" ይላል።ተንታኙ እንደሚናገረው፤ ትራምፕ ዓለም አቀፍ ዲፕሎማሲ እንዴት እንደሚሠራ የሚገነዘቡ አይመስልም።"በንግዱ ዓለም እንደሚደረገው ስምምነት ላይ መድረስ ይቻላል የሚል የተዛባ አመለካከት አላቸው። የውጪ ጉዳይ መያዝ ያለበትን ጉዳይ ግምዣ ቤት ድርድሩን እንዲመራ ያደረጉትም ለዚህ ነው። ከመነሻውም መጥፎ የነበረውን ሁኔታም አባብሶታል" ሲልም ረሺድ ያስረዳል።ኢትዮጵያ ከግብፅ እና ከሱዳን ጋር ያለው ድርድር ሳይቋጭ ግድቡን ለመሙላት በመወሰኗ አሜሪካ የ100 ሚሊዮን ዶላር እርዳታ ማጠፏ ተዘግቧል።ረሺድ "ኢትዮጵያ አሜሪካ እንደከዳቻት ይሰማታል። ብዙ ኢትዮጵያውያን ትራምፕን የጥላቻ ምልክት አድርገውታል" በማለት ሁኔታውን ይገልጻል።የዴሞክራት እጩው ጆ ባይደን እንዲያሸንፉም የበርካታ ኢትዮጵያውያን ምኞት ነው።አሜሪካ የሚገኘው ሴንተር ፎር ግሎባል ዴቨሎፕመንት ውስጥ የፖሊሲ አጥኚ ደብሊው ጉዬ ሙር እንደሚሉት፤ የትራምፕ አስተዳደር እስራኤልና የአረብ ሊግ አገራት መካከል ሰላም መፍጠር ስለሚፈልግ ከግብፅ ጎን መቆሙ የሚጠበቅ ነው።ግብፅ ከእስራኤል ጋር ዘመናት ያስቆጠረ ዲፕሎማሲያዊ ትስስር አላት። ትራምፕ የአረብ ሊግ አገራት 
ለእስራኤል እውቅና እንዲሰጡ ጥረት እያደረጉ ስለሆነ አብዱልፈታህ አል-ሲሲን ማስቀየም አይፈልጉም።ሙር እንደሚናገሩት፤ የትራምፕ አስተዳደር በግድቡ ዙርያ ለግብፅ የወገነውም በዚህ ምክንያት ነው።ትራምፕ ሱዳንን በተመለከተ የደረሱበት ውሳኔ የአረቡን አገራት ከእስራኤል ጋር ለማስስማት የሚያደርጉት ጥረት አንድ አካል ነው።ሱዳን ከእስራኤል ጋር ስምምነት ለማድረግ ወስናለች።በእርግጥ የአገሪቱ ተጠባባቂ የውጪ ጉዳይ ሚንስትር ውሳኔው ገና በሕግ አውጪ መጽደቅ እንዳለበት ቢናገሩም፤ ሱዳን እንደ ጎርጎሮሳውያኑ 1967 ላይ የአረብ ሊግ አገራት ውይይት ማስተናገዷ መዘንጋት የለበትም። በውይይቱ "ከእስራኤል ጋር መቼም ሰላም አይፈጠርም። መቼም ቢሆን ለእስራኤል እውቅና አይሰጥም። ድርድርም አይካሄድም" ተብሎም ነበር።ሱዳን ከእስራኤል ጋር ለመስማማት በመፍቀዷ ትራምፕ ሽብርን ከሚድፉ አገሮች ዝርዝር እንደሚያስወጧት ተናግረዋል። ይህም ለምጣኔ ሀብቷ ማገገም የሚረዳ ድጋፍ እንድታገኝ ያግዛታል።ትራምፕ በድጋሚ ከተመረጡ ኢትዮጵያ ግድቡን በተመለከተ ሱዳን እና ግብፅ ላላቸው ስጋት አንዳች መልስ እንድትሰጥ ጫና እንደሚያደርጉ ይጠበቃል።አጥኚው እንደሚሉት፤ ሱዳን ሽብርን ከሚደግፉ አገሮች ዝርዝር ከወጣች የትራምፕ አስተዳደር በምላሹ የሚጠብቀው ነገር አለ።"ከእስራኤል ጋር ስምምነት የመፍጠር ጉዳይ የሱዳን ማኅበረሰብን የከፋፈለ ነው። መንግሥት የራሱ የጸጥታ ጥያቄዎች እያሉበት ይህን ውሳኔ ማሳለፉ ችግር ሊያስከትል ይችላል" ብለዋል። ትራምፕ አፍሪካን በተመለከተ የሚያራምዱት ፖሊሲ፤ በአሜሪካ እና በቻይና መካከል የሚካሄድ ''አዲሱ ቀዝቃዛ ጦርነት'' ነው ሲል ረሺድ ይገልጸዋል።ለምሳሌ ቻይና ከግዛቷ ውጪ የመጀመሪያውን ወታደራዊ መቀመጫ የከፈተችው በጅቡቲ ነው። ማዕከሉ የሚገኘው አሜሪካ የሶማሊያ ታጣቂዎች ላይ የአየር ጥቃት ለመሰንዘር ያቋቋመችው ማዕከል አቅራቢያ ነው።በቅርቡ የአሜሪካ ተዋጊ ጀቶች ለማረፍ ሲሞክሩ፤ ቻይና የአሜሪካውያን ወታደሮችን እይታ የሚጋርድ መሣሪያ መሞከሯን ረሺድ ያጣቅሳል። "የትራምፕ አስተዳደር ጸረ ቻይና ፖሊስ ያራምዳል" የሚለው ተንታኙ ሁኔታው ለአፍሪካ ቀንድ አስቸጋሪ መሆኑንም ያስረዳል።ቻይና አፍሪካ ውስጥ ያላትን የንግድ የበላይነት ለመቀልበስ፤ የትራምፕ አስተዳደር ''ፕሮስፔሪቲ አፍሪካ ኢን 2018'' የተባለ ፖሊሲ ነድፏል።በአፍሪካ እና በአሜሪካ መካከል የሚካሄደውን ንግድ በእጥፍ የማሳደግ እቅድ አለ። አምና የአሜሪካ መንግሥት የንግድ ተቋሞች አፍሪካ ውስጥ እንዲሠሩ የገንዘብ ድጋፍ የሚሰጥበት አሠራር ዘርግቷል።ሙር እንደሚሉት፤ የአሜሪካ ድርጅቶች ከቻይና ተቋሞች ጋር መወዳደር አልቻልንም ብለው ቅሬታ ስላሰሙ የገንዘብ ድጋፍ ለመስጠት ተወስኗል። "የአይቲ ዘርፍ እንደ ማሳያ ቢወሰድ፤ 70 በመቶ የአፍሪካ ኢንፎርሜሽን ቴክኖሎጂ የተመሠረተው በቻይና ድርጅቶች ላይ ነው" ሲሉ ያብራራሉ። የትራምፕ አስተዳደር በ2025 የሚያበቃውን ከ30 በላይ የአፍሪካ አገሮች ተጠቃሚ እንዲሆኑበት ታስቦ በአሜሪካ ለአፍሪካውያን የተሰጠው ከታሪፍና ከቀረጥ ነፃ የገበያ ዕድል (አፍሪካ ግሮዝ ኤንድ ኦፖርቹኒቲ አክት-አጎዋ) የመሰረዝ እቅድ አለው። ለአፍሪካ ምርቶች የአሜሪካን ገበያ ክፍት የሚያደርገው ስምምነት የተፈረመው በቢል ክሊንተን ነበር።አሜሪካ አሁን ላይ ትኩረቷ የሁለትዮሽ የንግድ ስምምነት እንደሆነ ሙር ይናገራሉ። ለምሳሌ ከኬንያ ጋር ንግግር እየተካሄደ ነው።ኬንያ፤ የቻይና ''ቤልት ኤንድ ሮድ ኢኒሽየቲቭ'' አካል እንደሆነች ይታወቃል። ስምምነቱ ቻይናን ከአፍሪካ ጋር በንግድ የሚያስተሳስርና የቻይና ዓለም አቀፍ ተደማጭነት የሚያጎላ እንደሆነ አሜሪካ 
ታምናለች።ትራምፕ ከኬንያ ጋር በቀጥታ ከተስማሙ በኋላ ተመሳሳይ መንገድ ተጠቅመው ከሌሎች የአፍሪካ አገሮች ጋር የመሥራት ውጥን እንዳላቸው ሙር ይናገራሉ።ይህ የትራምፕ መንገድ፤ ከአፍሪካ ሕብረት የንድግና ኢንዱስትሪ ኮሚሽነር አልበርት ሙቻንጋን ሐሳብ ጋር ይጣረሳል።እሳቸው የአፍሪካ አገራት በተናጠል ሳይሆን በአንድነት ከአሜሪካ ጋር ስምምነት እንዲያደርጉ ይፈልጋሉ። ሙር እንደሚሉት፤ የአሜሪካ ውሳኔ የአፍሪካ ሕብረት የአህጉሪቱን ምጣኔ ሀብት ለማጣመር ከሚያደርገው ጥረት ጋር ይጣረሳል።ሕብረቱ፤ አፍሪካን የዓለም ትልቋ ነጻ የንግድ ቀጠና የማድረግ አላማ አለው።ትራምፕ ግን በጥምረት ከሚሠሩ ኃይሎች ጋር በጋራ ያለመደራደር አዝማሚያ ያሳያሉ ሲሉ አጥኚው ያክላሉ።የትራምፕ ተቀናቃኝ ጆ ባይደን ካሸነፉ የአፍሪካ ፖሊሲያቸው ምን እንደሚሆን እስካሁን አልገለጹም።"የባይደን አስተዳደር በኦባማ ጊዜ ወደነበረው ሂደት ሊመለስ ይችላል" ይላሉ ሙር። ' - አዲስ አበባ፣ ጥር 2፣ 2013(ኤፍ ቢ ሲ) የጋምቤላ ክልል ወጣት የሴራ ፖለቲካ አራማጆችን በዝምታ አይመለከቱም ሲል የክልሉ ብልጽግና ፓርቲ ወጣቶች ሊግ ሰብሳቢ ወጣት ራች ጎች ገለጸ።የክልሉ የብልጽግና ፓርቲ ወጣቶች ሊግ የውይይት መድረክ ትናንት ተካሂዷል።ከአሁን በፊት በነበረው የፖለቲካ ሴራ ወጣቱም ሆነ መላው የክልሉ ህዝብ ተጠቃሚ ሳይሆን ቆይቷል ያለው ሰብሳቢው ይህንን የህዝብ ጥቅም የማያረጋግጥ የፖለቲካ ሴራ አካሄድ የክልሉ ወጣት እንደማይቀበለው ገልጿል።የክልሉ ህዝብ እኩል ተጠቃሚ የመሆን ዕድል ማግኘቱን አስታውሶ፤ “በቀጣይ የሴራ ፖለቲካ አራማጆችን ወጣቱ በዝምታ አይመለከትም” ብሏል።የሊጉ ምክትል ሰብሳቢ ወጣት ኡጁሉ ቢሩ በበኩሉ “ከአሁን በጎጥና በመንደር በመከፋፈል አንድነቱን ለመሸርሽር ሲሰራ ነበር” ብሏል።ህዝቡ ልዩነቶች እንዳማያስፈልጉ በመረዳቱ በክልሉ ሰላም መረጋገጡን ጠቅሶ፤ “በቀጣይ በሚስማሙና በሚያግባቡ ጎዳዮች ዙሪያ እንሰራለን” ሲል ተናግሯል።የመድረኩ ተሳታፊ ወጣቶችም ሀገርን ማልማትና ማሳደግ በሚያስችሉ ጉዳዮች ላይ ትኩረት ማድረግ እንደሚገባ በመግለጽ ሐሳብ አንስተዋል።ለዘንድሮ ምርጫ ሰላማዊ ሂደትና ለተጀመረው የብልጽግና ጉዞ ስኬታማነት የበኩላቸውን አስተዋጽኦ ለማበርከት ዝግጁ መሆናቸውንም አረጋግጠዋል።ከጽንፈኝነትና ከብሄርተኝነት አስተሳሰቦች በመውጣት መንግስት በጀመራቸው የሰላም፣ የዴምክራሲና የልማት ስራዎች በንቃት ለመሳተፍ ዝግጁ እንደሆኑ መግለፃቸውን ኢዜአ ዘግቧል።የክልሉ ብልጽግና ፓርቲ ጽህፈት ቤት ኃላፊ አቶ ላክደር ላክባክ ፤ በሀገሪቱ እየተካሄደ ያለውን ሁለንተናዊ ለውጥና የብልፅግና ጉዞ እውን ለማድረግ ወጣቱ ኃይል የማይተካ  ሚና አለው ብለዋል።ከፌስቡክ ገፃችን በተጨማሪ ወቅታዊ፣ ትኩስ እና የተሟሉ መረጃዎችን ለማግኘት፡-የፋና ድረ ገጽ ይጎብኙ፤ተንቀሳቃሽ ምስሎችን ለማግኘት የፋና ቴሌቪዥን የዩቲዩብ ቻናል ሰብስክራይብ ያድርጉፈጣን መረጃዎችን ለማግኘት ትክክለኛውን የፋና ቴሌግራም ቻናል ይቀላቀሉከዚህ በተጨማሪም በትዊተር ገጻችን ይወዳጁንዘወትር ከእኛ ጋር ስላሉ እናመሰግናለን! 
- አዲስ አበባ ፣ ህዳር 1 ፣ 2013 (ኤፍ ቢ ሲ) ልዩ ኃይሉና ወጣቱ የጥፋት ቡድኑ እኩይ ዓላማ ማስፈጸሚያ መሆን የለባቸውም ሲሉ በቁጥጥር ስር የዋሉ የጽንፈኛው ህወሓት ቡድን ታጣቂዎች ገለጹ።ከአንድ ሳምንት በፊት በትግራይ ክልል በነበረው የመከላከያ ሰራዊት ሰሜን ዕዝ ላይ በህወሓት ቡድን የተፈጸመውን ጥቃት ተከትሎ የሃገር መከላከያ ሠራዊት በጠቅላይ ሚኒስትር ዐቢይ አሕመድ በተሰጠው ሃገርን የማዳን ተልዕኮ ሕግ ለማስከበር የዘመቻ ሥራዎችን እያከናወነ ይገኛል።የሠራዊቱ 5ኛ ሜካናይዝድ ክፍለ ጦር የህወሓትን ታጣቂዎች በቁጥጥር ስር አውሏል።በቁጥጥር ስር የዋሉት ታጣቂዎች የትግራይ ልዩ ኃይልን የተቀላቀሉት ኑሯቸውን አሸንፈው ለማደግ እንጂ ከሃገር መከላከያ ሠራዊት ጋር ለመዋጋት አለመሆኑን ገልጸዋል።ኑሮን ለማሸነፍ በሚል ወደ ልዩ ኃይሉ ቢገቡም የህወሓት የጥፋት ቡድን እኩይ ዓላማ ማስፈጸሚያ ከመሆን ውጪ ያገኙት ነገር አለመኖሩን ነው የተናገሩት።ከሃገር መከላከያ ሠራዊት ጋር መጋጨት ማለት ከኢትዮጵያ ጋር መጋጨት መሆኑንም ገልጸዋል።የትግራይ ልዩ ኃይል እና ወጣትም የህወሓት የጥፋት ቡድን ሰላባ እንዳይሆኑ ከሃገር መከላከያ ሠራዊቱ ጎን መቆም እንዳለባቸው ተናግረዋል።ታጣቂዎቹ በቁጥጥር ስር ከዋሉ በኋላ በሃገር መከላከያ ሠራዊቱ የደረሰባቸው ምንም አይነት ችግር እንደሌለና በአሁኑ ወቅት በጥሩ ሁኔታ ላይ እንደሚገኙም አስረድተዋል።የሃገር መከላከያ ሠራዊት እያከናወነ ባለው ዘመቻ የትግራይ ልዩ ኃይልና ሚሊሻ አባላት በቁጥጥር ስር እየዋሉ መሆኑን ኢዜአ ዘግቧል። datasets: - Desalegnn/amharic-passage-retrieval-dataset pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: Snowflake Arctic Embed L Amharic results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 1024 type: dim_1024 metrics: - type: cosine_accuracy@1 value: 0.7564303287855066 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8848132408857079 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9172444643256542 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9416237978080966 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7564303287855066 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2949377469619026 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.18344889286513083 name: Cosine 
Precision@5 - type: cosine_precision@10 value: 0.09416237978080964 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.7564303287855066 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8848132408857079 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.9172444643256542 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9416237978080966 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8547186854586896 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.8262166590337033 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.8282607268472338 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.7454708118989041 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.87877432341758 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9118765376873182 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9398344889286513 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7454708118989041 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.29292477447252663 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.18237530753746362 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09398344889286513 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.7454708118989041 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.87877432341758 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.9118765376873182 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9398344889286513 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.848356501861952 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.818424822400444 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.8204738239167285 name: Cosine Map@100 --- # Snowflake Arctic Embed L Amharic This is a [sentence-transformers](https://www.SBERT.net) model finetuned from 
[Snowflake/snowflake-arctic-embed-l-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0) on the [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/Desalegnn/amharic-passage-retrieval-dataset) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0) <!-- at revision ac6544c8a46e00af67e330e85a9028c66b8cfd9a -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/Desalegnn/amharic-passage-retrieval-dataset)
- **Language:** am
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run
inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Desalegnn/Desu-snowflake-arctic-embed-l-v2.0-finetuned-amharic-45k") # Run inference queries = [ "\u1260\u1241\u1325\u1325\u122d \u1235\u122d \u12e8\u12cb\u1209 \u12e8\u1205\u12c8\u1213\u1275 \u1273\u1323\u1242\u12ce\u127d \u120d\u12e9 \u1283\u12ed\u1209\u1293 \u12c8\u1323\u1271 \u12e8\u1325\u134b\u1275 \u1261\u12f5\u1291 \u12a5\u12a9\u12ed \u12d3\u120b\u121b \u121b\u1235\u1348\u1338\u121a\u12eb \u12a8\u1218\u1206\u1295 \u12a5\u1295\u12f2\u1246\u1320\u1261 \u12a0\u1235\u1308\u1290\u12d8\u1261", ] documents = [ 'አዲስ አበባ ፣ ህዳር 1 ፣ 2013 (ኤፍ ቢ ሲ) ልዩ ኃይሉና ወጣቱ የጥፋት ቡድኑ እኩይ ዓላማ ማስፈጸሚያ መሆን የለባቸውም ሲሉ በቁጥጥር ስር የዋሉ የጽንፈኛው ህወሓት ቡድን ታጣቂዎች ገለጹ።ከአንድ ሳምንት በፊት በትግራይ ክልል በነበረው የመከላከያ ሰራዊት ሰሜን ዕዝ ላይ በህወሓት ቡድን የተፈጸመውን ጥቃት ተከትሎ የሃገር መከላከያ ሠራዊት በጠቅላይ ሚኒስትር ዐቢይ አሕመድ በተሰጠው ሃገርን የማዳን ተልዕኮ ሕግ ለማስከበር የዘመቻ ሥራዎችን እያከናወነ ይገኛል።የሠራዊቱ 5ኛ ሜካናይዝድ ክፍለ ጦር የህወሓትን ታጣቂዎች በቁጥጥር ስር አውሏል።በቁጥጥር ስር የዋሉት ታጣቂዎች የትግራይ ልዩ ኃይልን የተቀላቀሉት ኑሯቸውን አሸንፈው ለማደግ እንጂ ከሃገር መከላከያ ሠራዊት ጋር ለመዋጋት አለመሆኑን ገልጸዋል።ኑሮን ለማሸነፍ በሚል ወደ ልዩ ኃይሉ ቢገቡም የህወሓት የጥፋት ቡድን እኩይ ዓላማ ማስፈጸሚያ ከመሆን ውጪ ያገኙት ነገር አለመኖሩን ነው የተናገሩት።ከሃገር መከላከያ ሠራዊት ጋር መጋጨት ማለት ከኢትዮጵያ ጋር መጋጨት መሆኑንም ገልጸዋል።የትግራይ ልዩ ኃይል እና ወጣትም የህወሓት የጥፋት ቡድን ሰላባ እንዳይሆኑ ከሃገር መከላከያ ሠራዊቱ ጎን መቆም እንዳለባቸው ተናግረዋል።ታጣቂዎቹ በቁጥጥር ስር ከዋሉ በኋላ በሃገር መከላከያ ሠራዊቱ የደረሰባቸው ምንም አይነት ችግር እንደሌለና በአሁኑ ወቅት በጥሩ ሁኔታ ላይ እንደሚገኙም አስረድተዋል።የሃገር መከላከያ ሠራዊት እያከናወነ ባለው ዘመቻ የትግራይ ልዩ ኃይልና ሚሊሻ አባላት በቁጥጥር ስር እየዋሉ መሆኑን ኢዜአ ዘግቧል።', 'የፕሬዚዳንት ዶናልድ ትራምፕ ተቺዎች እንደሚሉት፤ ፕሬዚዳንቱ ለዘመናት የአሜሪካ ወዳጆች በሆኑት ኢትዮጵያ እና ግብፅ መካከል ታላቁ የሕዳሴ ግድብን በተመለከተ ውጥረት ቀስቅሰዋል።ይህም በአሜሪካ እና በአፍሪካ የዲፕሎማሲ ታሪክ ትልቁ የትራምፕ ስህተት ነው ይላሉ።ትራምፕ ከቀናት በፊት ግብፅ "ግድቡን ልታፈነዳው ትችላለች" ማለታቸው ይታወሳል። ጥር ላይ ፕሬዚዳንቱ "ስምምነት መፍጠር ችያለሁ፤ ከባድ ጦርነትም አስቁሜያለሁ" ብለው የኖቤል የሰላም ሽልማት እንደሚገባቸው መናገራቸው ይታወሳል።ነገር ግን ተሸላሚ የሆኑት ጠቅላይ ሚንስትር ዐብይ አሕመድ ነበሩ ።ትራምፕ የኖቤል የሰላም ሽልማት እንደሚገባቸው ሲናገሩ ጉዳዩን ግልፅ ባያደርጉትም፤ በግብፁ ፕሬዘዳንት አብዱልፈታህ አል-ሲሲ ጥሪ መሠረት በኢትዮጵያ እና በግብፅ መካከል ጣልቃ ስለመግባታቸው እየተናገሩ እንደነበረ 
ይታመናል።ትራምፕ በአንድ ወቅት አብዱልፈታህ አል-ሲሲን "የኔ ምርጡ አምባገነን" ማለታቸው አይዘነጋም።ግብፅ ታላቁ ሕዳሴ ግድብ "ለደህንነቴ ያሰጋኛል" ትላለች። ሱዳንም የግብፅን ያህል ባይሆንም ስጋቱን ትጋራለች። በሌላ በኩል ኢትዮጵያ የኃይል አመንጪውን ግድብ አስፈላጊነት አስረግጣ ትገልጻለች።ኬንያ የሚገኘው የአፍሪካ ቀንድ የጸጥታ ጉዳይ ተንታኝ ረሺድ አብዲ እንደሚለው፤ በግድቡ ዙሪያ ኢትዮጵያ እና ግብፅን ለማደራደር አሜሪካ ጣልቃ መግባቷ የሁለቱን አገሮች ውጥረት አባብሷል።"ኢትዮጵያ በግድቡ አቅራቢያ የጸጥታ ኃይሏን እያጠናከረች ነው። ቤንሻንጉል ጉሙዝ ክልልን ከበረራ ውጪ ማድረጓ አንዱ ማሳያ ነው። በግድቡ ዙሪያ በረራ የሚያግድ መሣሪያም ተገጥሟል። ግብፅ የወታደራዊ ቅኝት በረራ ልታደርግ እንደምትችል ከመስጋት የመነጨ ሊሆን ይችላል" ይላል።ተንታኙ እንደሚናገረው፤ ትራምፕ ዓለም አቀፍ ዲፕሎማሲ እንዴት እንደሚሠራ የሚገነዘቡ አይመስልም።"በንግዱ ዓለም እንደሚደረገው ስምምነት ላይ መድረስ ይቻላል የሚል የተዛባ አመለካከት አላቸው። የውጪ ጉዳይ መያዝ ያለበትን ጉዳይ ግምዣ ቤት ድርድሩን እንዲመራ ያደረጉትም ለዚህ ነው። ከመነሻውም መጥፎ የነበረውን ሁኔታም አባብሶታል" ሲልም ረሺድ ያስረዳል።ኢትዮጵያ ከግብፅ እና ከሱዳን ጋር ያለው ድርድር ሳይቋጭ ግድቡን ለመሙላት በመወሰኗ አሜሪካ የ100 ሚሊዮን ዶላር እርዳታ ማጠፏ ተዘግቧል።ረሺድ "ኢትዮጵያ አሜሪካ እንደከዳቻት ይሰማታል። ብዙ ኢትዮጵያውያን ትራምፕን የጥላቻ ምልክት አድርገውታል" በማለት ሁኔታውን ይገልጻል።የዴሞክራት እጩው ጆ ባይደን እንዲያሸንፉም የበርካታ ኢትዮጵያውያን ምኞት ነው።አሜሪካ የሚገኘው ሴንተር ፎር ግሎባል ዴቨሎፕመንት ውስጥ የፖሊሲ አጥኚ ደብሊው ጉዬ ሙር እንደሚሉት፤ የትራምፕ አስተዳደር እስራኤልና የአረብ ሊግ አገራት መካከል ሰላም መፍጠር ስለሚፈልግ ከግብፅ ጎን መቆሙ የሚጠበቅ ነው።ግብፅ ከእስራኤል ጋር ዘመናት ያስቆጠረ ዲፕሎማሲያዊ ትስስር አላት። ትራምፕ የአረብ ሊግ አገራት ለእስራኤል እውቅና እንዲሰጡ ጥረት እያደረጉ ስለሆነ አብዱልፈታህ አል-ሲሲን ማስቀየም አይፈልጉም።ሙር እንደሚናገሩት፤ የትራምፕ አስተዳደር በግድቡ ዙርያ ለግብፅ የወገነውም በዚህ ምክንያት ነው።ትራምፕ ሱዳንን በተመለከተ የደረሱበት ውሳኔ የአረቡን አገራት ከእስራኤል ጋር ለማስስማት የሚያደርጉት ጥረት አንድ አካል ነው።ሱዳን ከእስራኤል ጋር ስምምነት ለማድረግ ወስናለች።በእርግጥ የአገሪቱ ተጠባባቂ የውጪ ጉዳይ ሚንስትር ውሳኔው ገና በሕግ አውጪ መጽደቅ እንዳለበት ቢናገሩም፤ ሱዳን እንደ ጎርጎሮሳውያኑ 1967 ላይ የአረብ ሊግ አገራት ውይይት ማስተናገዷ መዘንጋት የለበትም። በውይይቱ "ከእስራኤል ጋር መቼም ሰላም አይፈጠርም። መቼም ቢሆን ለእስራኤል እውቅና አይሰጥም። ድርድርም አይካሄድም" ተብሎም ነበር።ሱዳን ከእስራኤል ጋር ለመስማማት በመፍቀዷ ትራምፕ ሽብርን ከሚድፉ አገሮች ዝርዝር እንደሚያስወጧት ተናግረዋል። ይህም ለምጣኔ ሀብቷ ማገገም የሚረዳ ድጋፍ እንድታገኝ ያግዛታል።ትራምፕ በድጋሚ ከተመረጡ ኢትዮጵያ ግድቡን በተመለከተ ሱዳን እና ግብፅ ላላቸው ስጋት አንዳች መልስ እንድትሰጥ ጫና እንደሚያደርጉ ይጠበቃል።አጥኚው እንደሚሉት፤ ሱዳን ሽብርን ከሚደግፉ አገሮች ዝርዝር ከወጣች የትራምፕ አስተዳደር በምላሹ የሚጠብቀው ነገር አለ።"ከእስራኤል ጋር ስምምነት የመፍጠር ጉዳይ የሱዳን ማኅበረሰብን የከፋፈለ ነው። መንግሥት የራሱ የጸጥታ ጥያቄዎች እያሉበት ይህን ውሳኔ ማሳለፉ ችግር ሊያስከትል ይችላል" ብለዋል። ትራምፕ አፍሪካን በተመለከተ የሚያራምዱት ፖሊሲ፤ በአሜሪካ እና በቻይና መካከል 
የሚካሄድ \'አዲሱ ቀዝቃዛ ጦርነት\' ነው ሲል ረሺድ ይገልጸዋል።ለምሳሌ ቻይና ከግዛቷ ውጪ የመጀመሪያውን ወታደራዊ መቀመጫ የከፈተችው በጅቡቲ ነው። ማዕከሉ የሚገኘው አሜሪካ የሶማሊያ ታጣቂዎች ላይ የአየር ጥቃት ለመሰንዘር ያቋቋመችው ማዕከል አቅራቢያ ነው።በቅርቡ የአሜሪካ ተዋጊ ጀቶች ለማረፍ ሲሞክሩ፤ ቻይና የአሜሪካውያን ወታደሮችን እይታ የሚጋርድ መሣሪያ መሞከሯን ረሺድ ያጣቅሳል። "የትራምፕ አስተዳደር ጸረ ቻይና ፖሊስ ያራምዳል" የሚለው ተንታኙ ሁኔታው ለአፍሪካ ቀንድ አስቸጋሪ መሆኑንም ያስረዳል።ቻይና አፍሪካ ውስጥ ያላትን የንግድ የበላይነት ለመቀልበስ፤ የትራምፕ አስተዳደር \'ፕሮስፔሪቲ አፍሪካ ኢን 2018\' የተባለ ፖሊሲ ነድፏል።በአፍሪካ እና በአሜሪካ መካከል የሚካሄደውን ንግድ በእጥፍ የማሳደግ እቅድ አለ። አምና የአሜሪካ መንግሥት የንግድ ተቋሞች አፍሪካ ውስጥ እንዲሠሩ የገንዘብ ድጋፍ የሚሰጥበት አሠራር ዘርግቷል።ሙር እንደሚሉት፤ የአሜሪካ ድርጅቶች ከቻይና ተቋሞች ጋር መወዳደር አልቻልንም ብለው ቅሬታ ስላሰሙ የገንዘብ ድጋፍ ለመስጠት ተወስኗል። "የአይቲ ዘርፍ እንደ ማሳያ ቢወሰድ፤ 70 በመቶ የአፍሪካ ኢንፎርሜሽን ቴክኖሎጂ የተመሠረተው በቻይና ድርጅቶች ላይ ነው" ሲሉ ያብራራሉ። የትራምፕ አስተዳደር በ2025 የሚያበቃውን ከ30 በላይ የአፍሪካ አገሮች ተጠቃሚ እንዲሆኑበት ታስቦ በአሜሪካ ለአፍሪካውያን የተሰጠው ከታሪፍና ከቀረጥ ነፃ የገበያ ዕድል (አፍሪካ ግሮዝ ኤንድ ኦፖርቹኒቲ አክት-አጎዋ) የመሰረዝ እቅድ አለው። ለአፍሪካ ምርቶች የአሜሪካን ገበያ ክፍት የሚያደርገው ስምምነት የተፈረመው በቢል ክሊንተን ነበር።አሜሪካ አሁን ላይ ትኩረቷ የሁለትዮሽ የንግድ ስምምነት እንደሆነ ሙር ይናገራሉ። ለምሳሌ ከኬንያ ጋር ንግግር እየተካሄደ ነው።ኬንያ፤ የቻይና \'ቤልት ኤንድ ሮድ ኢኒሽየቲቭ\' አካል እንደሆነች ይታወቃል። ስምምነቱ ቻይናን ከአፍሪካ ጋር በንግድ የሚያስተሳስርና የቻይና ዓለም አቀፍ ተደማጭነት የሚያጎላ እንደሆነ አሜሪካ ታምናለች።ትራምፕ ከኬንያ ጋር በቀጥታ ከተስማሙ በኋላ ተመሳሳይ መንገድ ተጠቅመው ከሌሎች የአፍሪካ አገሮች ጋር የመሥራት ውጥን እንዳላቸው ሙር ይናገራሉ።ይህ የትራምፕ መንገድ፤ ከአፍሪካ ሕብረት የንድግና ኢንዱስትሪ ኮሚሽነር አልበርት ሙቻንጋን ሐሳብ ጋር ይጣረሳል።እሳቸው የአፍሪካ አገራት በተናጠል ሳይሆን በአንድነት ከአሜሪካ ጋር ስምምነት እንዲያደርጉ ይፈልጋሉ። ሙር እንደሚሉት፤ የአሜሪካ ውሳኔ የአፍሪካ ሕብረት የአህጉሪቱን ምጣኔ ሀብት ለማጣመር ከሚያደርገው ጥረት ጋር ይጣረሳል።ሕብረቱ፤ አፍሪካን የዓለም ትልቋ ነጻ የንግድ ቀጠና የማድረግ አላማ አለው።ትራምፕ ግን በጥምረት ከሚሠሩ ኃይሎች ጋር በጋራ ያለመደራደር አዝማሚያ ያሳያሉ ሲሉ አጥኚው ያክላሉ።የትራምፕ ተቀናቃኝ ጆ ባይደን ካሸነፉ የአፍሪካ ፖሊሲያቸው ምን እንደሚሆን እስካሁን አልገለጹም።"የባይደን አስተዳደር በኦባማ ጊዜ ወደነበረው ሂደት ሊመለስ ይችላል" ይላሉ ሙር። ', 'አዲስ አበባ፣ ጥር 2፣ 2013(ኤፍ ቢ ሲ) የጋምቤላ ክልል ወጣት የሴራ ፖለቲካ አራማጆችን በዝምታ አይመለከቱም ሲል የክልሉ ብልጽግና ፓርቲ ወጣቶች ሊግ ሰብሳቢ ወጣት ራች ጎች ገለጸ።የክልሉ የብልጽግና ፓርቲ ወጣቶች ሊግ የውይይት መድረክ ትናንት ተካሂዷል።ከአሁን በፊት በነበረው የፖለቲካ ሴራ ወጣቱም ሆነ መላው የክልሉ ህዝብ ተጠቃሚ ሳይሆን ቆይቷል ያለው ሰብሳቢው ይህንን የህዝብ ጥቅም የማያረጋግጥ የፖለቲካ ሴራ አካሄድ የክልሉ ወጣት እንደማይቀበለው ገልጿል።የክልሉ ህዝብ እኩል ተጠቃሚ የመሆን ዕድል ማግኘቱን አስታውሶ፤ “በቀጣይ የሴራ ፖለቲካ አራማጆችን 
ወጣቱ በዝምታ አይመለከትም” ብሏል።የሊጉ ምክትል ሰብሳቢ ወጣት ኡጁሉ ቢሩ በበኩሉ “ከአሁን በጎጥና በመንደር በመከፋፈል አንድነቱን ለመሸርሽር ሲሰራ ነበር” ብሏል።ህዝቡ ልዩነቶች እንዳማያስፈልጉ በመረዳቱ በክልሉ ሰላም መረጋገጡን ጠቅሶ፤ “በቀጣይ በሚስማሙና በሚያግባቡ ጎዳዮች ዙሪያ እንሰራለን” ሲል ተናግሯል።የመድረኩ ተሳታፊ ወጣቶችም ሀገርን ማልማትና ማሳደግ በሚያስችሉ ጉዳዮች ላይ ትኩረት ማድረግ እንደሚገባ በመግለጽ ሐሳብ አንስተዋል።ለዘንድሮ ምርጫ ሰላማዊ ሂደትና ለተጀመረው የብልጽግና ጉዞ ስኬታማነት የበኩላቸውን አስተዋጽኦ ለማበርከት ዝግጁ መሆናቸውንም አረጋግጠዋል።ከጽንፈኝነትና ከብሄርተኝነት አስተሳሰቦች በመውጣት መንግስት በጀመራቸው የሰላም፣ የዴምክራሲና የልማት ስራዎች በንቃት ለመሳተፍ ዝግጁ እንደሆኑ መግለፃቸውን ኢዜአ ዘግቧል።የክልሉ ብልጽግና ፓርቲ ጽህፈት ቤት ኃላፊ አቶ ላክደር ላክባክ ፤ በሀገሪቱ እየተካሄደ ያለውን ሁለንተናዊ ለውጥና የብልፅግና ጉዞ እውን ለማድረግ ወጣቱ ኃይል የማይተካ\xa0 ሚና አለው ብለዋል።ከፌስቡክ ገፃችን በተጨማሪ ወቅታዊ፣ ትኩስ እና የተሟሉ መረጃዎችን ለማግኘት፡-የፋና ድረ ገጽ ይጎብኙ፤ተንቀሳቃሽ ምስሎችን ለማግኘት የፋና ቴሌቪዥን የዩቲዩብ ቻናል ሰብስክራይብ ያድርጉፈጣን መረጃዎችን ለማግኘት ትክክለኛውን የፋና ቴሌግራም ቻናል ይቀላቀሉከዚህ በተጨማሪም በትዊተር ገጻችን ይወዳጁንዘወትር ከእኛ ጋር ስላሉ እናመሰግናለን!', ] query_embeddings = model.encode_query(queries) document_embeddings = model.encode_document(documents) print(query_embeddings.shape, document_embeddings.shape) # [1, 1024] [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(query_embeddings, document_embeddings) print(similarities) # tensor([[ 0.7659, -0.0879, 0.1750]]) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `dim_1024` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 1024 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7564 | | cosine_accuracy@3 | 0.8848 | | cosine_accuracy@5 | 0.9172 | | cosine_accuracy@10 | 0.9416 | | cosine_precision@1 | 0.7564 | | cosine_precision@3 | 0.2949 | | cosine_precision@5 | 0.1834 | | cosine_precision@10 | 0.0942 | | cosine_recall@1 | 0.7564 | | cosine_recall@3 | 0.8848 | | cosine_recall@5 | 0.9172 | | cosine_recall@10 | 0.9416 | | **cosine_ndcg@10** | **0.8547** | | cosine_mrr@10 | 0.8262 | | cosine_map@100 | 0.8283 | #### Information Retrieval * Dataset: `dim_256` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters: ```json { "truncate_dim": 256 } ``` | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7455 | | cosine_accuracy@3 | 0.8788 | | cosine_accuracy@5 | 0.9119 | | cosine_accuracy@10 | 0.9398 | | cosine_precision@1 | 0.7455 | | cosine_precision@3 | 0.2929 | | cosine_precision@5 | 0.1824 | | cosine_precision@10 | 0.094 | | cosine_recall@1 | 0.7455 | | cosine_recall@3 | 0.8788 | | cosine_recall@5 | 0.9119 | | cosine_recall@10 | 0.9398 | | **cosine_ndcg@10** | **0.8484** | | cosine_mrr@10 | 0.8184 | | cosine_map@100 | 0.8205 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? 
You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### amharic-passage-retrieval-dataset * Dataset: [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/Desalegnn/amharic-passage-retrieval-dataset) at [e7be243](https://huggingface.co/datasets/Desalegnn/amharic-passage-retrieval-dataset/tree/e7be2430fc785999074dee8dbac1c3e466449442) * Size: 40,237 training samples * Columns: <code>anchor</code> and <code>positive</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 23.09 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 76 tokens</li><li>mean: 507.11 tokens</li><li>max: 1024 tokens</li></ul> | * Samples: | anchor | positive | 
|:---------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>ሚንስትር ዴኤታ ወይዘሮ አለም-ፀሀይ የአርባ ምንጭ ሆስፒታልና የኮቪድ-19 ሕክምና ማዕከልን ጎበኙ</code> | <code>አዲስ አበባ፣ መስከረም 13፣ 2013 (ኤፍ.ቢ.ሲ) የጤና ሚኒስቴር ሚንስትር ዴኤታ ወይዘሮ አለምፀሀይ ጳውሎስ በደቡብ ክልል ጋሞ ዞን የአርባ ምንጭ ከተማ ሆስፒታል እና ጤና ጣቢያ ጎብኙ፡፡እንዲሁም በኮቪድ-19 የህክምና ማዕከል ተገኝተው ያለውን የስራ እንቅስቃሴ መመልከታቸውም ተገልጸል፡፡ሚኒስትር ዴኤታዋ በጉብኝቱ ወቅት የህክምና ተቋማቱ ለአካባቢ ነዋሪዎች እየሰጡ ያለውን ዘርፈ ብዙ አገልግሎት እና ለኮቪድ 19 ወረርሽኝ የመከላከልና የመቆጣጠር ምላሽ አሠጣጥ የሚበረታታና ውጤታማ እንደሆነ ተናግረዋል፡፡በዚህም ለማዕከሉ ሰራተኞች ምስጋናቸውን አቅርበዋል፡፡የተቋማቱ ስራ ኃላፊዎችም ከሚኒስትር ዴኤታዋ ጋር መወያየታቸው ተሰምቷል፡፡ኃላፊዎቹ አገልግሎታቸውን በተሟላ መንገድ ለመስራት አያስችሉንም ያሏቸውን ጉድለቶች አንስተው ውይይት አድረገውባቸዋል፡፡የህክምና ተቋማቱ ያሉበት የስራ አፈጻጸም የሚበረታታ ቢሆንም ለተሻለ ስራ መነሳትና የጤና አገልግሎቱን ይበልጥ ማሻሻል ያስፈልጋል ሲሉ ሚኒስትር ዴኤታዋ ማሳሰባቸውን ከሚኒስቴሩ ያገኘነው መረጃ ያመለክታል፡፡</code> | | <code>መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠየቁ</code> | <code>መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠይቀዋል፡፡የሰላም ሚኒስቴር ከሳይንስና ከፍተኛ ትምህርት ሚኒስቴርና የኢትዮጵያ መምህራን ማህበር ጋር 
በመተባበር ያዘጋጁት ሀገር አቀፍ መምህራን የሰላም ውይይት መድረክ በአዲስ አበባ እየተካሄደ ነው፡፡በዚህ የውይይት መድረክ ላይ የሰላም ሚኒስትሯ ወይዘሮ ሙፈሪያት ካሚልን ጨምሮ ሌሎች ባለድርሻ  አካላት ተገኝተዋል፡፡ውይይቱ “ሰላምና ሀገር ወዳድ መምህራኖች ፤ ሰላምና ሀገር ወዳድ ተማሪዎችን ያፈራሉ” በሚል መሪ ቃል እየተካሄደ የሚገኝ ሲሆን መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠይቀዋል፡፡በውይይቱ ንግግር ያደረጉት የሰላም ሚኒስትር ወይዘሮ ሙፈሪያት ካሚል መምህራን ትውልድን መቅረጽ ካላቸው እድል አንፃር ሰላምን በመስበክ በኩል ከፍተኛ አስተዋጽኦ ሊያበርክቱ ይገባል ብለዋል፡፡ሀገራዊ ግንባታ ትምህርትና የተሟላ ስብዕና የሚጠይቅ በመሆኑም ለማህበረሰብ ስብዕናና የበለጸገ ትውልድን በመፍጠር ረገድ የመምህራን ሚና ክፍተኛ መሆኑንም ተናግረዋል።ትምህርት ቤቶች የሰላም ማዕድ ይሆኑ ዘንድም መምህራን እያከናዎኑት ያለውን ትውልድን የመቅረጽ ተግባር አጠናክረው መቀጠል እንዳለባቸውም ወይዘሮ ሙፈሪያት አሳስበዋል፡፡     በውይይቱ ላይ አስተያየት የሰጡት መምህራን በበኩላቸው ሰላም ሁሉንም የሚመለከት ጉዳይ በመሆኑ ሰላምን በመስበክና በማረጋገጥ ረገድ ከመንግስት ጋር በመሆን የሚጠበቅባቸውን ኃላፊነት እንደሚወጡ ገልጸዋል፡፡በተለይም የስነ ዜጋ፣ ስነ ምግባርና የታሪክ ትምህርት መምህራን ለተማሪዎች በሚያቀርቡት ትምህርት ላይ ሚዛናዊና ኃላፊነት በተሞላበት መንገድ ማቅረብ እንዳለባቸውም ጠቁመዋል፡፡  መምህሩ በስነ ምግባር አርዓያ በመሆን ሰላምና ግብ...</code> | | <code>የኢትዮጵያ እና ማሊ ከ17 አመት በታች ብሄራዊ ቡድኖች ጨዋታ እሁድ ይካሄዳል</code> | <code>በአዲስ አበባ ስታድየም እየተዘጋጀ የሚገኘው ብሄራዊ ቡድኑ በዛሬው የልምምድ መርሃ ግብር በእሁዱ ጨዋታ ላይ ቋሚ ተሰላፊዎች ይሆናሉ ተብለው የሚገመቱትን በመለየት የቅንጅትና ከርቀት አክርሮ የመምታት ልምምዶችን አከናውኗል፡፡ባለፉት ሶስት ቀናት በመጠነኛ ጉዳት በልምምድ ወቅት አቋርጠው ሲወጡ የነበሩት ሳሙኤል ተስፋዬ እና አቡበከር ነስሩ በዛሬው ልምምድ ከቡድኑ ጋር ሙሉ ልምምድ የሰሩ ሲሆን ሁሉም ተጨዋቾች በሙሉ ጤንነት ላይ ይገኛሉ፡፡ከ17 አመት ቡድናችን እሁድ ዕለት ከአፍሮ ፅዮን ጋር ባደረጉት የአቋም መፈተሻ ጨዋታ ላይ ከአፍሮፅዮን በኩል መልካም እንቅስቃሴ ያሳዩ 6 ተጨዋቾች ጥሪ ቀርቦላቸው በዛሬው ልምምድ ላይ ተገኝተው ከቡድኑ ጋር ልምምድ ያደረጉ ቢሆንም አሳማኝ እንቅስቃሴ ባለማሳየታቸው እንዲመለሱ ተደርጓል፡፡ቀይ ቀበሮዎቹ በእሁዱ ጨዋታ በባማኮ የደረሰባቸውን የ2-0 ሽንፈት ቀልብሰው ወደ ማዳጋስካር የአፍሪካ ከ17 አመት በታች ዋንጫ ለማምራት በከፍተኛ ተነሳሽነት እና ፍላጎት ዝግጅታቸውን በማከናወን ላይ እንደሚገኙ ለመታዘብ ችለናል፡፡በኢትዮጵያ እና ማሊ መካከል የሚደረገው ጨዋታ እሁድ መስከረም 22 ቀን 2009 በአዲስ አበባ ስታድየም 10:00 ላይ የሚካሄድ ሲሆን ጨዋታው የሚካሄድበት የአዲስ አበባ ስታድየም ሜዳን ምቹ ለማድረግ የሚያስችሉ ስራዎች እየተከናወኑ ይገኛሉ፡፡የእሁዱ ተጋጣሚያችን የማሊ ከ17 አመት በታች ብሄራዊ ቡድን አርብ አዲስ አበባ ይገባል፡፡ ጨዋታውን የሚመሩት አራቱም ዳኞች ከኒጀር ፤ ኮሚሽነሩ ደግሞ ከዩጋንዳ እንደተመደቡም ታውቋል፡፡</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { 
"loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 1024, 256 ], "matryoshka_weights": [ 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 32 - `gradient_accumulation_steps`: 8 - `learning_rate`: 2e-05 - `num_train_epochs`: 4 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `fp16`: True - `load_best_model_at_end`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 8 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: 
False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `parallelism_config`: None - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `hub_revision`: None - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `liger_kernel_config`: None - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: 
no_duplicates - `multi_dataset_batch_sampler`: proportional - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | Training Loss | dim_1024_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | |:-------:|:--------:|:-------------:|:-----------------------:|:----------------------:| | -1 | -1 | - | 0.7570 | 0.7425 | | 1.0 | 315 | 0.0758 | 0.8321 | 0.8217 | | 2.0 | 630 | 0.0258 | 0.8394 | 0.8319 | | 3.0 | 945 | 0.0121 | 0.8510 | 0.8441 | | **4.0** | **1260** | **0.0081** | **0.8547** | **0.8484** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.12.11 - Sentence Transformers: 5.1.0 - Transformers: 4.56.2 - PyTorch: 2.8.0+cu126 - Accelerate: 1.10.1 - Datasets: 4.1.1 - Tokenizers: 0.22.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, 
primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
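Because this model was trained with `MatryoshkaLoss` over dims 1024 and 256, its embeddings can be truncated to the smaller dimension and re-normalized with only a small quality drop (dim_256 NDCG@10 of 0.8484 vs. 0.8547 at full size in the logs above). A minimal numpy sketch of that truncate-and-renormalize step — illustrative only, it does not load the model itself:

```python
import numpy as np

def truncate_and_normalize(emb, dim):
    """Matryoshka-style truncation: keep the first `dim` components, then re-normalize to unit length."""
    t = emb[..., :dim]
    return t / np.linalg.norm(t, axis=-1, keepdims=True)

# Stand-in for model.encode(...) output: two unit-norm 1024-dim embeddings.
rng = np.random.default_rng(0)
full = rng.normal(size=(2, 1024))
full /= np.linalg.norm(full, axis=-1, keepdims=True)

small = truncate_and_normalize(full, 256)
print(small.shape)  # (2, 256)
```

In practice, `SentenceTransformer(..., truncate_dim=256)` performs this truncation for you at encode time.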
haihp02/afc60b79-e08e-4342-8a90-82dd7cf22ce4
haihp02
2025-09-22T12:15:55Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-22T11:20:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
wls04/qwen_full_lora_sft
wls04
2025-09-22T12:15:55Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:Qwen/Qwen2.5-1.5B", "llama-factory", "lora", "transformers", "text-generation", "conversational", "base_model:Qwen/Qwen2.5-1.5B", "license:other", "region:us" ]
text-generation
2025-09-22T12:15:49Z
--- library_name: peft license: other base_model: Qwen/Qwen2.5-1.5B tags: - base_model:adapter:Qwen/Qwen2.5-1.5B - llama-factory - lora - transformers pipeline_tag: text-generation model-index: - name: sft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sft This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the gsm8k_sharegpt dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results ### Framework versions - PEFT 0.17.1 - Transformers 4.56.1 - Pytorch 2.8.0+cu128 - Datasets 4.0.0 - Tokenizers 0.22.1
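The `lr_scheduler_type: cosine` with `lr_scheduler_warmup_ratio: 0.1` above means the learning rate ramps linearly from 0 to the peak of 1e-4 over the first 10% of steps, then decays along a cosine curve toward 0. A minimal sketch of that schedule (an approximation of the shape produced by `transformers`' `get_cosine_schedule_with_warmup`, not its exact implementation):

```python
import math

def cosine_lr(step, total_steps, base_lr=1e-4, warmup_ratio=0.1):
    """Linear warmup for the first warmup_ratio of steps, then cosine decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# Peak LR is reached right at the end of warmup; it decays to ~0 at the last step.
total = 1000
print(cosine_lr(0, total))    # 0.0
print(cosine_lr(100, total))  # 0.0001 (peak, end of warmup)
print(cosine_lr(1000, total)) # ~0.0
```

Note also that the effective batch size is train_batch_size (8) × gradient_accumulation_steps (8) = 64, matching the `total_train_batch_size` reported above.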
MrOceanMan/q-FrozenLake-v1-4x4-noSlippery
MrOceanMan
2025-09-22T12:15:40Z
0
0
null
[ "FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-09-22T11:29:07Z
--- tags: - FrozenLake-v1-4x4 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4 type: FrozenLake-v1-4x4 metrics: - type: mean_reward value: 0.60 +/- 0.49 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="MrOceanMan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
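At evaluation time a Q-learning checkpoint like this is used greedily: in each state, take the action with the highest Q-value. A minimal sketch of that step — the tiny Q-table below is illustrative, not the trained one (FrozenLake-v1 4x4 has 16 states and 4 actions):

```python
import numpy as np

def greedy_action(qtable, state):
    """Exploit only: pick the action with the largest Q-value in this state."""
    return int(np.argmax(qtable[state]))

# Illustrative 2-state, 2-action Q-table.
q = np.array([[0.1, 0.9],
              [0.5, 0.2]])
print(greedy_action(q, 0))  # 1
print(greedy_action(q, 1))  # 0
```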