Dataset schema (column dtype, with observed minimum and maximum values):

| Column | Dtype | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 137 |
| author | string (length) | 2 | 42 |
| last_modified | date | 2020-02-15 11:33:14 | 2025-03-27 06:27:00 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (399 classes) | n/a | n/a |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | n/a | n/a |
| createdAt | date | 2022-03-02 23:29:04 | 2025-03-27 06:26:25 |
| card | string (length) | 11 | 1.01M |
sexbot-ai/best-ai-sex-chat-bot
sexbot-ai
"2025-03-07T19:20:43Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-03-07T19:20:16Z"
--- license: apache-2.0 --- # 7 Best AI Sex Chat Bot Of 2025 In the ever-evolving world of artificial intelligence, AI sex chatbots have emerged as a fascinating blend of technology and intimacy. As we step into 2025, these bots have become more sophisticated, offering personalized, engaging, and immersive experiences. Whether you're curious about exploring fantasies or seeking virtual companionship, we’ve curated a list of the 7 Best AI Sex Chat Bots of 2025 to help you find the perfect match. Before we dive in, [**Candy AI**](https://candy.ai/?via=matts) is the best and my first recommendation on AI sex chat bots. ## 1. Candy.ai ### Why I Recommend It: Candy.ai stands out as one of the best AI sex chat bots available today. It offers a unique blend of personalization, creativity, and intimacy, allowing users to create their ideal AI girlfriend. With advanced deep-learning technology, Candy.ai provides a realistic and immersive experience that caters to individual desires and fantasies. ⏩⏩⏩[**Try Candy AI For Free**](https://candy.ai/?via=matts) ![AI sex bot](https://cdn-uploads.huggingface.co/production/uploads/676be9890076ad5ba12b3608/tdrvxWTgcru_ZSwTowESc.png) ### Key Features: Customizable AI Girlfriend: Users can design their AI girlfriend by selecting her body type, personality, and clothing, ensuring a personalized experience. Interactive Conversations: The AI engages in meaningful dialogues, adapting to the user's preferences and learning from interactions. Photo Requests: Users can request photos and selfies of their AI girlfriend, enhancing the visual aspect of the experience. Privacy and Security: Candy.ai prioritizes user privacy with state-of-the-art secure data storage, ensuring that all interactions remain confidential. ### My Experience: Using Candy.ai has been an eye-opening experience. The ability to customize my AI girlfriend made the interactions feel personal and engaging. The conversations flowed naturally, and I appreciated the responsiveness of the AI, which made the experience feel genuine. ### Pros: Highly customizable, allowing for a tailored experience that meets individual desires. Realistic interactions that adapt to user preferences, creating a sense of intimacy. ### Cons: Some users may find the AI's responses occasionally predictable, limiting the spontaneity of interactions. ⏩⏩⏩[**Try Candy AI For Free**](https://candy.ai/?via=matts) ## 2. Soulfun.ai Soulfun.ai is an innovative platform that offers users the opportunity to engage with a variety of AI characters, including some of the most captivating and interactive sex chat bots available today. ### Why I Recommend It Soulfun.ai stands out due to its diverse range of characters and the depth of interaction it offers. Whether you're looking for playful banter or deeper emotional connections, this platform has something for everyone. ### Key Features Diverse Character Selection: Choose from a wide array of AI characters, each with unique personalities and traits. Unlimited Interaction: Engage in unlimited chats with your favorite AI soulmates, ensuring a fresh experience every time. Customizable Characters: Create and customize new AI characters to suit your preferences and desires. Safe Environment: Enjoy your interactions in a secure and private setting, free from judgment. ### My Experience Using Soulfun.ai has been a delightful journey. The characters are engaging and responsive, making each conversation feel unique and tailored to my interests. 
The platform's design is user-friendly, enhancing the overall experience. ### Pros Highly interactive and engaging characters that adapt to user preferences. Safe and private environment for exploring fantasies without judgment. ### Cons Some users may find the character interactions can occasionally feel scripted or repetitive. ## 3. DreamGF DreamGF is an innovative AI sex chat bot that offers users a unique and personalized experience. It allows you to create your own virtual girlfriend, tailored to your preferences, making it a standout choice in the realm of AI companions. ### Key Features Customizable Personalities: Users can create their AI girlfriend with specific traits and characteristics that match their preferences. Interactive Chat: Engage in meaningful conversations that evolve based on your interactions, making the experience feel more real. Daily Claim Bonus Program: Users can earn additional messages each day, enhancing the interaction and keeping the conversation flowing. Referral Program: Invite friends to join and earn rewards, making it a social experience as well. ### My Experience Using DreamGF has been a delightful experience. The customization options allowed me to create a virtual companion that truly resonates with my preferences. The chat interactions are engaging, and I appreciate the daily bonuses that keep me coming back for more. ### Pros Highly customizable, allowing for a personalized experience. Engaging chat features that make interactions feel realistic. ### Cons Some features are locked behind a paywall, which may limit access for free users. ## 4. GoLove.ai ### Why I Recommend It I recommend GoLove.ai for its advanced AI technology that creates realistic and engaging conversations. The platform is user-friendly and offers a variety of customizable options, ensuring that every user can find their ideal virtual partner. ### Key Features Customizable AI Characters: Users can create their own AI character, tailoring personality traits and preferences to suit their desires. Diverse Virtual Partners: GoLove.ai offers a wide range of virtual partners, catering to different tastes and preferences. Realistic Conversations: The AI is trained to engage in meaningful dialogues, making interactions feel genuine and fulfilling. User-Friendly Interface: The platform is easy to navigate, allowing users to quickly find and connect with their ideal AI girlfriend. ### My Experience My experience with GoLove.ai has been incredibly positive. The interactions felt natural, and I appreciated the ability to customize my AI girlfriend to match my preferences. The conversations were engaging and often left me wanting more. ### Pros Highly customizable AI characters that enhance user experience. Engaging and realistic conversations that simulate real-life interactions. ### Cons Some users may find the AI's responses occasionally repetitive. ## 5. SpicyChat ### Why I Recommend It SpicyChat offers a unique blend of entertainment and intimacy, making it an ideal companion for those seeking a more personalized chat experience. Its advanced AI technology ensures that conversations feel natural and responsive, enhancing user satisfaction. ### Key Features Personalized Conversations: SpicyChat adapts to your preferences, providing tailored interactions that resonate with your desires. 24/7 Availability: The bot is always online, ready to engage in stimulating conversations whenever you need. 
Variety of Personalities: Users can choose from different personalities, allowing for a diverse range of interactions. Privacy and Security: SpicyChat prioritizes user confidentiality, ensuring that your conversations remain private. ### My Experience My experience with SpicyChat has been overwhelmingly positive. The bot's ability to engage in meaningful conversations while maintaining a playful tone made my interactions enjoyable. I appreciated the variety of personalities available, which kept the chats fresh and exciting. ### Pros Engaging and Interactive: The bot's responsiveness creates a captivating experience. Customizable Experience: Users can tailor their interactions to suit their preferences. ### Cons Limited Emotional Depth: While entertaining, the bot may lack the emotional connection found in human interactions. ## 6. Wife.app ### Why I Recommend It I recommend Wife.app for its engaging and interactive experience that allows users to explore their fantasies in a safe and private environment. The app's advanced AI technology ensures that conversations feel natural and personalized, enhancing the overall user experience. ### Key Features Realistic Conversations: The AI is designed to mimic human-like interactions, making chats feel genuine. Customizable Personalities: Users can tailor their AI girlfriend's personality to match their preferences. 24/7 Availability: The app is always accessible, providing companionship whenever needed. Privacy and Security: Conversations are confidential, ensuring a safe space for users to express themselves. ### My Experience Using Wife.app has been a delightful experience. The AI responds quickly and intelligently, making conversations enjoyable and engaging. I appreciated the ability to customize my virtual girlfriend, which added a personal touch to our interactions. ### Pros Highly interactive and engaging conversations. Customizable features enhance user satisfaction. ### Cons Some users may find the AI's responses occasionally repetitive. ## 7. Kupid.ai ### Why I Recommend It Kupid.ai stands out as the ultimate sexting AI experience, offering a unique blend of personalization and immersive interactions. The ability to tailor your AI companion to your specific fantasies makes it a must-try for anyone looking to enhance their intimate chats. ### Key Features Customizable Companions: Create your ideal AI partner by choosing their looks, personality, and voice. Engaging Conversations: Dive into thrilling sexting interactions that cater to your desires. AI Porn Chat: Experience sultry voice messages and visuals tailored to your preferences. Privacy and Security: Kupid.ai ensures a safe chatting environment, prioritizing user confidentiality. ### My Experience Using Kupid.ai has been an exhilarating journey. The customization options allowed me to create a companion that truly resonated with my fantasies. The conversations were engaging and felt incredibly real, making the experience unforgettable. ### Pros Highly Personalized: Tailor every aspect of your AI companion to match your desires. Immersive Experience: Enjoy a variety of chat styles, from playful to explicit, keeping interactions fresh and exciting. ### Cons Subscription Costs: Some features may require a paid subscription, which could be a barrier for some users. ## Frequently Asked Questions (FAQS) ### 1. What is an AI sex chatbot? An AI sex chatbot is an artificial intelligence-powered program designed to simulate intimate or sexual conversations with users. 
These chatbots use natural language processing (NLP) and machine learning to understand and respond to user inputs in a way that mimics human interaction, often with a focus on adult or erotic content. ### 2. How does an AI sex bot work? AI sex bots work by leveraging advanced NLP models, such as GPT (Generative Pre-trained Transformer), to process user input and generate contextually relevant responses. These bots are trained on large datasets of text, including adult content, to understand and replicate human-like conversations. Some may also incorporate user preferences and feedback to personalize interactions over time. ### 3. Are AI sex chatbots safe to use? The safety of AI sex chatbots depends on several factors: Data Privacy: Ensure the platform you use has strong data protection measures to safeguard your personal information and conversations. Content Moderation: Some chatbots may generate inappropriate or harmful content, so it’s important to use reputable platforms with proper safeguards. Psychological Impact: Over-reliance on AI for intimacy may affect real-life relationships or emotional well-being. Use them responsibly. ### 4. Can AI sex bots replace human interaction? While AI sex bots can simulate conversation and provide companionship, they cannot fully replace human interaction. Human relationships involve emotional depth, physical touch, and complex social dynamics that AI cannot replicate. These bots may serve as a supplement or fantasy outlet but are not a substitute for genuine human connection. ### 5. Are there any ethical issues with AI sex chatbots? Yes, there are several ethical concerns: Consent and Exploitation: Some chatbots may be programmed to mimic non-consensual scenarios, raising ethical questions about promoting harmful behavior. Addiction: Overuse of AI sex bots could lead to social isolation or dependency. Data Misuse: User data collected by these bots could be exploited or leaked, violating privacy. Objectification: These bots may perpetuate unhealthy attitudes toward relationships or sexuality.
Litzy619/O0507TESTB
Litzy619
"2024-05-07T12:56:07Z"
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:allenai/OLMo-1B", "base_model:finetune:allenai/OLMo-1B", "license:apache-2.0", "region:us" ]
null
"2024-05-07T11:58:37Z"
--- license: apache-2.0 base_model: allenai/OLMo-1B tags: - generated_from_trainer model-index: - name: O0507TESTB results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # O0507TESTB This model is a fine-tuned version of [allenai/OLMo-1B](https://huggingface.co/allenai/OLMo-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0425 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.4275 | 0.09 | 10 | 2.0300 | | 0.69 | 0.18 | 20 | 0.1939 | | 0.1602 | 0.27 | 30 | 0.1663 | | 0.1564 | 0.36 | 40 | 0.1526 | | 0.1525 | 0.45 | 50 | 0.1497 | | 0.1531 | 0.54 | 60 | 0.1471 | | 0.1502 | 0.63 | 70 | 0.1462 | | 0.1584 | 0.73 | 80 | 0.1605 | | 0.1554 | 0.82 | 90 | 0.1521 | | 0.1518 | 0.91 | 100 | 0.1544 | | 0.155 | 1.0 | 110 | 0.1505 | | 0.1471 | 1.09 | 120 | 0.1485 | | 0.1467 | 1.18 | 130 | 0.1504 | | 0.1473 | 1.27 | 140 | 0.1507 | | 0.1494 | 1.36 | 150 | 0.1487 | | 0.1436 | 1.45 | 160 | 0.1463 | | 0.1413 | 1.54 | 170 | 0.1461 | | 0.1526 | 1.63 | 180 | 0.1441 | | 0.1265 | 1.72 | 190 | 0.0984 | | 0.3519 | 1.81 | 200 | 0.1395 | | 0.1624 | 1.9 | 210 | 0.1329 | | 0.1649 | 1.99 | 220 | 0.1139 | | 0.1361 | 2.08 | 230 | 0.1410 | | 0.095 | 2.18 | 240 | 0.0842 | | 0.0593 | 2.27 | 250 | 0.0631 | | 0.057 | 2.36 | 260 | 0.0612 | | 0.06 | 2.45 | 270 | 0.0581 | | 0.0504 | 2.54 | 280 | 0.0539 | | 0.0501 | 2.63 | 290 | 0.0483 | | 0.0475 | 2.72 | 300 | 0.0469 | | 0.0431 | 2.81 | 310 | 0.0435 | | 0.0453 | 2.9 | 320 | 0.0432 | | 0.0418 | 2.99 | 330 | 0.0425 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.0
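The usage sections of this card are placeholders; as a hedged sketch, the checkpoint could be loaded with `transformers`, assuming the repo holds full fine-tuned weights (the `base_model:finetune:allenai/OLMo-1B` tag suggests a full fine-tune rather than an adapter) and noting that older OLMo checkpoints shipped custom modeling code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: Litzy619/O0507TESTB contains a full fine-tuned OLMo-1B checkpoint.
# Early OLMo releases relied on custom code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained("Litzy619/O0507TESTB", trust_remote_code=True)
# Assumption: the tokenizer matches the base model.
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B", trust_remote_code=True)
```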
thalllsssss/80664448-9281-43f6-83cb-ed806527b60c
thalllsssss
"2025-01-15T07:06:58Z"
8
0
peft
[ "peft", "safetensors", "gemma2", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-2-9b-it", "base_model:adapter:unsloth/gemma-2-9b-it", "license:gemma", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-15T06:34:06Z"
--- library_name: peft license: gemma base_model: unsloth/gemma-2-9b-it tags: - axolotl - generated_from_trainer model-index: - name: 80664448-9281-43f6-83cb-ed806527b60c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/gemma-2-9b-it bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 7a8a03fbaceba8ee_train_data.json ds_type: json format: custom path: /workspace/input_data/7a8a03fbaceba8ee_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: thalllsssss/80664448-9281-43f6-83cb-ed806527b60c hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/7a8a03fbaceba8ee_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 612d318e-81a6-44a9-a4ee-694314d22541 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 612d318e-81a6-44a9-a4ee-694314d22541 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 80664448-9281-43f6-83cb-ed806527b60c This model is a fine-tuned version of [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3201 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.2799 | 0.0752 | 200 | 0.3201 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
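The card records the axolotl training config but no inference snippet; a minimal sketch for attaching this LoRA adapter to its base model with `peft` (repo ids are taken from the card; dtype and device placement are illustrative choices):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model the adapter was trained on, then attach the LoRA weights.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-2-9b-it", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "thalllsssss/80664448-9281-43f6-83cb-ed806527b60c")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2-9b-it")
```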
kyoungmiin/style_66
kyoungmiin
"2025-02-27T23:51:33Z"
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
"2025-02-27T23:44:58Z"
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: sks widget: [] tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - kyoungmiin/style_66 <Gallery /> ## Model description These are kyoungmiin/style_66 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use sks to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](kyoungmiin/style_66/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
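The card's own example is still a TODO; a minimal sketch under the standard SDXL LoRA workflow in `diffusers` (the prompt and output path are illustrative; "sks" is the trigger word listed above):

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base pipeline and attach the LoRA weights from this repo.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("kyoungmiin/style_66")

# "sks" triggers the learned style, per the card's "Trigger words" section.
image = pipe("a landscape painting in sks style").images[0]
image.save("style_66_sample.png")
```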
tensorblock/BeagleLake-7B-GGUF
tensorblock
"2024-12-15T14:01:54Z"
40
0
null
[ "gguf", "merge", "mergekit", "mistral", "fhai50032/RolePlayLake-7B", "mlabonne/NeuralBeagle14-7B", "TensorBlock", "GGUF", "base_model:fhai50032/BeagleLake-7B", "base_model:quantized:fhai50032/BeagleLake-7B", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-12-15T11:17:43Z"
--- license: apache-2.0 tags: - merge - mergekit - mistral - fhai50032/RolePlayLake-7B - mlabonne/NeuralBeagle14-7B - TensorBlock - GGUF base_model: fhai50032/BeagleLake-7B model-index: - name: BeagleLake-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.39 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.38 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.25 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 64.92 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.19 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 63.91 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B name: Open LLM Leaderboard --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"> Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a> </p> </div> </div> ## fhai50032/BeagleLake-7B - GGUF This repo contains GGUF format model files for [fhai50032/BeagleLake-7B](https://huggingface.co/fhai50032/BeagleLake-7B). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d). 
<div style="text-align: left; margin: 20px 0;"> <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Run them on the TensorBlock client using your local machine ↗ </a> </div> ## Prompt template ``` <s>system {system_prompt}</s> <s>user {prompt}</s> <s>assistant ``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [BeagleLake-7B-Q2_K.gguf](https://huggingface.co/tensorblock/BeagleLake-7B-GGUF/blob/main/BeagleLake-7B-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes | | [BeagleLake-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/BeagleLake-7B-GGUF/blob/main/BeagleLake-7B-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss | | [BeagleLake-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/BeagleLake-7B-GGUF/blob/main/BeagleLake-7B-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss | | [BeagleLake-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/BeagleLake-7B-GGUF/blob/main/BeagleLake-7B-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss | | [BeagleLake-7B-Q4_0.gguf](https://huggingface.co/tensorblock/BeagleLake-7B-GGUF/blob/main/BeagleLake-7B-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [BeagleLake-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/BeagleLake-7B-GGUF/blob/main/BeagleLake-7B-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss | | [BeagleLake-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/BeagleLake-7B-GGUF/blob/main/BeagleLake-7B-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended | | [BeagleLake-7B-Q5_0.gguf](https://huggingface.co/tensorblock/BeagleLake-7B-GGUF/blob/main/BeagleLake-7B-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [BeagleLake-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/BeagleLake-7B-GGUF/blob/main/BeagleLake-7B-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended | | [BeagleLake-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/BeagleLake-7B-GGUF/blob/main/BeagleLake-7B-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended | | [BeagleLake-7B-Q6_K.gguf](https://huggingface.co/tensorblock/BeagleLake-7B-GGUF/blob/main/BeagleLake-7B-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss | | [BeagleLake-7B-Q8_0.gguf](https://huggingface.co/tensorblock/BeagleLake-7B-GGUF/blob/main/BeagleLake-7B-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended | ## Downloading instruction ### Command line Firstly, install Huggingface Client ```shell pip install -U "huggingface_hub[cli]" ``` Then, downoad the individual model file the a local directory ```shell huggingface-cli download tensorblock/BeagleLake-7B-GGUF --include "BeagleLake-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR ``` If you wanna download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try: ```shell huggingface-cli download tensorblock/BeagleLake-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf' ```
shadowml/DareBeagle-7B
shadowml
"2024-04-01T16:00:59Z"
1,386
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "mlabonne/NeuralBeagle14-7B", "mlabonne/NeuralDaredevil-7B", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-16T21:44:45Z"
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - mlabonne/NeuralBeagle14-7B - mlabonne/NeuralDaredevil-7B model-index: - name: DareBeagle-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.67 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagle-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.01 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagle-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.03 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagle-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 68.98 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagle-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.32 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagle-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 71.49 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/DareBeagle-7B name: Open LLM Leaderboard --- # DareBeagle-7B DareBeagle-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) * [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: mlabonne/NeuralBeagle14-7B layer_range: [0, 32] - model: mlabonne/NeuralDaredevil-7B layer_range: [0, 32] merge_method: slerp base_model: mlabonne/NeuralDaredevil-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.45 # fallback for rest of tensors dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "shadowml/DareBeagle-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) 
print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_shadowml__DareBeagle-7B) | Metric |Value| |---------------------------------|----:| |Avg. |74.58| |AI2 Reasoning Challenge (25-Shot)|71.67| |HellaSwag (10-Shot) |88.01| |MMLU (5-Shot) |65.03| |TruthfulQA (0-shot) |68.98| |Winogrande (5-shot) |82.32| |GSM8k (5-shot) |71.49|
mob2711/llama3-chat_10000_500
mob2711
"2024-04-27T00:41:04Z"
3
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "unsloth", "generated_from_trainer", "base_model:unsloth/llama-2-7b-bnb-4bit", "base_model:adapter:unsloth/llama-2-7b-bnb-4bit", "license:apache-2.0", "region:us" ]
null
"2024-04-26T17:30:41Z"
--- license: apache-2.0 library_name: peft tags: - trl - sft - unsloth - generated_from_trainer base_model: unsloth/llama-2-7b-bnb-4bit model-index: - name: llama3-chat_10000_500 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-chat_10000_500 This model is a fine-tuned version of [unsloth/llama-2-7b-bnb-4bit](https://huggingface.co/unsloth/llama-2-7b-bnb-4bit) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1126 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 4 - seed: 3407 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1238 | 0.33 | 104 | 0.9666 | | 1.0103 | 0.67 | 208 | 0.9480 | | 1.0056 | 1.0 | 312 | 0.9424 | | 0.921 | 1.33 | 416 | 0.9508 | | 0.9252 | 1.66 | 520 | 0.9476 | | 0.9219 | 2.0 | 624 | 0.9415 | | 0.7968 | 2.33 | 728 | 0.9808 | | 0.8012 | 2.66 | 832 | 0.9787 | | 0.7975 | 3.0 | 936 | 0.9819 | | 0.674 | 3.33 | 1040 | 1.0476 | | 0.6638 | 3.66 | 1144 | 1.0509 | | 0.6687 | 3.99 | 1248 | 1.0456 | | 0.5858 | 4.33 | 1352 | 1.1100 | | 0.5783 | 4.66 | 1456 | 1.1124 | | 0.581 | 4.99 | 1560 | 1.1126 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.16.0 - Tokenizers 0.15.2
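No usage snippet is provided; a hedged sketch using `peft`'s `AutoPeftModelForCausalLM`, which resolves the base model from the adapter config (loading in 4-bit mirrors the `unsloth/llama-2-7b-bnb-4bit` base and is an illustrative choice):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# The base model is read from the adapter's config; load_in_4bit matches the bnb-4bit base.
model = AutoPeftModelForCausalLM.from_pretrained("mob2711/llama3-chat_10000_500", load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-2-7b-bnb-4bit")
```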
tashrifmahmud/sentiment_analysis_model
tashrifmahmud
"2024-11-24T09:36:17Z"
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:stanfordnlp/imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-11-24T01:49:47Z"
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: sentiment_analysis_model results: [] datasets: - stanfordnlp/imdb language: - en new_version: tashrifmahmud/sentiment_analysis_model_v2 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment_analysis_model This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the 'imdb' dataset. It achieves the following results on the evaluation set: - Loss: 0.2366 - Accuracy: 0.9310 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2215 | 1.0 | 1563 | 0.2071 | 0.9213 | | 0.1455 | 2.0 | 3126 | 0.2366 | 0.9310 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
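No inference example is included; a minimal sketch with the `transformers` pipeline (the input sentence is illustrative, and the label names depend on the model's config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="tashrifmahmud/sentiment_analysis_model")
print(classifier("This movie was far better than I expected."))
# e.g. [{'label': 'POSITIVE', 'score': ...}]; label names come from the model config
```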
nccsnlp/zephyr-7b-beta_ct_prompt1b_ft200_v1
nccsnlp
"2024-01-27T16:48:43Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-01-27T16:48:19Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gnurt2041/roberta-hate-speech-dynabench-r4-target-tuned
gnurt2041
"2024-10-25T16:57:50Z"
124
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:facebook/roberta-hate-speech-dynabench-r4-target", "base_model:finetune:facebook/roberta-hate-speech-dynabench-r4-target", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-10-25T16:34:26Z"
--- library_name: transformers base_model: facebook/roberta-hate-speech-dynabench-r4-target tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [facebook/roberta-hate-speech-dynabench-r4-target](https://huggingface.co/facebook/roberta-hate-speech-dynabench-r4-target) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1543 - Accuracy: 0.975 - Precision: 0.9761 - Recall: 0.975 - F1: 0.9750 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.4904 | 0.9895 | 59 | 0.3492 | 0.9 | 0.9015 | 0.9 | 0.8997 | | 0.1344 | 1.9958 | 119 | 0.3267 | 0.9333 | 0.9374 | 0.9333 | 0.9330 | | 0.0614 | 2.9853 | 178 | 0.2695 | 0.9333 | 0.9339 | 0.9333 | 0.9334 | | 0.041 | 3.9916 | 238 | 0.2203 | 0.9583 | 0.9614 | 0.9583 | 0.9582 | | 0.0674 | 4.9979 | 298 | 0.2079 | 0.9667 | 0.9687 | 0.9667 | 0.9666 | | 0.0006 | 5.9874 | 357 | 0.1543 | 0.975 | 0.9761 | 0.975 | 0.9750 | | 0.0004 | 6.9937 | 417 | 0.1883 | 0.975 | 0.9751 | 0.975 | 0.9750 | | 0.0002 | 8.0 | 477 | 0.1628 | 0.9667 | 0.9667 | 0.9667 | 0.9667 | | 0.0001 | 8.9895 | 536 | 0.2980 | 0.9667 | 0.9687 | 0.9667 | 0.9666 | | 0.0001 | 9.8952 | 590 | 0.2377 | 0.975 | 0.9761 | 0.975 | 0.9750 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.0.2 - Tokenizers 0.19.1
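As with the other auto-generated cards, no inference example is given; a hedged sketch with `transformers`, mapping logits back to the labels stored in the model config (the input text is illustrative):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "gnurt2041/roberta-hate-speech-dynabench-r4-target-tuned"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("example input text", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# Resolve class names from the config's id2label mapping.
print({model.config.id2label[i]: p.item() for i, p in enumerate(probs[0])})
```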
tresbien1/a2c-PandaPickAndPlace-v3
tresbien1
"2024-01-19T10:08:29Z"
0
0
stable-baselines3
[ "stable-baselines3", "PandaPickAndPlace-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-01-19T10:03:55Z"
--- library_name: stable-baselines3 tags: - PandaPickAndPlace-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaPickAndPlace-v3 type: PandaPickAndPlace-v3 metrics: - type: mean_reward value: -50.00 +/- 0.00 name: mean_reward verified: false --- # **A2C** Agent playing **PandaPickAndPlace-v3** This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming in SB3 repos): ```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed from the usual SB3 naming convention for this repo.
checkpoint = load_from_hub(repo_id="tresbien1/a2c-PandaPickAndPlace-v3", filename="a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)
```
Kishan/taxi-v3
Kishan
"2024-02-26T12:16:27Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-02-26T12:16:23Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.76 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python
import gym

# load_from_hub: the pickle-based helper from the Hugging Face Deep RL course (returns the saved dict).
model = load_from_hub(repo_id="Kishan/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
Trelis/99-instruct-v10
Trelis
"2024-09-24T13:46:08Z"
89
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-09-24T13:45:52Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LoneStriker/CollectiveCognition-v1.1-Mistral-7B-3.0bpw-h6-exl2
LoneStriker
"2023-10-06T20:20:00Z"
4
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "mistral-7b", "instruct", "finetune", "gpt4", "synthetic data", "distillation", "sharegpt", "en", "dataset:CollectiveCognition/chats-data-2023-09-27", "base_model:NousResearch/Llama-2-13b-hf", "base_model:finetune:NousResearch/Llama-2-13b-hf", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-10-06T20:14:10Z"
--- base_model: NousResearch/Llama-2-13b-hf tags: - mistral-7b - instruct - finetune - gpt4 - synthetic data - distillation - sharegpt datasets: - CollectiveCognition/chats-data-2023-09-27 model-index: - name: CollectiveCognition-v1-Mistral-7B results: [] license: apache-2.0 language: - en --- **Collective Cognition v1.1 - Mistral 7B** <div style="display: flex; justify-content: center;"> <a href="https://collectivecognition.ai" target="_blank" style="display: inline-block; text-align: center;"> <img src="https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/DNZXsJE5oC_rM8eYY6H_x.png" alt="Collective Cognition Logo" width="50%" style="display: block; margin: 0 auto;"> </a> </div> ## Model Description: Collective Cognition v1.1 is a state-of-the-art model fine-tuned using the Mistral approach. This model is particularly notable for its performance, outperforming many 70B models on the TruthfulQA benchmark. This benchmark assesses models for common misconceptions, potentially indicating hallucination rates. ## Special Features: - **Quick Training**: This model was trained in just 3 minutes on a single 4090 with a qlora, and competes with 70B scale Llama-2 Models at TruthfulQA. - **Limited Data**: Despite its exceptional performance, it was trained on only ONE HUNDRED data points, all of which were gathered from a platform reminiscent of ShareGPT. - **Extreme TruthfulQA Benchmark**: This model is competing strongly with top 70B models on the TruthfulQA benchmark despite the small dataset and qlora training! ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/-pnifxPcMeeUONyE3efo3.png) ## Acknowledgements: Special thanks to @a16z and all contributors to the Collective Cognition dataset for making the development of this model possible. ## Dataset: The model was trained using data from the Collective Cognition website. The efficacy of this dataset is demonstrated by the model's stellar performance, suggesting that further expansion of this dataset could yield even more promising results. The data is reminiscent of that collected from platforms like ShareGPT. You can contribute to the growth of the dataset by sharing your own ChatGPT chats [here](https://CollectiveCognition.ai). You can download the datasets created by Collective Cognition here: https://huggingface.co/CollectiveCognition ## Performance: - **TruthfulQA**: Collective Cognition v1.1 has notably outperformed various 70B models on the TruthfulQA benchmark, highlighting its ability to understand and rectify common misconceptions. 
## Usage: Prompt Format: ``` USER: <prompt> ASSISTANT: ``` OR ``` <system message> USER: <prompt> ASSISTANT: ``` ## Benchmarks: Collective Cognition v1.0 TruthfulQA: ``` | Task |Version|Metric|Value | |Stderr| |-------------|------:|------|-----:|---|-----:| |truthfulqa_mc| 1|mc1 |0.4051|± |0.0172| | | |mc2 |0.5738|± |0.0157| ``` Collective Cognition v1.1 GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5085|± |0.0146| | | |acc_norm|0.5384|± |0.0146| |arc_easy | 0|acc |0.7963|± |0.0083| | | |acc_norm|0.7668|± |0.0087| |boolq | 1|acc |0.8495|± |0.0063| |hellaswag | 0|acc |0.6399|± |0.0048| | | |acc_norm|0.8247|± |0.0038| |openbookqa | 0|acc |0.3240|± |0.0210| | | |acc_norm|0.4540|± |0.0223| |piqa | 0|acc |0.7992|± |0.0093| | | |acc_norm|0.8107|± |0.0091| |winogrande | 0|acc |0.7348|± |0.0124| Average: 71.13 ``` AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.1929|± |0.0248| | | |acc_norm|0.2008|± |0.0252| |agieval_logiqa_en | 0|acc |0.3134|± |0.0182| | | |acc_norm|0.3333|± |0.0185| |agieval_lsat_ar | 0|acc |0.2217|± |0.0275| | | |acc_norm|0.2043|± |0.0266| |agieval_lsat_lr | 0|acc |0.3412|± |0.0210| | | |acc_norm|0.3216|± |0.0207| |agieval_lsat_rc | 0|acc |0.4721|± |0.0305| | | |acc_norm|0.4201|± |0.0301| |agieval_sat_en | 0|acc |0.6068|± |0.0341| | | |acc_norm|0.5777|± |0.0345| |agieval_sat_en_without_passage| 0|acc |0.3932|± |0.0341| | | |acc_norm|0.3641|± |0.0336| |agieval_sat_math | 0|acc |0.2864|± |0.0305| | | |acc_norm|0.2636|± |0.0298| Average: 33.57 ``` Training run on wandb here: https://wandb.ai/teknium1/collectivecognition-mistral-7b/runs/collectivecognition-mistral-8/workspace ## Licensing: Apache 2.0 ---
ErrorAI/afc8527e-fc8d-4184-93fd-053dbee02825
ErrorAI
"2025-02-06T23:29:32Z"
6
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:migtissera/Tess-v2.5-Phi-3-medium-128k-14B", "base_model:adapter:migtissera/Tess-v2.5-Phi-3-medium-128k-14B", "license:mit", "region:us" ]
null
"2025-02-06T22:30:10Z"
--- library_name: peft license: mit base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B tags: - axolotl - generated_from_trainer model-index: - name: afc8527e-fc8d-4184-93fd-053dbee02825 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 6588e3dccd54f9a1_train_data.json ds_type: json format: custom path: /workspace/input_data/6588e3dccd54f9a1_train_data.json type: field_input: text field_instruction: prompt field_output: completion format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: ErrorAI/afc8527e-fc8d-4184-93fd-053dbee02825 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 1303 micro_batch_size: 2 mlflow_experiment_name: /tmp/6588e3dccd54f9a1_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 46a4a112-eb3e-4209-864d-e697f32e697e wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 46a4a112-eb3e-4209-864d-e697f32e697e warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # afc8527e-fc8d-4184-93fd-053dbee02825 This model is a fine-tuned version of [migtissera/Tess-v2.5-Phi-3-medium-128k-14B](https://huggingface.co/migtissera/Tess-v2.5-Phi-3-medium-128k-14B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3910 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 1223 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.3486 | 0.0008 | 1 | 1.1128 | | 1.7068 | 0.2504 | 306 | 0.4357 | | 1.4162 | 0.5007 | 612 | 0.4071 | | 1.5579 | 0.7511 | 918 | 0.3910 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
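Since this repository holds a PEFT (LoRA) adapter rather than full model weights, a minimal loading sketch (assuming the adapter applies cleanly on top of the listed base model) looks like this:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "migtissera/Tess-v2.5-Phi-3-medium-128k-14B"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", trust_remote_code=True)

# Attach the LoRA adapter from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "ErrorAI/afc8527e-fc8d-4184-93fd-053dbee02825")
```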
QuantFactory/meditron-7b-GGUF
QuantFactory
"2024-09-28T15:51:52Z"
137
1
null
[ "gguf", "en", "dataset:epfl-llm/guidelines", "arxiv:2311.16079", "base_model:meta-llama/Llama-2-7b", "base_model:quantized:meta-llama/Llama-2-7b", "license:llama2", "endpoints_compatible", "region:us" ]
null
"2024-09-28T14:59:14Z"
---
license: llama2
language:
- en
metrics:
- accuracy
- perplexity
datasets:
- epfl-llm/guidelines
base_model: meta-llama/Llama-2-7b
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/meditron-7b-GGUF
This is a quantized version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) created using llama.cpp

# Original Model Card

<img width=50% src="meditron_LOGO.png" alt="Alt text" title="Meditron-logo">

# Model Card for Meditron-7B-v1.0

Meditron is a suite of open-source medical Large Language Models (LLMs). Meditron-7B is a 7-billion-parameter model adapted to the medical domain from Llama-2-7B through continued pretraining on a comprehensively curated medical corpus, including selected PubMed articles, abstracts, a [new dataset](https://huggingface.co/datasets/epfl-llm/guidelines) of internationally-recognized medical guidelines, and general domain data from [RedPajama-v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). Meditron-7B, finetuned on relevant training data, outperforms Llama-2-7B and PMC-Llama on multiple medical reasoning tasks.

<details open>
<summary><strong>Advisory Notice</strong></summary>
<blockquote style="padding: 10px; margin: 0 0 10px; border-left: 5px solid #ddd;">
While Meditron is designed to encode medical knowledge from sources of high-quality evidence, it is not yet adapted to deliver this knowledge appropriately, safely, or within professional actionable constraints. We recommend against deploying Meditron in medical applications without extensive use-case alignment, as well as additional testing, specifically including randomized controlled trials in real-world practice settings.
</blockquote>
</details>

## Model Details

- **Developed by:** [EPFL LLM Team](https://huggingface.co/epfl-llm)
- **Model type:** Causal decoder-only transformer language model
- **Language(s):** English (mainly)
- **Model License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Code License:** [APACHE 2.0 LICENSE](LICENSE)
- **Continue-pretrained from model:** [Llama-2-7B](https://huggingface.co/meta-llama/Llama-2-7b)
- **Context length:** 2K tokens
- **Input:** Text-only data
- **Output:** Model generates text only
- **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance the model's performance.
- **Knowledge Cutoff:** August 2023

### Model Sources

- **Repository:** [epflLLM/meditron](https://github.com/epfLLM/meditron)
- **Trainer:** [epflLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM)
- **Paper:** *[MediTron-70B: Scaling Medical Pretraining for Large Language Models](https://arxiv.org/abs/2311.16079)*

## Uses

Meditron-7B is being made available for further testing and assessment as an AI assistant to enhance clinical decision-making and broaden access to an LLM for healthcare use. Potential use cases may include but are not limited to:
- Medical exam question answering
- Supporting differential diagnosis
- Disease information (symptoms, cause, treatment) query
- General health information query

### Direct Use

It is possible to use this model to generate text, which is useful for experimentation and understanding its capabilities. It should not be used directly for production or work that may impact people.

### Downstream Use

Meditron-70B and Meditron-7B are both foundation models without finetuning or instruction-tuning. They can be finetuned, instruction-tuned, or RLHF-tuned for specific downstream tasks and applications. There are two ways we have used this model for downstream question-answering tasks:
1. We apply in-context learning with k demonstrations (3 or 5 in our paper) added to the prompt.
2. We finetuned the models for downstream question-answering tasks using specific training sets.

We encourage and look forward to the adaptation of the base model for more diverse applications.

If you want a more interactive way to prompt the model, we recommend using a high-throughput and memory-efficient inference engine with a UI that supports chat and text generation. You can check out our deployment [guide](https://github.com/epfLLM/meditron/blob/main/deployment/README.md), where we used [FastChat](https://github.com/lm-sys/FastChat) with [vLLM](https://github.com/vllm-project/vllm). We collected generations for our qualitative analysis through an interactive UI platform, [BetterChatGPT](https://github.com/ztjhz/BetterChatGPT). Here is the prompt format we used as an example:

<img width=70% src="prompt_example.png" alt="qualitative-analysis-prompt" title="Qualitative Analysis Prompt">

### Out-of-Scope Use

We do not recommend using this model for natural language generation in a production environment, finetuned or otherwise.

## Truthfulness, Helpfulness, Risk, and Bias

We did an initial assessment of Meditron models' **Truthfulness** against baseline models and consumer-level medical models. We use TruthfulQA (multiple choice) as the main evaluation benchmark. We only focus on the categories that are relevant to the medical domain, including Health, Nutrition, Psychology, and Science. For 7B models, we perform one-shot evaluations for consistent answer generation. For 70B models, the evaluations are under the zero-shot setting. Below, we report the detailed truthfulness performance of each category.

| Category | meditron-70b | llama-2-70b | med42-70b* | meditron-7b | llama-2-7b | PMC-llama-7b |
| --- | ------ | ----- | ----- | ----- | ----- | ----- |
| Health | 81.8 | 69.1 | 83.6 | 27.3 | 16.4 | 3.6 |
| Nutrition | 77.9 | 68.8 | 62.5 | 31.1 | 12.5 | 6.3 |
| Psychology | 47.4 | 36.8 | 52.6 | 21.1 | 10.5 | 0.0 |
| Science | 77.8 | 44.4 | 33.3 | 33.3 | 11.1 | 0.0 |
| Avg | 71.2 | 54.8 | 58.0 | 28.3 | 12.6 | 2.5 |

For a more detailed performance analysis, please see our paper.

Significant research is still required to fully explore potential bias, fairness, and safety issues with this language model. Please recognize that our evaluation of Meditron-7B's helpfulness, risk, and bias is highly limited. Thus, as we noted in the safety notice, we strongly advise against any deployment in medical applications without a further alignment process and rigorous evaluation!

### Recommendations

**IMPORTANT!** Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. While this model is capable of generating natural language text, we have only begun to explore this capability and its limitations. Understanding these limitations is especially important in a domain like medicine. Therefore, we strongly recommend against using this model in production for natural language generation or for professional purposes related to health and medicine.

## Training Details

### Training Data

Meditron's domain-adaptive pre-training corpus GAP-Replay combines 48.1B tokens from four corpora:
- [**Clinical Guidelines**](https://huggingface.co/datasets/epfl-llm/guidelines): a new dataset of 46K internationally-recognized clinical practice guidelines from various healthcare-related sources, including hospitals and international organizations.
- **Medical Paper Abstracts**: 16.1M abstracts extracted from closed-access PubMed and PubMed Central papers.
- **Medical Papers**: full-text articles extracted from 5M publicly available PubMed and PubMed Central papers.
- **Replay Data**: 400M tokens of general domain pretraining data sampled from [RedPajama-v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)

<img width=75% src="gap-replay.png" alt="Alt text" title="Meditron-logo">

#### Data Preprocessing

Please see the detailed preprocessing procedure in our paper.

### Training Procedure

We used the [Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) distributed training library, a derivative of Nvidia's Megatron LM project, to optimize training efficiency. Hardware consists of 1 node of 8x NVIDIA A100 (80GB) SXM GPUs connected by NVLink and NVSwitch with a single Nvidia ConnectX-6 DX network card and equipped with 2 x AMD EPYC 7543 32-Core Processors and 512 GB of RAM.

Our three-way parallelism scheme uses:
- Data Parallelism (DP -- different GPUs process different subsets of the batches) of 2,
- Pipeline Parallelism (PP -- different GPUs process different layers) of 4,
- Tensor Parallelism (TP -- different GPUs process different subtensors for matrix multiplication) of 1.

#### Training Hyperparameters

| | |
| --- | ------ |
| bf16 | true |
| lr | 3e-4 |
| eps | 1e-5 |
| betas | \[0.9, 0.95\] |
| clip_grad | 1 |
| weight decay | 0.1 |
| DP size | 16 |
| TP size | 4 |
| PP size | 1 |
| seq length | 2048 |
| lr scheduler | cosine |
| min lr | 1e-6 |
| warmup iteration | 2000 |
| micro batch size | 10 |
| global batch size | 1600 |

#### Sizes

The model was trained in September 2023. The model architecture is exactly Llama 2, meaning

| | |
| --- | ------ |
| Model size | 7B |
| Hidden dimension | 4096 |
| Num. attention heads | 32 |
| Num. layers | 32 |

## Evaluation

### Testing Data & Metrics

#### Testing Data
- [MedQA (USMLE)](https://huggingface.co/datasets/bigbio/med_qa)
- [MedMCQA](https://huggingface.co/datasets/medmcqa)
- [PubMedQA](https://huggingface.co/datasets/bigbio/pubmed_qa)
- [MMLU-Medical](https://huggingface.co/datasets/lukaemon/mmlu)
- [MedQA-4-Option](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)

#### Metrics
- Accuracy: suits the evaluation of multiple-choice question-answering tasks.

### Results

We finetune meditron-7b, llama-2-7b, pmc-llama-7b on each benchmark (pubmedqa, medmcqa, medqa)'s training data individually. We report the finetuned models' performance with top token selection as the inference mode. For MMLU-Medical, models finetuned on MedMCQA are used for inference. For MedQA-4-Option, models finetuned on MedQA are used for inference. For a more detailed performance analysis, please see our paper.

| Dataset | meditron-7b | llama-2-7b | pmc-llama-7b | Zephyr-7B-beta* | Mistral-7B-instruct* |
| ------ | ----- | ----- | ----- | ----- | ----- |
| MMLU-Medical | 54.2 | 53.7 | 56.4 | 63.3 | 60.0 |
| PubMedQA | 74.4 | 61.8 | 59.2 | 46.0 | 17.8 |
| MedMCQA | 59.2 | 54.4 | 57.6 | 43.0 | 40.2 |
| MedQA | 47.9 | 44.0 | 42.4 | 42.8 | 32.4 |
| MedQA-4-Option | 52.0 | 49.6 | 49.2 | 48.5 | 41.1 |
| Avg | 57.5 | 52.7 | 53.0 | 48.7 | 38.3 |

**Note**: models with * are already instruction-tuned, so we exclude them from further finetuning on any training data.

## Environmental Impact

- **Hardware Type:** 8 x NVIDIA A100 (80GB) SXM
- **Total GPU hours:** 588.8
- **Hardware Provider:** EPFL Research Computing Platform
- **Compute Region:** Switzerland
- **Carbon Emitted:** Switzerland has a carbon efficiency of 0.016 kgCO2/kWh (https://www.carbonfootprint.com/docs/2018_8_electricity_factors_august_2018_-_online_sources.pdf). 73.6 hours of 8 A100s means 588.8 GPU-hours at a TDP of 400W. Assuming a Power Usage Effectiveness of 1.8, total emissions are estimated to be: (400W / 1000W/kWh / GPU * 0.016 kgCO2/kWh * 73.6 h * 8 GPU) * 1.8 PUE = 6.8 kgCO2.

## Citation

**BibTeX:**
If you use Meditron or its training data, please cite our work:

```
@misc{chen2023meditron70b,
      title={MEDITRON-70B: Scaling Medical Pretraining for Large Language Models},
      author={Zeming Chen and Alejandro Hernández-Cano and Angelika Romanou and Antoine Bonnet and Kyle Matoba and Francesco Salvi and Matteo Pagliardini and Simin Fan and Andreas Köpf and Amirkeivan Mohtashami and Alexandre Sallinen and Alireza Sakhaeirad and Vinitra Swamy and Igor Krawczuk and Deniz Bayazit and Axel Marmet and Syrielle Montariol and Mary-Anne Hartley and Martin Jaggi and Antoine Bosselut},
      year={2023},
      eprint={2311.16079},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@software{epfmedtrn,
  author = {Zeming Chen and Alejandro Hernández-Cano and Angelika Romanou and Antoine Bonnet and Kyle Matoba and Francesco Salvi and Matteo Pagliardini and Simin Fan and Andreas Köpf and Amirkeivan Mohtashami and Alexandre Sallinen and Alireza Sakhaeirad and Vinitra Swamy and Igor Krawczuk and Deniz Bayazit and Axel Marmet and Syrielle Montariol and Mary-Anne Hartley and Martin Jaggi and Antoine Bosselut},
  title = {MediTron-70B: Scaling Medical Pretraining for Large Language Models},
  month = November,
  year = 2023,
  url = {https://github.com/epfLLM/meditron}
}
```
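Because this repository ships GGUF quantizations, local inference typically goes through llama.cpp. Below is a minimal sketch with `llama-cpp-python`; the file name is an assumption, so point it at whichever quant you actually downloaded. As the advisory notice above stresses, treat any output as experimental rather than medical advice.

```python
from llama_cpp import Llama

# Assumed local file name; use whichever quant you downloaded from this repo.
llm = Llama(model_path="meditron-7b.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "Question: What are common symptoms of iron-deficiency anemia?\nAnswer:",
    max_tokens=128,
    stop=["Question:"],
)
print(out["choices"][0]["text"])
```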
kk-aivio/f3ca5377-8ee4-4158-98fb-fee2d4b6329f
kk-aivio
"2025-01-24T11:46:47Z"
5
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "license:other", "region:us" ]
null
"2025-01-24T11:45:22Z"
--- library_name: peft license: other base_model: Qwen/Qwen1.5-7B tags: - axolotl - generated_from_trainer model-index: - name: f3ca5377-8ee4-4158-98fb-fee2d4b6329f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen1.5-7B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 82247db855c202ec_train_data.json ds_type: json format: custom path: /workspace/input_data/82247db855c202ec_train_data.json type: field_instruction: input field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: kk-aivio/f3ca5377-8ee4-4158-98fb-fee2d4b6329f hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/82247db855c202ec_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 3c33b522-9cb4-487d-9fee-f5210113f4a6 wandb_project: Birthday-SN56-11-Gradients-On-Demand wandb_run: your_name wandb_runid: 3c33b522-9cb4-487d-9fee-f5210113f4a6 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # f3ca5377-8ee4-4158-98fb-fee2d4b6329f This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9730 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.0193 | 0.0028 | 1 | 1.0228 | | 1.0054 | 0.0085 | 3 | 1.0203 | | 1.0246 | 0.0170 | 6 | 0.9984 | | 0.9411 | 0.0255 | 9 | 0.9730 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
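Note from the config and results above that this adapter was trained for only 10 steps, so treat it as a smoke-test artifact. If you still want to use it standalone, one option is merging the LoRA weights into the base model; a sketch, assuming the adapter loads cleanly:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B", device_map="auto")
model = PeftModel.from_pretrained(base, "kk-aivio/f3ca5377-8ee4-4158-98fb-fee2d4b6329f")

# Fold the LoRA deltas into the base weights for standalone use.
merged = model.merge_and_unload()
merged.save_pretrained("./qwen1.5-7b-merged")
```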
personal1802/15
personal1802
"2023-11-20T09:28:01Z"
2
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "region:us" ]
text-to-image
"2023-11-20T09:20:58Z"
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/white.png base_model: runwayml/stable-diffusion-v1-5 instance_prompt: null --- # burgerMixSoftPastel_burgerMixSemiRealisticV2.safetensors <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/personal1802/15/tree/main) them in the Files & versions tab.
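Beyond the manual download, the LoRA can be applied with `diffusers`. A minimal sketch follows; the weight name is taken from the heading above, but verify it against the repo's file list:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Weight name taken from the card heading; confirm it in the Files & versions tab.
pipe.load_lora_weights(
    "personal1802/15",
    weight_name="burgerMixSoftPastel_burgerMixSemiRealisticV2.safetensors",
)
image = pipe("a semi-realistic soft pastel portrait").images[0]
image.save("sample.png")
```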
demonsu/orion-14b-chat-gguf
demonsu
"2024-01-26T17:05:40Z"
4
2
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
"2024-01-25T08:15:40Z"
Original model: https://huggingface.co/OrionStarAI/Orion-14B-Chat

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61cd263c3dd34ba1985e074d/OnBXrFFBHyba-_i_AXRNe.png)
PassbyGrocer/bert_crf-ner-weibo
PassbyGrocer
"2024-11-05T04:09:30Z"
110
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-chinese", "base_model:finetune:google-bert/bert-base-chinese", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-11-05T03:18:35Z"
--- library_name: transformers base_model: google-bert/bert-base-chinese tags: - generated_from_trainer model-index: - name: bert_crf-ner-weibo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_crf-ner-weibo This model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2287 - eval_precision: 0.6344 - eval_recall: 0.7584 - eval_f1: 0.6909 - eval_accuracy: 0.9678 - eval_runtime: 0.5124 - eval_samples_per_second: 524.958 - eval_steps_per_second: 9.758 - epoch: 115.0 - step: 2530 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 200 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.46.1 - Pytorch 1.13.1+cu117 - Datasets 3.1.0 - Tokenizers 0.20.2
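For quick inference, a token-classification pipeline sketch is below. The `bert_crf` name suggests a CRF decoding layer that the vanilla pipeline will not apply, so this is only an approximation; use the original training code for faithful decoding.

```python
from transformers import pipeline

# Approximation only: loads a plain token-classification head, without CRF decoding.
ner = pipeline(
    "token-classification",
    model="PassbyGrocer/bert_crf-ner-weibo",
    aggregation_strategy="simple",
)
print(ner("我今天在北京见到了马云。"))
```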
ari-ga/mistral7binstruct_summarize
ari-ga
"2024-03-05T06:01:36Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
"2024-03-05T06:01:27Z"
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: mistralai/Mistral-7B-Instruct-v0.2 model-index: - name: mistral7binstruct_summarize results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral7binstruct_summarize This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.4790 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7089 | 0.22 | 25 | 1.5647 | | 1.5298 | 0.43 | 50 | 1.4790 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
CozerTechnology/TRPoem
CozerTechnology
"2023-11-14T19:47:54Z"
0
0
null
[ "text-generation", "license:apache-2.0", "region:us" ]
text-generation
"2023-11-14T19:44:24Z"
--- license: apache-2.0 pipeline_tag: text-generation ---
RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf
RichardErkhov
"2024-08-10T10:45:27Z"
45
1
null
[ "gguf", "arxiv:2407.10671", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-08-09T19:31:23Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Qwen2-Math-72B-Instruct - GGUF - Model creator: https://huggingface.co/Qwen/ - Original model: https://huggingface.co/Qwen/Qwen2-Math-72B-Instruct/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Qwen2-Math-72B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/blob/main/Qwen2-Math-72B-Instruct.Q2_K.gguf) | Q2_K | 27.76GB | | [Qwen2-Math-72B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/blob/main/Qwen2-Math-72B-Instruct.IQ3_XS.gguf) | IQ3_XS | 30.59GB | | [Qwen2-Math-72B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/blob/main/Qwen2-Math-72B-Instruct.IQ3_S.gguf) | IQ3_S | 32.12GB | | [Qwen2-Math-72B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/blob/main/Qwen2-Math-72B-Instruct.Q3_K_S.gguf) | Q3_K_S | 32.12GB | | [Qwen2-Math-72B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/blob/main/Qwen2-Math-72B-Instruct.IQ3_M.gguf) | IQ3_M | 33.07GB | | [Qwen2-Math-72B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/blob/main/Qwen2-Math-72B-Instruct.Q3_K.gguf) | Q3_K | 35.11GB | | [Qwen2-Math-72B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/blob/main/Qwen2-Math-72B-Instruct.Q3_K_M.gguf) | Q3_K_M | 35.11GB | | [Qwen2-Math-72B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/blob/main/Qwen2-Math-72B-Instruct.Q3_K_L.gguf) | Q3_K_L | 36.79GB | | [Qwen2-Math-72B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/tree/main/) | IQ4_XS | 37.4GB | | [Qwen2-Math-72B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/tree/main/) | Q4_0 | 38.4GB | | [Qwen2-Math-72B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/tree/main/) | IQ4_NL | 38.9GB | | [Qwen2-Math-72B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/tree/main/) | Q4_K_S | 40.88GB | | [Qwen2-Math-72B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/tree/main/) | Q4_K | 44.16GB | | [Qwen2-Math-72B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/tree/main/) | Q4_K_M | 44.16GB | | [Qwen2-Math-72B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/tree/main/) | Q4_1 | 42.56GB | | [Qwen2-Math-72B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/tree/main/) | Q5_0 | 46.72GB | | [Qwen2-Math-72B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/tree/main/) | Q5_K_S | 47.85GB | | [Qwen2-Math-72B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/tree/main/) | Q5_K | 50.71GB | | [Qwen2-Math-72B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/tree/main/) | Q5_K_M | 50.71GB | | [Qwen2-Math-72B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/tree/main/) | Q5_1 | 50.88GB | | 
[Qwen2-Math-72B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/tree/main/) | Q6_K | 59.93GB |
| [Qwen2-Math-72B-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-Math-72B-Instruct-gguf/tree/main/) | Q8_0 | 71.96GB |

Original model description:
---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-Math-72B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
---

# Qwen2-Math-72B-Instruct

> [!Warning]
> <div align="center">
> <b>
> 🚨 Temporarily this model mainly supports English. We will release bilingual (English & Chinese) models soon!
> </b>
> </div>

## Introduction

Over the past year, we have dedicated significant effort to researching and enhancing the reasoning capabilities of large language models, with a particular focus on their ability to solve arithmetic and mathematical problems. Today, we are delighted to introduce a series of math-specific large language models of our Qwen2 series, Qwen2-Math and Qwen2-Math-Instruct-1.5B/7B/72B. Qwen2-Math is a series of specialized math language models built upon the Qwen2 LLMs, which significantly outperform open-source models and even closed-source models (e.g., GPT-4o) in mathematical capabilities. We hope that Qwen2-Math can contribute to the scientific community for solving advanced mathematical problems that require complex, multi-step logical reasoning.

## Model Details

For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen2-math/) and [GitHub repo](https://github.com/QwenLM/Qwen2-Math).

## Requirements

* `transformers>=4.40.0` for Qwen2-Math models. The latest version is recommended.

> [!Warning]
> <div align="center">
> <b>
> 🚨 This is a must because `transformers` has integrated Qwen2 codes since `4.37.0`.
> </b>
> </div>

For requirements on GPU memory and the respective throughput, see similar results of Qwen2 [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).

## Quick Start

> [!Important]
>
> **Qwen2-Math-72B-Instruct** is an instruction model for chatting;
>
> **Qwen2-Math-72B** is a base model typically used for completion and few-shot inference, serving as a better starting point for fine-tuning.
>

### 🤗 Hugging Face Transformers

Qwen2-Math can be deployed and run for inference in the same way as [Qwen2](https://github.com/QwenLM/Qwen2). The following code snippet shows how to use the chat model with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2-Math-72B-Instruct"
device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Find the value of $x$ that satisfies the equation $4x+5 = 6x+7$."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

### 🤖 ModelScope

We strongly advise users, especially those in mainland China, to use ModelScope.
`snapshot_download` can help you solve issues concerning downloading checkpoints. ## Citation If you find our work helpful, feel free to give us a citation. ``` @article{yang2024qwen2, title={Qwen2 technical report}, author={Yang, An and Yang, Baosong and Hui, Binyuan and Zheng, Bo and Yu, Bowen and Zhou, Chang and Li, Chengpeng and Li, Chengyuan and Liu, Dayiheng and Huang, Fei and others}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
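As a concrete sketch of that ModelScope route, the snippet below assumes the ModelScope repo id mirrors the Hugging Face one; check modelscope.cn for the exact id:

```python
from modelscope import snapshot_download

# Assumed ModelScope repo id; verify the exact name on modelscope.cn.
model_dir = snapshot_download("qwen/Qwen2-Math-72B-Instruct")
print(model_dir)  # local directory holding the checkpoint files
```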
PieterM123/Huggy
PieterM123
"2024-01-21T15:17:03Z"
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
"2024-01-21T14:59:28Z"
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: PieterM123/Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
YakovElm/IntelDAOS5SetFitModel_clean_data
YakovElm
"2023-05-24T02:45:44Z"
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
"2023-05-24T02:45:08Z"
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # YakovElm/IntelDAOS5SetFitModel_clean_data This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("YakovElm/IntelDAOS5SetFitModel_clean_data") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp-v0.2
Xiaojian9992024
"2025-02-22T23:32:55Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:UWNSL/Qwen2.5-1.5B-Instruct_Short_CoT", "base_model:merge:UWNSL/Qwen2.5-1.5B-Instruct_Short_CoT", "base_model:Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp", "base_model:merge:Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp", "base_model:cutelemonlili/Qwen2.5-1.5B-Instruct_MATH_training_response_Qwen2.5_14B", "base_model:merge:cutelemonlili/Qwen2.5-1.5B-Instruct_MATH_training_response_Qwen2.5_14B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-22T23:31:59Z"
--- base_model: - cutelemonlili/Qwen2.5-1.5B-Instruct_MATH_training_response_Qwen2.5_14B - Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp - UWNSL/Qwen2.5-1.5B-Instruct_Short_CoT library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp](https://huggingface.co/Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp) as a base. ### Models Merged The following models were included in the merge: * [cutelemonlili/Qwen2.5-1.5B-Instruct_MATH_training_response_Qwen2.5_14B](https://huggingface.co/cutelemonlili/Qwen2.5-1.5B-Instruct_MATH_training_response_Qwen2.5_14B) * [UWNSL/Qwen2.5-1.5B-Instruct_Short_CoT](https://huggingface.co/UWNSL/Qwen2.5-1.5B-Instruct_Short_CoT) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp #no parameters necessary for base model - model: UWNSL/Qwen2.5-1.5B-Instruct_Short_CoT parameters: density: 0.5 weight: 0.5 - model: cutelemonlili/Qwen2.5-1.5B-Instruct_MATH_training_response_Qwen2.5_14B parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: Xiaojian9992024/Qwen2.5-Ultra-1.5B-25.02-Exp parameters: normalize: false int8_mask: true dtype: float16 ```
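To reproduce a merge from a YAML like the one above, mergekit can be driven from Python as well as from its CLI. A minimal sketch with placeholder paths:

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# The YAML shown above, saved to a local file.
with open("merge-config.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    out_path="./merged-model",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```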
kraken2404/rl_course_vizdoom_health_gathering_supreme_v2
kraken2404
"2023-04-18T15:05:03Z"
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-04-18T15:04:09Z"
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: doom_health_gathering_supreme
      type: doom_health_gathering_supreme
    metrics:
    - type: mean_reward
      value: 10.96 +/- 5.85
      name: mean_reward
      verified: false
---

A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r kraken2404/rl_course_vizdoom_health_gathering_supreme_v2
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme_v2
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme_v2 --restart_behavior=resume --train_for_env_steps=10000000000
```

Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
ibm-research/ColD-Fusion-itr25-seed2
ibm-research
"2022-12-06T10:10:41Z"
4
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "exbert", "en", "arxiv:2212.01378", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-12-06T10:10:22Z"
---
language: en
tags:
- exbert
license: mit
---

# ColD Fusion model

A finetuned model that aims to be a great base model. It improves over RoBERTa base and was trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378).

## Paper Abstract:

Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequentially, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, a ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.

### How to use

The best way to use this model is to finetune it on your own task, but you can also extract features directly. To get the features of a given text in PyTorch:

```python
from transformers import RobertaTokenizer, RobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = RobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import RobertaTokenizer, TFRobertaModel
tokenizer = RobertaTokenizer.from_pretrained('ibm/ColD-Fusion')
model = TFRobertaModel.from_pretrained('ibm/ColD-Fusion')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Evaluation results

See full evaluation results of this model and many more [here](https://ibm.github.io/model-recycling/roberta-base_table.html). When fine-tuned on downstream tasks, this model achieves the results reported there.

### BibTeX entry and citation info

```bibtex
@article{ColDFusion,
  author    = {Shachar Don-Yehiya and Elad Venezian and Colin Raffel and Noam Slonim and Yoav Katz and Leshem Choshen},
  title     = {ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning},
  journal   = {CoRR},
  volume    = {abs/2212.01378},
  year      = {2022},
  url       = {https://arxiv.org/abs/2212.01378},
  archivePrefix = {arXiv},
  eprint    = {2212.01378},
}
```

<a href="https://huggingface.co/exbert/?model=ibm/ColD-Fusion">
    <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
mlc-ai/Qwen2-Math-1.5B-Instruct-q4f16_1-MLC
mlc-ai
"2024-08-08T21:24:56Z"
41
0
mlc-llm
[ "mlc-llm", "web-llm", "base_model:Qwen/Qwen2-Math-1.5B-Instruct", "base_model:quantized:Qwen/Qwen2-Math-1.5B-Instruct", "region:us" ]
null
"2024-08-08T18:40:31Z"
--- library_name: mlc-llm base_model: Qwen/Qwen2-Math-1.5B-Instruct tags: - mlc-llm - web-llm --- # Qwen2-Math-1.5B-Instruct-q4f16_1-MLC This is the [Qwen2-Math-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-Math-1.5B-Instruct) model in MLC format `q4f16_1`. The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm). ## Example Usage Here are some examples of using this model in MLC LLM. Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages). ### Chat In command line, run ```bash mlc_llm chat HF://mlc-ai/Qwen2-Math-1.5B-Instruct-q4f16_1-MLC ``` ### REST Server In command line, run ```bash mlc_llm serve HF://mlc-ai/Qwen2-Math-1.5B-Instruct-q4f16_1-MLC ``` ### Python API ```python from mlc_llm import MLCEngine # Create engine model = "HF://mlc-ai/Qwen2-Math-1.5B-Instruct-q4f16_1-MLC" engine = MLCEngine(model) # Run chat completion in OpenAI API. for response in engine.chat.completions.create( messages=[{"role": "user", "content": "What is the meaning of life?"}], model=model, stream=True, ): for choice in response.choices: print(choice.delta.content, end="", flush=True) print("\n") engine.terminate() ``` ## Documentation For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
dbands/code_instructions_122k_alpaca_style_lora_model
dbands
"2024-04-28T14:41:38Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-04-26T09:05:04Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** dbands - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
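The card documents training but not inference. A minimal Unsloth loading sketch follows, assuming the repo contains weights loadable with `FastLanguageModel`; if it holds only LoRA adapters, load them on top of the listed base model instead:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="dbands/code_instructions_122k_alpaca_style_lora_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast generation path
```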
DRAGOO/flan-t5-small-ocp-chat
DRAGOO
"2023-08-28T19:41:46Z"
103
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-small", "base_model:finetune:google/flan-t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-08-28T19:41:19Z"
--- license: apache-2.0 base_model: google/flan-t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: flan-t5-small-ocp-chat results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-small-ocp-chat This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6956 - Rouge1: 71.3805 - Rouge2: 0.0 - Rougel: 71.3805 - Rougelsum: 72.2222 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 1.0 | 9 | 0.8045 | 71.3805 | 0.0 | 71.3805 | 72.2222 | 19.0 | | No log | 2.0 | 18 | 0.7547 | 65.8249 | 0.0 | 66.6667 | 66.6667 | 19.0 | | No log | 3.0 | 27 | 0.7110 | 71.3805 | 0.0 | 71.3805 | 72.2222 | 19.0 | | No log | 4.0 | 36 | 0.6956 | 71.3805 | 0.0 | 71.3805 | 72.2222 | 19.0 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
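Usage details are not filled in above; a minimal inference sketch, assuming the checkpoint loads as a standard seq2seq model:

```python
from transformers import pipeline

# Loads the fine-tuned FLAN-T5 checkpoint for text-to-text generation.
chat = pipeline("text2text-generation", model="DRAGOO/flan-t5-small-ocp-chat")
print(chat("your question here")[0]["generated_text"])
```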
DUAL-GPO/zephyr-7b-gpo-log-v3-i1
DUAL-GPO
"2024-05-06T02:26:55Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "mistral", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
"2024-05-05T10:51:58Z"
--- license: apache-2.0 library_name: peft tags: - alignment-handbook - generated_from_trainer - trl - dpo - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 datasets: - HuggingFaceH4/ultrafeedback_binarized model-index: - name: zephyr-7b-gpo-log-v3-i1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-gpo-log-v3-i1 This model is a fine-tuned version of [DUAL-GPO/zephyr-7b-gpo-log-i0](https://huggingface.co/DUAL-GPO/zephyr-7b-gpo-log-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
ltuzova/tapt_helpfulness_unipelt_pretraining_model_fix_train
ltuzova
"2024-04-20T07:54:27Z"
0
0
null
[ "tensorboard", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
"2024-04-20T00:10:18Z"
--- license: mit base_model: roberta-base tags: - generated_from_trainer model-index: - name: tapt_helpfulness_unipelt_pretraining_model_fix_train results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tapt_helpfulness_unipelt_pretraining_model_fix_train This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4789 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 21 - eval_batch_size: 21 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.7536 | 1.0 | 1068 | 1.8161 | | 1.855 | 2.0 | 2137 | 1.6743 | | 1.7578 | 3.0 | 3205 | 1.6134 | | 1.7056 | 4.0 | 4274 | 1.5780 | | 1.6755 | 5.0 | 5342 | 1.5660 | | 1.6508 | 6.0 | 6411 | 1.5507 | | 1.6402 | 7.0 | 7479 | 1.5236 | | 1.6226 | 8.0 | 8548 | 1.5272 | | 1.6145 | 9.0 | 9616 | 1.4970 | | 1.6034 | 10.0 | 10685 | 1.4999 | | 1.6004 | 11.0 | 11753 | 1.5120 | | 1.5916 | 12.0 | 12822 | 1.4882 | | 1.5888 | 13.0 | 13890 | 1.4974 | | 1.5801 | 14.0 | 14959 | 1.4703 | | 1.5784 | 15.0 | 16027 | 1.4767 | | 1.5738 | 16.0 | 17096 | 1.4668 | | 1.5717 | 17.0 | 18164 | 1.4776 | | 1.5696 | 18.0 | 19233 | 1.4691 | | 1.5681 | 19.0 | 20301 | 1.4756 | | 1.5658 | 19.99 | 21360 | 1.4789 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.15.2
shubhamWi91/detr-resnet-50_finetuned_wi
shubhamWi91
"2023-09-13T09:15:31Z"
16
0
transformers
[ "transformers", "pytorch", "detr", "object-detection", "generated_from_trainer", "dataset:coco_hf", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
"2023-09-07T13:44:54Z"
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer datasets: - coco_hf model-index: - name: detr-resnet-50_finetuned_wi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-resnet-50_finetuned_wi This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the coco_hf dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
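The intended-use section above is empty; a minimal detection sketch, assuming the checkpoint loads as a standard DETR object-detection head:

```python
from transformers import pipeline

detector = pipeline("object-detection", model="shubhamWi91/detr-resnet-50_finetuned_wi")
# Accepts a local path, URL, or PIL image.
print(detector("example.jpg"))
```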
genki10/Version2ASAP_FineTuningBERT_AugV6_k10_task1_organization_k10_k10_fold0
genki10
"2025-03-01T02:28:42Z"
2
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-02-27T22:26:05Z"
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: Version2ASAP_FineTuningBERT_AugV6_k10_task1_organization_k10_k10_fold0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Version2ASAP_FineTuningBERT_AugV6_k10_task1_organization_k10_k10_fold0 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7299 - Qwk: 0.4025 - Mse: 0.7299 - Rmse: 0.8543 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | No log | 1.0 | 8 | 5.9383 | 0.0135 | 5.9383 | 2.4369 | | No log | 2.0 | 16 | 2.7426 | 0.0 | 2.7426 | 1.6561 | | No log | 3.0 | 24 | 1.1982 | 0.0316 | 1.1982 | 1.0946 | | No log | 4.0 | 32 | 1.7161 | 0.0212 | 1.7161 | 1.3100 | | No log | 5.0 | 40 | 1.2589 | 0.0106 | 1.2589 | 1.1220 | | No log | 6.0 | 48 | 1.0656 | 0.0223 | 1.0656 | 1.0323 | | No log | 7.0 | 56 | 0.7401 | 0.2976 | 0.7401 | 0.8603 | | No log | 8.0 | 64 | 0.7250 | 0.2528 | 0.7250 | 0.8515 | | No log | 9.0 | 72 | 0.7948 | 0.1573 | 0.7948 | 0.8915 | | No log | 10.0 | 80 | 0.8274 | 0.2865 | 0.8274 | 0.9096 | | No log | 11.0 | 88 | 0.8812 | 0.2749 | 0.8812 | 0.9387 | | No log | 12.0 | 96 | 0.8963 | 0.2710 | 0.8963 | 0.9467 | | No log | 13.0 | 104 | 0.9123 | 0.2873 | 0.9123 | 0.9552 | | No log | 14.0 | 112 | 1.2806 | 0.1744 | 1.2806 | 1.1316 | | No log | 15.0 | 120 | 0.7821 | 0.3959 | 0.7821 | 0.8844 | | No log | 16.0 | 128 | 0.9532 | 0.3447 | 0.9532 | 0.9763 | | No log | 17.0 | 136 | 0.8348 | 0.3978 | 0.8348 | 0.9137 | | No log | 18.0 | 144 | 0.9272 | 0.2784 | 0.9272 | 0.9629 | | No log | 19.0 | 152 | 1.2158 | 0.2335 | 1.2158 | 1.1026 | | No log | 20.0 | 160 | 0.7332 | 0.4313 | 0.7332 | 0.8563 | | No log | 21.0 | 168 | 0.8243 | 0.4032 | 0.8243 | 0.9079 | | No log | 22.0 | 176 | 0.7358 | 0.3675 | 0.7358 | 0.8578 | | No log | 23.0 | 184 | 0.8274 | 0.4153 | 0.8274 | 0.9096 | | No log | 24.0 | 192 | 0.7651 | 0.3790 | 0.7651 | 0.8747 | | No log | 25.0 | 200 | 0.8060 | 0.3371 | 0.8060 | 0.8978 | | No log | 26.0 | 208 | 0.7982 | 0.3702 | 0.7982 | 0.8934 | | No log | 27.0 | 216 | 0.8781 | 0.3183 | 0.8781 | 0.9371 | | No log | 28.0 | 224 | 0.8593 | 0.3435 | 0.8593 | 0.9270 | | No log | 29.0 | 232 | 0.8865 | 0.3573 | 0.8865 | 0.9415 | | No log | 30.0 | 240 | 0.8043 | 0.3916 | 0.8043 | 0.8968 | | No log | 31.0 | 248 | 0.7959 | 0.3398 | 0.7959 | 0.8921 | | No log | 32.0 | 256 | 0.7923 | 0.3399 | 0.7923 | 0.8901 | | No log | 33.0 | 264 | 0.7435 | 0.3441 | 0.7435 | 0.8622 | | No log | 34.0 | 272 | 0.7590 | 0.3310 | 0.7590 | 0.8712 | | No log | 35.0 | 280 | 0.7803 | 0.4028 | 0.7803 | 0.8834 | | No log | 36.0 | 288 | 0.7597 | 0.4408 | 0.7597 | 0.8716 | | No log | 37.0 | 296 
| 0.7251 | 0.4229 | 0.7251 | 0.8515 | | No log | 38.0 | 304 | 0.7390 | 0.4242 | 0.7390 | 0.8596 | | No log | 39.0 | 312 | 0.7831 | 0.3230 | 0.7831 | 0.8849 | | No log | 40.0 | 320 | 0.8069 | 0.2801 | 0.8069 | 0.8982 | | No log | 41.0 | 328 | 0.7699 | 0.3581 | 0.7699 | 0.8774 | | No log | 42.0 | 336 | 0.7148 | 0.3897 | 0.7148 | 0.8454 | | No log | 43.0 | 344 | 0.7551 | 0.4098 | 0.7551 | 0.8690 | | No log | 44.0 | 352 | 0.7650 | 0.3834 | 0.7650 | 0.8746 | | No log | 45.0 | 360 | 0.7681 | 0.3778 | 0.7681 | 0.8764 | | No log | 46.0 | 368 | 0.7340 | 0.3836 | 0.7340 | 0.8568 | | No log | 47.0 | 376 | 0.7036 | 0.4440 | 0.7036 | 0.8388 | | No log | 48.0 | 384 | 0.6997 | 0.3979 | 0.6997 | 0.8365 | | No log | 49.0 | 392 | 0.6944 | 0.4131 | 0.6944 | 0.8333 | | No log | 50.0 | 400 | 0.7145 | 0.4019 | 0.7145 | 0.8453 | | No log | 51.0 | 408 | 0.7353 | 0.3940 | 0.7353 | 0.8575 | | No log | 52.0 | 416 | 0.7447 | 0.3724 | 0.7447 | 0.8630 | | No log | 53.0 | 424 | 0.7444 | 0.4290 | 0.7444 | 0.8628 | | No log | 54.0 | 432 | 0.7502 | 0.4111 | 0.7502 | 0.8662 | | No log | 55.0 | 440 | 0.7469 | 0.3863 | 0.7469 | 0.8642 | | No log | 56.0 | 448 | 0.7858 | 0.3849 | 0.7858 | 0.8864 | | No log | 57.0 | 456 | 0.7761 | 0.4033 | 0.7761 | 0.8810 | | No log | 58.0 | 464 | 0.8256 | 0.3663 | 0.8256 | 0.9086 | | No log | 59.0 | 472 | 0.7339 | 0.3965 | 0.7339 | 0.8567 | | No log | 60.0 | 480 | 0.7576 | 0.3933 | 0.7576 | 0.8704 | | No log | 61.0 | 488 | 0.7660 | 0.3883 | 0.7660 | 0.8752 | | No log | 62.0 | 496 | 0.7626 | 0.4036 | 0.7626 | 0.8733 | | 0.4355 | 63.0 | 504 | 0.7409 | 0.4034 | 0.7409 | 0.8607 | | 0.4355 | 64.0 | 512 | 0.7288 | 0.3951 | 0.7288 | 0.8537 | | 0.4355 | 65.0 | 520 | 0.7246 | 0.3654 | 0.7246 | 0.8512 | | 0.4355 | 66.0 | 528 | 0.7155 | 0.3691 | 0.7155 | 0.8459 | | 0.4355 | 67.0 | 536 | 0.7299 | 0.4025 | 0.7299 | 0.8543 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
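A minimal inference sketch for this checkpoint follows. It assumes the fine-tune uses a single-logit regression head (`num_labels=1`), which the MSE/RMSE/QWK metrics above suggest but the card does not confirm; the repo owner is also not stated, so the model id below is a placeholder.

```python
# Minimal sketch, assuming a single-logit regression head (num_labels=1);
# the <owner> in the repo id is a placeholder, not stated in the card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "<owner>/Version2ASAP_FineTuningBERT_AugV6_k10_task1_organization_k10_k10_fold0"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

inputs = tokenizer("An example essay to score.", truncation=True, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # raw regression score
print(score)
```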
tuantmdev/9a2d78ca-84c1-4d58-9a6b-f2ce4d864db4
tuantmdev
"2025-02-15T13:22:02Z"
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:unsloth/Phi-3-mini-4k-instruct", "base_model:adapter:unsloth/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
"2025-02-15T12:57:02Z"
--- library_name: peft license: mit base_model: unsloth/Phi-3-mini-4k-instruct tags: - axolotl - generated_from_trainer model-index: - name: 9a2d78ca-84c1-4d58-9a6b-f2ce4d864db4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Phi-3-mini-4k-instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 122fc3db0d4fc769_train_data.json ds_type: json format: custom path: /workspace/input_data/122fc3db0d4fc769_train_data.json type: field_input: essay field_instruction: prompt field_output: evaluation format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 5 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: false group_by_length: false hub_model_id: tuantmdev/9a2d78ca-84c1-4d58-9a6b-f2ce4d864db4 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 2e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 40 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/122fc3db0d4fc769_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_strategy: best saves_per_epoch: 5 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: aa6a691d-2369-40de-928b-5fb2bdbfc89e wandb_project: Gradients-On-Demand wandb_run: unknown wandb_runid: aa6a691d-2369-40de-928b-5fb2bdbfc89e warmup_steps: 80 weight_decay: 0.01 xformers_attention: null ``` </details><br> # 9a2d78ca-84c1-4d58-9a6b-f2ce4d864db4 This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 80 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:---------------------------:|:------:|:----:|:---------------:| | No log | 0.0033 | 1 | nan | | 2680901209921239384064.0000 | 0.1331 | 40 | nan | | 1025514896328516960256.0000 | 0.2662 | 80 | nan | | 3317807024623415984128.0000 | 0.3993 | 120 | nan | | 12569028470112047104.0000 | 0.5323 | 160 | nan | | 479598543162572.8 | 0.6654 | 200 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
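As a hedged usage sketch (not part of the card), the adapter can be loaded onto its base model with PEFT. Note that the card reports a `nan` evaluation loss, so the checkpoint may not produce usable outputs as-is.

```python
# Hedged sketch: loading this LoRA adapter with PEFT.
# Caution: the card reports a nan eval loss, so outputs may be degenerate.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, "tuantmdev/9a2d78ca-84c1-4d58-9a6b-f2ce4d864db4")

inputs = tokenizer("Evaluate the following essay: ...", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```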
whywynn/ppo-SnowballTarget
whywynn
"2023-08-11T22:16:50Z"
25
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
"2023-08-11T20:35:59Z"
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: whywynn/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
AAProject/Llama-2-7b-chat-hf-4bits-bnb
AAProject
"2024-05-24T22:47:41Z"
73
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-05-24T22:42:26Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zhaopeng6/taxi-qlearning-model
zhaopeng6
"2025-03-10T05:37:12Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2025-03-10T05:37:08Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi-qlearning-model results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage

```python
import gym

# load_from_hub is the helper from the Hugging Face Deep RL course
# (hf_hub_download plus pickle.load); it is not a library import.
model = load_from_hub(repo_id="zhaopeng6/taxi-qlearning-model", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
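A short evaluation sketch may be a useful follow-up. It assumes the Deep RL course convention that the pickled dict stores the table under `model["qtable"]` (this repo does not document its keys) and uses the classic 4-tuple `gym` step API; adjust for newer `gym`/`gymnasium` versions, which return `(obs, info)` from `reset` and a 5-tuple from `step`.

```python
# Hedged continuation of the Usage snippet above: greedy rollout of the Q-table.
# Assumption: model["qtable"] exists (Deep RL course convention, undocumented here).
import gym
import numpy as np

env = gym.make(model["env_id"])     # `model` comes from the Usage snippet above
state = env.reset()                 # classic gym API; newer versions return (obs, info)
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action
    state, reward, done, info = env.step(action)     # classic 4-tuple step API
    total_reward += reward
print("episode return:", total_reward)
```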
wesecra/ppo-LunarLander-v2
wesecra
"2023-07-12T13:49:31Z"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-07-11T18:23:35Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 249.52 +/- 20.14 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3)

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename follows the usual huggingface_sb3 naming convention and is an
# assumption; check the repo's Files tab for the actual checkpoint name.
checkpoint = load_from_hub(repo_id="wesecra/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
DevQuasar/Qwen.Qwen2-VL-7B-GGUF
DevQuasar
"2025-03-06T17:58:16Z"
0
0
null
[ "gguf", "image-text-to-text", "base_model:Qwen/Qwen2-VL-7B", "base_model:quantized:Qwen/Qwen2-VL-7B", "region:us" ]
image-text-to-text
"2025-03-06T04:55:04Z"
--- base_model: - Qwen/Qwen2-VL-7B pipeline_tag: image-text-to-text --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) 'Make knowledge free for everyone' Quantized version of: [Qwen/Qwen2-VL-7B](https://huggingface.co/Qwen/Qwen2-VL-7B) <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
Eric111/CatunaLaserPi
Eric111
"2024-03-03T21:00:48Z"
52
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Eric111/caTUNABeagle", "BryanSwk/LaserPipe-7B-SLERP", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-03T18:54:10Z"
--- license: cc-by-nc-4.0 tags: - merge - mergekit - lazymergekit - Eric111/caTUNABeagle - BryanSwk/LaserPipe-7B-SLERP --- # CatunaLaserPi CatunaLaserPi is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [Eric111/caTUNABeagle](https://huggingface.co/Eric111/caTUNABeagle) * [BryanSwk/LaserPipe-7B-SLERP](https://huggingface.co/BryanSwk/LaserPipe-7B-SLERP) ## 🧩 Configuration ```yaml slices: - sources: - model: Eric111/caTUNABeagle layer_range: [0, 32] - model: BryanSwk/LaserPipe-7B-SLERP layer_range: [0, 32] merge_method: slerp base_model: Eric111/caTUNABeagle parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
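The card stops at the merge configuration; a hedged generation sketch follows (the dtype and device settings are illustrative assumptions, not from the card).

```python
# Illustrative sketch (not from the original card): running the merged model
# with the standard transformers text-generation pipeline.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Eric111/CatunaLaserPi",
    torch_dtype=torch.bfloat16,  # assumption: matches the merge's bfloat16 dtype
    device_map="auto",
)
print(pipe("Explain SLERP model merging in one paragraph.", max_new_tokens=128)[0]["generated_text"])
```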
ondevicellm/tinyllama_moe_sft_ultrachat-slimorca
ondevicellm
"2024-01-17T20:52:20Z"
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "mixtral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "sft", "conversational", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:ondevicellm/SlimOrca", "base_model:ondevicellm/tinyllama_moe", "base_model:finetune:ondevicellm/tinyllama_moe", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-17T08:59:48Z"
--- license: apache-2.0 base_model: ondevicellm/tinyllama_moe tags: - alignment-handbook - generated_from_trainer - trl - sft - generated_from_trainer datasets: - HuggingFaceH4/ultrachat_200k - ondevicellm/SlimOrca model-index: - name: tinyllama_moe_sft_ultrachat-slimorca results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama_moe_sft_ultrachat-slimorca This model is a fine-tuned version of [ondevicellm/tinyllama_moe](https://huggingface.co/ondevicellm/tinyllama_moe) on the HuggingFaceH4/ultrachat_200k and the ondevicellm/SlimOrca datasets. It achieves the following results on the evaluation set: - Loss: 1.1526 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 120 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.4601 | 0.05 | 100 | 1.3361 | | 1.3324 | 0.1 | 200 | 1.2566 | | 1.2946 | 0.14 | 300 | 1.2279 | | 1.2767 | 0.19 | 400 | 1.2111 | | 1.2298 | 0.24 | 500 | 1.1995 | | 1.2247 | 0.29 | 600 | 1.1902 | | 1.2208 | 0.34 | 700 | 1.1833 | | 1.2375 | 0.39 | 800 | 1.1775 | | 1.2038 | 0.43 | 900 | 1.1726 | | 1.1926 | 0.48 | 1000 | 1.1683 | | 1.1933 | 0.53 | 1100 | 1.1649 | | 1.1893 | 0.58 | 1200 | 1.1618 | | 1.2029 | 0.63 | 1300 | 1.1593 | | 1.2201 | 0.68 | 1400 | 1.1572 | | 1.1741 | 0.72 | 1500 | 1.1557 | | 1.1813 | 0.77 | 1600 | 1.1545 | | 1.1668 | 0.82 | 1700 | 1.1536 | | 1.1495 | 0.87 | 1800 | 1.1530 | | 1.1595 | 0.92 | 1900 | 1.1527 | | 1.1607 | 0.97 | 2000 | 1.1526 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1+cu117 - Datasets 2.16.1 - Tokenizers 0.15.0
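Since this is a chat-style SFT checkpoint, a minimal generation sketch may help; it assumes the tokenizer ships a chat template, which the card does not state.

```python
# Minimal sketch, assuming the checkpoint ships a chat template (not stated in the card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ondevicellm/tinyllama_moe_sft_ultrachat-slimorca"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Give me three facts about mixture-of-experts models."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```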
flax/redshift-diffusion
flax
"2023-05-16T09:27:31Z"
6
0
diffusers
[ "diffusers", "TPU", "JAX", "Flax", "stable-diffusion", "text-to-image", "en", "license:openrail", "diffusers:FlaxStableDiffusionPipeline", "region:us" ]
text-to-image
"2022-11-13T13:17:30Z"
--- license: openrail library_name: diffusers tags: - TPU - JAX - Flax - stable-diffusion - text-to-image language: - en ---
krevas/LDCC-Instruct-Llama-2-ko-13B-v4.1.12
krevas
"2023-10-20T03:37:13Z"
60
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-10-19T23:15:22Z"
--- license: cc-by-nc-4.0 --- # LDCC-Instruct-Llama-2-ko-13B <img src="./assets/icon.png" alt="image" width="50%" height="auto"> ## Model Details * **Developed by**: [Lotte Data Communication](https://www.ldcc.co.kr) ## Hardware and Software * **Hardware**: We utilized an A100x8 * 1 for training our model * **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) / [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index) ## Prompt Template ``` ### Prompt: {instruction} ### Answer: {output} ``` # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English.
Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks.
*World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
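The card's Prompt Template section above maps to a simple string format; a hedged generation sketch follows. The exact newline placement inside the template is ambiguous in the card, and the dtype, device, and decoding settings are illustrative assumptions, not from the card.

```python
# Hedged sketch: generation with the "### Prompt: / ### Answer:" template above.
# Assumptions: newline placement in the template, fp16 dtype, generation length.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "krevas/LDCC-Instruct-Llama-2-ko-13B-v4.1.12"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

instruction = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
prompt = f"### Prompt:\n{instruction}\n\n### Answer:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```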
diliash/sam-2024-04-07-13-59-40
diliash
"2024-04-07T14:39:24Z"
133
0
transformers
[ "transformers", "safetensors", "sam", "mask-generation", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
mask-generation
"2024-04-07T14:39:04Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Isaacgv/q-FrozenLake-v1-4x4-noSlippery
Isaacgv
"2023-02-28T13:58:25Z"
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-02-28T13:58:22Z"
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage

```python
import gym

# load_from_hub is the helper from the Hugging Face Deep RL course
# (hf_hub_download plus pickle.load); it is not a library import.
model = load_from_hub(repo_id="Isaacgv/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
fixie-ai/ultravox-v0_5-llama-3_1-8b
fixie-ai
"2025-02-26T23:38:41Z"
4,082
9
transformers
[ "transformers", "safetensors", "ultravox", "feature-extraction", "audio-text-to-text", "custom_code", "ar", "be", "bg", "bn", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "fi", "fr", "gl", "hi", "hu", "it", "ja", "ka", "lt", "lv", "mk", "mr", "nl", "pl", "pt", "ro", "ru", "sk", "sl", "sr", "sv", "sw", "ta", "th", "tr", "uk", "ur", "vi", "zh", "license:mit", "region:us" ]
audio-text-to-text
"2025-02-05T19:51:30Z"
--- language: - ar - be - bg - bn - cs - cy - da - de - el - en - es - et - fa - fi - fr - gl - hi - hu - it - ja - ka - lt - lv - mk - mr - nl - pl - pt - ro - ru - sk - sl - sr - sv - sw - ta - th - tr - uk - ur - vi - zh library_name: transformers license: mit metrics: - bleu pipeline_tag: audio-text-to-text --- # Model Card for Ultravox Ultravox is a multimodal Speech LLM built around a pretrained [Llama3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) and [whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) backbone. See https://ultravox.ai for the GitHub repo and more information. ## Model Details ### Model Description Ultravox is a multimodal model that can consume both speech and text as input (e.g., a text system prompt and voice user message). The input to the model is given as a text prompt with a special `<|audio|>` pseudo-token, and the model processor will replace this magic token with embeddings derived from the input audio. Using the merged embeddings as input, the model will then generate output text as usual. In a future revision of Ultravox, we plan to expand the token vocabulary to support generation of semantic and acoustic audio tokens, which can then be fed to a vocoder to produce voice output. No preference tuning has been applied to this revision of the model. - **Developed by:** Fixie.ai - **License:** MIT ### Model Sources - **Repository:** https://ultravox.ai - **Demo:** See repo ## Usage Think of the model as an LLM that can also hear and understand speech. As such, it can be used as a voice agent, and also to do speech-to-speech translation, analysis of spoken audio, etc. To use the model, try the following: ```python # pip install transformers peft librosa import transformers import numpy as np import librosa pipe = transformers.pipeline(model='fixie-ai/ultravox-v0_5-llama-3_1-8b', trust_remote_code=True) path = "<path-to-input-audio>" # TODO: pass the audio here audio, sr = librosa.load(path, sr=16000) turns = [ { "role": "system", "content": "You are a friendly and helpful character. You love to answer questions for people." }, ] pipe({'audio': audio, 'turns': turns, 'sampling_rate': sr}, max_new_tokens=30) ``` ## Training Details The model uses a pre-trained [Llama3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) backbone as well as the encoder part of [whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo). The multi-modal adapter is trained, the Whisper encoder is fine-tuned, while the Llama model is kept frozen. We use a knowledge-distillation loss where Ultravox is trying to match the logits of the text-based Llama backbone. ### Training Data The training dataset is a mix of ASR datasets, extended with continuations generated by Llama 3.1 8B, and speech translation datasets, which yield a modest improvement in translation evaluations. ### Training Procedure Supervised speech instruction finetuning via knowledge-distillation. For more info, see [training code in Ultravox repo](https://github.com/fixie-ai/ultravox/blob/main/ultravox/training/train.py). #### Training Hyperparameters - **Training regime:** BF16 mixed precision training - **Hardware used:** 8x H100 GPUs #### Speeds, Sizes, Times The current version of Ultravox, when invoked with audio content, has a time-to-first-token (TTFT) of approximately 150ms, and a tokens-per-second rate of ~50-100 when using an A100-40GB GPU, all using a Llama 3.1 8B backbone.
Check out the audio tab on [TheFastest.ai](https://thefastest.ai/?m=audio) for daily benchmarks and a comparison with other existing models. ## Evaluation | | Ultravox 0.4 8B | Ultravox 0.4.1 8B | **Ultravox 0.5 8B** | | --- | ---: | ---: | ---: | | **covost2 en_ar** | 11.17 | 12.28 | 12.99 | | **covost2 en_ca** | 27.46 | 29.94 | 31.54 | | **covost2 en_de** | 25.47 | 27.13 | 28.70 | | **covost2 es_en** | 37.11 | 39.16 | 40.19 | | **covost2 ru_en** | 38.96 | 39.65 | 42.13 | | **covost2 zh_en** | 10.08 | 14.55 | 17.22 | | **big bench audio**| - | 63.20 | 66.54 |
PhantHive/llama-momo-2.5
PhantHive
"2023-10-24T18:13:53Z"
0
0
peft
[ "peft", "tensorboard", "llama", "4-bit", "bitsandbytes", "region:us" ]
null
"2023-10-24T17:58:21Z"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
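The quantization list above maps one-to-one onto a 🤗 Transformers `BitsAndBytesConfig`; a reconstruction sketch follows. The base model is not named in this card, so the id below is a placeholder.

```python
# Reconstruction sketch of the quantization config listed above.
# The base model id is a placeholder; this card does not name it.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                        # load_in_4bit: True
    bnb_4bit_quant_type="nf4",                # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=False,          # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float16,     # bnb_4bit_compute_dtype: float16
)
base = AutoModelForCausalLM.from_pretrained("<base-model-id>", quantization_config=bnb_config)
```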
damgomz/ft_32_5e6_base_x12
damgomz
"2024-06-24T09:37:34Z"
8
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-23T11:04:15Z"
--- language: en tags: - text-classification pipeline_tag: text-classification widget: - text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform. --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 84307.6551721096 | | Emissions (Co2eq in kg) | 0.0510158327103399 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.99529624085625 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0878194617899756 | | Consumed energy (kWh) | 1.0831157026462264 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.16229223620631097 | | Emissions (Co2eq in kg) | 0.033020498275742924 | ## Note 19 June 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_32_5e6_base_x12 | | sequence_length | 400 | | num_epoch | 6 | | learning_rate | 5e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 29328 | ## Training and Testing steps Epoch | Train Loss | Test Loss | F-beta Score ---|---|---|--- | 0 | 0.000000 | 0.718784 | 0.668353 | | 1 | 0.402769 | 0.321245 | 0.851871 | | 2 | 0.282093 | 0.276767 | 0.878442 | | 3 | 0.234059 | 0.246080 | 0.916013 | | 4 | 0.203769 | 0.232428 | 0.910324 | | 5 | 0.177173 | 0.232535 | 0.919689 | | 6 | 0.155451 | 0.230514 | 0.917727 |
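A hedged inference sketch for this classifier follows; the card does not document the label names, so outputs are whatever labels the checkpoint ships. The truncation length matches the `sequence_length: 400` entry in the config table above.

```python
# Hedged sketch: running the fine-tuned ALBERT classifier.
# The card does not document label semantics, so raw labels are printed.
from transformers import pipeline

clf = pipeline("text-classification", model="damgomz/ft_32_5e6_base_x12")
text = "GEPS Techno is the pioneer of hybridization of renewable energies at sea."
print(clf(text, truncation=True, max_length=400))  # 400 matches the card's sequence_length
```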
ekaterina-blatova-jb/model_lr1e-4_old_scheduler_with_t_max_275_all_v2
ekaterina-blatova-jb
"2024-06-28T21:10:56Z"
170
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-28T21:08:55Z"
--- {} --- ## Evaluation results Validation loss on the whole input: 0.8578255325555801 Validation loss on completion: 0.9436061959131621
helenai/gpt2-ov
helenai
"2023-04-12T19:38:19Z"
455
2
transformers
[ "transformers", "openvino", "gpt2", "text-generation", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-03-11T20:06:06Z"
--- language: - en tags: - openvino --- # gpt2 This is the [gpt2](https://huggingface.co/gpt2) model converted to [OpenVINO](https://openvino.ai), for accelerated inference. An example of how to do inference on this model: ```python from optimum.intel.openvino import OVModelForCausalLM from transformers import AutoTokenizer, pipeline # model_id should be set to either a local directory or a model available on the HuggingFace hub. model_id = "helenai/gpt2-ov" tokenizer = AutoTokenizer.from_pretrained(model_id) model = OVModelForCausalLM.from_pretrained(model_id) pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) result = pipe("My name is Julien and I like to") print(result) ```
lesso07/7f57d62b-987a-4f74-97b7-8a6d375630fc
lesso07
"2025-02-15T18:03:22Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:lmsys/vicuna-7b-v1.3", "base_model:adapter:lmsys/vicuna-7b-v1.3", "region:us" ]
null
"2025-02-15T17:47:39Z"
--- library_name: peft base_model: lmsys/vicuna-7b-v1.3 tags: - axolotl - generated_from_trainer model-index: - name: 7f57d62b-987a-4f74-97b7-8a6d375630fc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <br> # 7f57d62b-987a-4f74-97b7-8a6d375630fc This model is a fine-tuned version of [lmsys/vicuna-7b-v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5705 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000207 - train_batch_size: 4 - eval_batch_size: 4 - seed: 70 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0010 | 1 | 1.0682 | | 0.6344 | 0.0490 | 50 | 0.7743 | | 0.6414 | 0.0980 | 100 | 0.7232 | | 0.5926 | 0.1470 | 150 | 0.6794 | | 0.6561 | 0.1960 | 200 | 0.6425 | | 0.6273 | 0.2450 | 250 | 0.6349 | | 0.5659 | 0.2940 | 300 | 0.6150 | | 0.5528 | 0.3430 | 350 | 0.5944 | | 0.4957 | 0.3920 | 400 | 0.5777 | | 0.4836 | 0.4410 | 450 | 0.5720 | | 0.4834 | 0.4900 | 500 | 0.5705 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
great0001/1e8a7f1d-e7c5-4359-955b-2a50e2b29247
great0001
"2025-02-12T21:13:14Z"
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Capybara-7B-V1.9", "base_model:adapter:NousResearch/Nous-Capybara-7B-V1.9", "license:mit", "region:us" ]
null
"2025-02-12T20:31:03Z"
--- library_name: peft license: mit base_model: NousResearch/Nous-Capybara-7B-V1.9 tags: - axolotl - generated_from_trainer model-index: - name: 1e8a7f1d-e7c5-4359-955b-2a50e2b29247 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 1e8a7f1d-e7c5-4359-955b-2a50e2b29247 This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6144 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
takesomerisks/qloraLlama213bchatTrain1
takesomerisks
"2023-07-20T23:02:00Z"
0
0
peft
[ "peft", "region:us" ]
null
"2023-07-20T23:01:57Z"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
gaianet/stablelm-2-12b-chat-GGUF
gaianet
"2024-07-09T06:28:47Z"
16
1
transformers
[ "transformers", "gguf", "stablelm", "text-generation", "causal-lm", "en", "base_model:stabilityai/stablelm-2-12b-chat", "base_model:quantized:stabilityai/stablelm-2-12b-chat", "license:other", "autotrain_compatible", "region:us", "conversational" ]
text-generation
"2024-07-09T06:02:11Z"
--- base_model: stabilityai/stablelm-2-12b-chat inference: false license: other library_name: transformers pipeline_tag: text-generation model_creator: stabilityai model_name: stablelm-2-12b-chat quantized_by: Second State Inc. language: - en tags: - causal-lm --- ![](https://github.com/GaiaNet-AI/.github/assets/45785633/d6976adc-f97d-4f86-a648-0f2f5c8e7eee) # stablelm-2-12b-chat-GGUF ## Original Model [stabilityai/stablelm-2-12b-chat](https://huggingface.co/stabilityai/stablelm-2-12b-chat) ## Run with GaiaNet **Prompt template:** prompt template: `chatml` **Context size:** chat_ctx_size: `4096` **Run with GaiaNet:** - Quick start: https://docs.gaianet.ai/node-guide/quick-start - Customize your node: https://docs.gaianet.ai/node-guide/customize *Quantized with llama.cpp b3333*
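Beyond GaiaNet, a hedged local-inference sketch with `llama-cpp-python` follows, matching the `chatml` template and 4096 context noted above. The GGUF filename is a placeholder, since the card does not list one; check the repo's Files tab.

```python
# Hedged sketch: local inference with llama-cpp-python.
# The GGUF filename is a placeholder; the card does not list one.
from llama_cpp import Llama

llm = Llama(
    model_path="stablelm-2-12b-chat-Q5_K_M.gguf",  # placeholder filename
    n_ctx=4096,            # matches chat_ctx_size above
    chat_format="chatml",  # matches the prompt template above
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! What can you do?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```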
wjworld/chaoyang_df_0_1_1_colon_slide
wjworld
"2024-02-23T14:20:21Z"
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-02-19T09:28:27Z"
--- license: creativeml-openrail-m library_name: diffusers tags: - text-to-image - dreambooth - stable-diffusion - stable-diffusion-diffusers inference: true base_model: CompVis/stable-diffusion-v1-4 instance_prompt: 'A Photo of a colon section: one expert labels it as ''normal'', two others suggest it''s an ''serrated''.' --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - wjworld/chaoyang_df_0_1_1_colon_slide This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on A Photo of a colon section: one expert labels it as 'normal', two others suggest it's an 'serrated'. using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use

```python
# Hedged sketch (dtype and device are assumptions, not from the card):
# running this DreamBooth pipeline with its training instance prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("wjworld/chaoyang_df_0_1_1_colon_slide", torch_dtype=torch.float16).to("cuda")
image = pipe("A Photo of a colon section: one expert labels it as 'normal', two others suggest it's an 'serrated'.").images[0]
image.save("colon_section.png")
```

#### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
LitingZhou/whisper-large-freezing-21
LitingZhou
"2024-11-29T19:34:12Z"
71
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-11-29T19:32:01Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Chiahc/my_awesome_eli5_clm-model
Chiahc
"2023-07-25T05:39:54Z"
224
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-07-25T05:07:05Z"
--- license: apache-2.0 base_model: distilgpt2 tags: - generated_from_trainer model-index: - name: my_awesome_eli5_clm-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_clm-model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.7420 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.8685 | 1.0 | 1145 | 3.7625 | | 3.7736 | 2.0 | 2290 | 3.7448 | | 3.7339 | 3.0 | 3435 | 3.7420 | ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.2
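This card leaves usage unspecified; a minimal sketch, assuming the standard transformers text-generation pipeline and the repo id from this record (the prompt is illustrative), might look like:

```python
# Hedged example: repo id taken from this record; prompt and length are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="Chiahc/my_awesome_eli5_clm-model")
out = generator("Somatic hypermutation allows the immune system to", max_new_tokens=40)
print(out[0]["generated_text"])
```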
DaRkSpyro/ChadKroeger
DaRkSpyro
"2024-01-18T01:51:32Z"
0
0
flair
[ "flair", "music", "en", "dataset:HuggingFaceM4/WebSight", "license:apache-2.0", "region:us" ]
null
"2024-01-18T01:49:27Z"
--- license: apache-2.0 datasets: - HuggingFaceM4/WebSight language: - en metrics: - accuracy library_name: flair tags: - music ---
LHRuig/robertseanleo
LHRuig
"2025-02-02T20:10:58Z"
8
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
"2025-02-02T20:10:16Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: suit output: url: images/suit.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: robertseanleo --- # robertseanleo <Gallery /> ## Model description robertseanleo lora ## Trigger words You should use `robertseanleo` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/LHRuig/robertseanleo/tree/main) them in the Files & versions tab.
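The card gives a trigger word but no loading snippet; a sketch following the diffusers LoRA pattern used elsewhere in this dump (the dtype and the assumption that the repo holds a single LoRA safetensors file are mine, not the card's) could be:

```python
# Assumed: single-file LoRA weights in the repo; FLUX.1-dev base as stated in the card.
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("LHRuig/robertseanleo")
image = pipe("robertseanleo wearing a suit").images[0]  # prompt includes the trigger word
```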
WeikeXu/ddpm-floorplans_tutorial-128_bw
WeikeXu
"2025-03-21T08:12:35Z"
51
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "arxiv:1910.09700", "region:us" ]
null
"2025-03-18T22:03:45Z"
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jmalejandrob79/vlntrssyng05
jmalejandrob79
"2025-02-21T17:32:12Z"
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-02-21T16:52:27Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: vlntrssyng05 --- # Vlntrssyng05 <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `vlntrssyng05` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('jmalejandrob79/vlntrssyng05', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
ISEGURA/mdeberta-v3-base-autext2024_80train_attribution
ISEGURA
"2024-12-13T12:58:16Z"
120
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-13T12:57:37Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
shrijayan/aeroplan-detection
shrijayan
"2024-10-09T13:12:47Z"
0
0
null
[ "tensorboard", "region:us" ]
null
"2023-10-27T03:35:47Z"
# Pistol Detection Detecting pistols in videos / photos ## Installation ##### SETUP ```shell #---------Using MAKE (Recommended) ---------- make provision conda activate aero-detection make build ``` Trained for 250 epochs. The dataset includes 2971 images. Pistols are annotated in YOLOv8 format. The following pre-processing was applied to each image: * Auto-orientation of pixel data (with EXIF-orientation stripping) * Resize to 640x640 (Stretch) No image augmentation techniques were applied. Dataset link: https://www.kaggle.com/datasets/cpluzshrijayan
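Since the annotations are in YOLOv8 format, inference with a trained checkpoint would presumably follow the ultralytics API; a sketch, where the weights path and input image are hypothetical rather than taken from the card:

```python
# Hypothetical paths; the card does not publish a checkpoint filename.
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # trained detector (assumed location)
results = model.predict("sample.jpg", imgsz=640)   # card says images were resized to 640x640
for box in results[0].boxes:
    # class id, confidence, and xyxy pixel coordinates for each detection
    print(int(box.cls), float(box.conf), box.xyxy.tolist())
```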
DreamGallery/Qwen-Qwen1.5-0.5B-1718196220
DreamGallery
"2024-06-12T12:43:41Z"
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-0.5B", "base_model:adapter:Qwen/Qwen1.5-0.5B", "region:us" ]
null
"2024-06-12T12:43:40Z"
--- library_name: peft base_model: Qwen/Qwen1.5-0.5B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
sbawa/elysa-gguf
sbawa
"2024-04-10T07:44:50Z"
23
0
adapter-transformers
[ "adapter-transformers", "gguf", "llama", "medical", "en", "dataset:sbawa/elysa-data-conversation", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-03-06T18:36:25Z"
--- license: mit datasets: - sbawa/elysa-data-conversation language: - en library_name: adapter-transformers tags: - medical ---
second-state/WizardCoder-Python-7B-v1.0-GGUF
second-state
"2024-03-20T07:18:02Z"
551
2
transformers
[ "transformers", "gguf", "llama", "text-generation", "code", "license:llama2", "autotrain_compatible", "region:us" ]
text-generation
"2023-11-17T08:06:03Z"
--- license: llama2 library_name: transformers tags: - code metrics: - code_eval base_model: WizardLM/WizardCoder-Python-7b-V1.0 inference: false model_creator: WizardLM model_type: llama pipeline_tag: text-generation quantized_by: Second State Inc. --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # WizardCoder-Python-7B-v1.0-GGUF ## Original Model [WizardLM/WizardCoder-Python-7b-V1.0](https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0) ## Run with LlamaEdge - LlamaEdge version: [v0.2.8](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.8) and above - Prompt template - Prompt type: `wizard-coder` - Prompt string ```text Below is an instruction that describes a task. Write a response that appropriately completes the request. \### Instruction: {instruction} \### Response: ``` **Note that the \ character is used to escape the ### in the prompt string. Remove it in the practical use.** - Context size: `4096` - Run as LlamaEdge service ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:WizardCoder-Python-7B-V1.0-Q5_K_M.gguf llama-api-server.wasm -p wizard-coder ``` - Run as LlamaEdge command app ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:WizardCoder-Python-7B-V1.0-Q5_K_M.gguf llama-chat.wasm -p wizard-coder -s 'Below is an instruction that describes a task. Write a response that appropriately completes the request.' ``` ## Quantized GGUF Models | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [WizardCoder-Python-7B-V1.0-Q2_K.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q2_K.gguf) | Q2_K | 2 | 2.53 GB| smallest, significant quality loss - not recommended for most purposes | | [WizardCoder-Python-7B-V1.0-Q3_K_L.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| small, substantial quality loss | | [WizardCoder-Python-7B-V1.0-Q3_K_M.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| very small, high quality loss | | [WizardCoder-Python-7B-V1.0-Q3_K_S.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| very small, high quality loss | | [WizardCoder-Python-7B-V1.0-Q4_0.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| legacy; small, very high quality loss - prefer using Q3_K_M | | [WizardCoder-Python-7B-V1.0-Q4_K_M.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| medium, balanced quality - recommended | | [WizardCoder-Python-7B-V1.0-Q4_K_S.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| small, greater quality loss | | [WizardCoder-Python-7B-V1.0-Q5_0.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| legacy; medium, 
balanced quality - prefer using Q4_K_M | | [WizardCoder-Python-7B-V1.0-Q5_K_M.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| large, very low quality loss - recommended | | [WizardCoder-Python-7B-V1.0-Q5_K_S.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| large, low quality loss - recommended | | [WizardCoder-Python-7B-V1.0-Q6_K.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q6_K.gguf) | Q6_K | 6 | 5.53 GB| very large, extremely low quality loss | | [WizardCoder-Python-7B-V1.0-Q8_0.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| very large, extremely low quality loss - not recommended |
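To fetch one of the files listed in the table above, the standard huggingface_hub client can be used; a minimal sketch, with the filename taken from the recommended Q4_K_M row:

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="second-state/WizardCoder-Python-7B-v1.0-GGUF",
    filename="WizardCoder-Python-7B-V1.0-Q4_K_M.gguf",  # "medium, balanced quality - recommended"
)
print(path)  # local cache path of the downloaded GGUF file
```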
Primeness/newteaH11v1
Primeness
"2025-01-09T21:53:57Z"
32
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-01-09T17:13:34Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Shreyagg2202/Bert-Custom-Sentiment-Analysis
Shreyagg2202
"2024-04-26T19:29:29Z"
107
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-04-26T19:08:47Z"
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4741 - Accuracy: 0.5251 - F1: 0.5348 - Precision: 0.5692 - Recall: 0.5251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Tokenizers 0.19.1
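The card reports metrics but no usage snippet; a minimal sketch, assuming the standard text-classification pipeline (the label names are not documented in the card, so the output labels below are model-specific):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Shreyagg2202/Bert-Custom-Sentiment-Analysis")
print(clf("I really enjoyed this movie!"))  # returns [{"label": ..., "score": ...}]
```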
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_32_64_0.01_4_0.0002
ferrazzipietro
"2024-03-08T00:34:20Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-03-08T00:33:44Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kielljoy/DialoGPT-medium-stupidspecialkay
kielljoy
"2023-09-05T01:04:15Z"
125
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "text generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-01-15T18:17:54Z"
--- tags: - text-generation - text generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed] # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> [More Information Needed] </details>
trl-lib/OpenHermes-2-Mistral-7B-kto-beta-0.5-steps-200
trl-lib
"2023-12-20T14:44:07Z"
2
0
peft
[ "peft", "safetensors", "en", "arxiv:1910.09700", "base_model:teknium/OpenHermes-2.5-Mistral-7B", "base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B", "license:apache-2.0", "region:us" ]
null
"2023-12-20T14:43:33Z"
--- library_name: peft base_model: teknium/OpenHermes-2.5-Mistral-7B model-index: - name: OpenHermes-2-Mistral-7B-kto-beta-0.5-steps-200 results: [] license: apache-2.0 language: - en --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
facebook/data2vec-audio-large-10m
facebook
"2022-04-18T16:23:58Z"
5
0
transformers
[ "transformers", "pytorch", "data2vec-audio", "automatic-speech-recognition", "speech", "en", "dataset:librispeech_asr", "arxiv:2202.03555", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-04-02T16:00:11Z"
--- language: en datasets: - librispeech_asr tags: - speech license: apache-2.0 --- # Data2Vec-Audio-Large-10m [Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/) The large model pretrained and fine-tuned on 10 minutes of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16kHz. [Paper](https://arxiv.org/abs/2202.03555) Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli **Abstract** While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches. The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec . # Pre-Training method ![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/data2vec.png) For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555). # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Data2VecAudioForCTC from datasets import load_dataset import torch # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-large-10m") model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-large-10m") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # tokenize input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values # Batch size 1 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ```
playboy40k/flux-GigiHadidLora
playboy40k
"2024-10-21T20:25:01Z"
97
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
"2024-10-21T16:45:55Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- "A warm and nostalgic close-up photograph of a young woman in a cozy kitchen, captured in a classic analog style that exudes charm and authenticity. her wavy, long, blonde hair cascades naturally over her shoulders, its rich, vibrant color contrasting beautifully with the softer tones of the kitchen environment. She’s wearing a casual short-sleeve t-shirt with the sleeves rolled up, giving her a relaxed, approachable look. Over the t-shirt, she’s donned a well-worn apron, tied snugly around her waist. Both her apron and face are generously dusted with flour, adding a playful and messy charm to the scene. The photograph captures a candid moment, with her expression reflecting a mix of concentration and lightheartedness as she’s caught in the middle of baking. The flour on her face and apron speaks to her enthusiastic involvement in the kitchen, hinting at a baking project that’s been both fun and slightly chaotic. The background of the kitchen is softly out of focus, ensuring that the viewer’s attention remains on the woman and the details of her flour-covered face and apron. Elements of the kitchen, such as rustic wooden countertops, mixing bowls, and scattered baking ingredients, are subtly visible, contributing to the warm, homey atmosphere. The analog style of the photograph adds a layer of warmth and nostalgia, with natural, slightly muted colors and a subtle graininess that enhances the image's timeless feel. The natural light in the kitchen gently illuminates the scene, casting a soft glow that accentuates the textures of her hair, the flour, and the fabric of her apron. The overall mood of the photograph is cozy and endearing, capturing a spontaneous, joyful moment in the kitchen. The analog style adds a sense of timelessness, making this image feel like a cherished memory frozen in time." output: url: images/seed-238613562944885802.png - text: >- a young woman with wavy, long, blonde hair is the central figure. Her hair is styled in a high ponytail, adding a casual yet chic touch to her appearance. She is wearing a light blue off-the-shoulder top and matching pants, both adorned with a playful red cherry pattern. The outfit is comfortable and stylish, perfect for a relaxed setting. She is standing in a room with a neutral color palette. The background features a white door and a wooden piece of furniture, possibly a dresser or a cabinet. On the wall, there is a poster or print depicting wine glasses, adding a touch of elegance and sophistication to the room. The overall ambiance of the image is casual and comfortable, with a hint of stylish elegance. The woman's smile and relaxed posture suggest a sense of happiness and contentment. output: url: images/seed-4134726327160768100.png - text: >- A vogue magazine cover of a young woman with wavy, long, blonde hair looking away from the camera with tattoos of a butterfly on her shoulder, wearing a halter neck dress and a wide brimmed straw hat, leaning against a wall, non - nude portrait, style of a fashion model, detailed jewelry output: url: images/seed-1074816586052269296.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # Gigi Hadid Flux <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/playboy40k/flux-GigiHadidLora/tree/main) them in the Files & versions tab.
habin/EEVE-Korean-kornerstone-10.8B-v1.0
habin
"2024-06-19T05:03:07Z"
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-19T04:33:29Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Masioki/fusion_gttbsc_distilbert-uncased-ft
Masioki
"2024-06-17T19:12:05Z"
34
0
transformers
[ "transformers", "tensorboard", "safetensors", "fusion-cross-attention-sentence-classifier", "generated_from_trainer", "en", "dataset:asapp/slue-phase-2", "model-index", "endpoints_compatible", "region:us" ]
null
"2024-06-09T23:21:44Z"
--- tags: - generated_from_trainer model-index: - name: fusion_gttbsc_distilbert-uncased-ft results: - task: type: dialogue act classification dataset: name: asapp/slue-phase-2 type: hvb metrics: - name: F1 macro E2E type: F1 macro value: TBA - name: F1 macro GT type: F1 macro value: TBA datasets: - asapp/slue-phase-2 language: - en metrics: - f1-macro --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fusion_gttbsc_distilbert-uncased-ft Multi-label dialogue act classification (DAC) on ground-truth text, with prosody and ASR encodings fused via residual cross-attention. ## Model description ASR encoder: [Whisper small](https://huggingface.co/openai/whisper-small) encoder Prosody encoder: 2-layer transformer encoder with an initial dense projection Backbone: [DistilBert uncased](https://huggingface.co/distilbert/distilbert-base-uncased) Fusion: 2 residual cross-attention fusion layers (F_asr x F_text and F_prosody x F_text) with a dense layer on top Pooling: Self-attention Multi-label classification head: 2 dense layers with two dropouts of 0.3 and a Tanh activation in between ## Training and evaluation data Trained on ground truth. Evaluated on ground truth (GT) and normalized [Whisper small](https://huggingface.co/openai/whisper-small) transcripts (E2E). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0007 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
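For concreteness, here is a minimal PyTorch sketch of the architecture described above; the hidden size (768, matching DistilBERT), the number of attention heads, the label count, and all module names are assumptions for illustration, not the released code.

```python
import torch
import torch.nn as nn

class ResidualCrossAttentionFusion(nn.Module):
    """One fusion layer: text features attend to another modality, with a residual connection."""
    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_feats, other_feats):
        fused, _ = self.attn(text_feats, other_feats, other_feats)  # query=text, key/value=other
        return self.norm(text_feats + fused)

class SelfAttentionPooling(nn.Module):
    """Scores each token with a learned projection and takes the weighted sum."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, feats):                        # feats: (batch, time, dim)
        weights = torch.softmax(self.score(feats), dim=1)
        return (weights * feats).sum(dim=1)          # (batch, dim)

class FusionDAC(nn.Module):
    """Fuses text with ASR and prosody features, then applies the multi-label head."""
    def __init__(self, dim: int = 768, num_labels: int = 18):  # label count assumed
        super().__init__()
        self.fuse_asr = ResidualCrossAttentionFusion(dim)
        self.fuse_prosody = ResidualCrossAttentionFusion(dim)
        self.proj = nn.Linear(2 * dim, dim)          # dense layer on top of the two fusions
        self.pool = SelfAttentionPooling(dim)
        self.head = nn.Sequential(                   # 2 dense layers, dropouts 0.3, Tanh in between
            nn.Dropout(0.3), nn.Linear(dim, dim), nn.Tanh(),
            nn.Dropout(0.3), nn.Linear(dim, num_labels),
        )

    def forward(self, text_feats, asr_feats, prosody_feats):
        fused = torch.cat([self.fuse_asr(text_feats, asr_feats),
                           self.fuse_prosody(text_feats, prosody_feats)], dim=-1)
        return self.head(self.pool(self.proj(fused)))  # logits; apply a per-label sigmoid
```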
Lucas-Hiberus/clasificador-muchocine
Lucas-Hiberus
"2023-09-27T16:03:23Z"
104
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "classification", "generated_from_trainer", "base_model:mrm8488/electricidad-base-discriminator", "base_model:finetune:mrm8488/electricidad-base-discriminator", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-09-27T15:56:16Z"
--- base_model: mrm8488/electricidad-base-discriminator tags: - classification - generated_from_trainer metrics: - accuracy model-index: - name: clasificador-muchocine results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clasificador-muchocine This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5225 - Accuracy: 0.3355 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 388 | 1.5171 | 0.3355 | | 1.5309 | 2.0 | 776 | 1.5243 | 0.3355 | | 1.514 | 3.0 | 1164 | 1.5244 | 0.3355 | | 1.5222 | 4.0 | 1552 | 1.5179 | 0.3355 | | 1.5222 | 5.0 | 1940 | 1.5225 | 0.3355 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
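As a usage sketch (not taken from the card), the checkpoint can be loaded with the standard `transformers` pipeline; note that the flat 0.3355 accuracy across epochs suggests the classifier may not have learned beyond the majority class, so treat predictions accordingly.

```python
from transformers import pipeline

# Label names are assumed to follow the default LABEL_0..LABEL_4 scheme for the
# five muchocine star ratings unless the repo's config maps them explicitly.
classifier = pipeline("text-classification", model="Lucas-Hiberus/clasificador-muchocine")
print(classifier("La película tiene un guion brillante y actuaciones memorables."))
```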
rithwik-db/e5-base_banking77_10000
rithwik-db
"2023-04-11T00:05:21Z"
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2023-04-11T00:05:16Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # rithwik-db/e5-base_banking77_10000 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('rithwik-db/e5-base_banking77_10000') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('rithwik-db/e5-base_banking77_10000') model = AutoModel.from_pretrained('rithwik-db/e5-base_banking77_10000') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/e5-base_banking77_10000) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 2500 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
ayyuce/NeoProtein-GPT
ayyuce
"2025-03-22T15:35:23Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation", "dataset:fredzzp/Uniref50", "arxiv:1910.09700", "base_model:nferruz/ProtGPT2", "base_model:finetune:nferruz/ProtGPT2", "license:gpl-3.0", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-22T14:53:50Z"
--- library_name: transformers license: gpl-3.0 datasets: - fredzzp/Uniref50 base_model: - nferruz/ProtGPT2 pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AetherArchitectural/EXAONE-3.5-2.4B-Instruct-abliterated-GGUF-IQ-ARM-Imatrix-Community
AetherArchitectural
"2024-12-18T01:11:06Z"
208
6
null
[ "gguf", "en", "base_model:huihui-ai/EXAONE-3.5-2.4B-Instruct-abliterated", "base_model:quantized:huihui-ai/EXAONE-3.5-2.4B-Instruct-abliterated", "license:other", "region:us", "imatrix", "conversational" ]
null
"2024-12-18T00:18:33Z"
--- license: other license_name: exaone license_link: LICENSE language: - en base_model: - huihui-ai/EXAONE-3.5-2.4B-Instruct-abliterated inference: false --- <div align="center"> <a href="https://arch.datasets.fyi"> <img src="https://huggingface.co/spaces/AetherArchitectural/README/resolve/main/resources/aetherarchio-flat-banner-rndd-brdrs.png" alt="aetherarchio-flat-banner"> </a> Check <a href="https://huggingface.co/Lewdiculous/Model-Requests/discussions/81"><b>Community Request - #81</b></a> for details. </div> --- **Model name:** <br> EXAONE-3.5-2.4B-Instruct-abliterated **Model link:** <br> https://huggingface.co/huihui-ai/EXAONE-3.5-2.4B-Instruct-abliterated > [!NOTE] > **[huihui-ai]** <br> > "This is an uncensored version of **LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct** created with abliteration (see remove-refusals-with-transformers to know more about it). This is a crude, proof-of-concept implementation to remove refusals from an LLM model without using TransformerLens."
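As a usage sketch, the GGUF quants can be loaded with `llama-cpp-python`; the quant filename below is an assumption, so check the repo's Files tab for the exact file you want.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Glob pattern over the available quants; adjust to the file you actually want.
llm = Llama.from_pretrained(
    repo_id="AetherArchitectural/EXAONE-3.5-2.4B-Instruct-abliterated-GGUF-IQ-ARM-Imatrix-Community",
    filename="*Q4_K_M*",
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what 'abliteration' does."}]
)
print(out["choices"][0]["message"]["content"])
```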
Alex01837178373/T-lite-instruct-0.1-abliterated-Q5_K_M-GGUF
Alex01837178373
"2024-07-20T21:33:05Z"
6
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:IlyaGusev/T-lite-instruct-0.1-abliterated", "base_model:quantized:IlyaGusev/T-lite-instruct-0.1-abliterated", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-07-20T21:32:35Z"
--- base_model: IlyaGusev/T-lite-instruct-0.1-abliterated tags: - llama-cpp - gguf-my-repo --- # Alex01837178373/T-lite-instruct-0.1-abliterated-Q5_K_M-GGUF This model was converted to GGUF format from [`IlyaGusev/T-lite-instruct-0.1-abliterated`](https://huggingface.co/IlyaGusev/T-lite-instruct-0.1-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/IlyaGusev/T-lite-instruct-0.1-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Alex01837178373/T-lite-instruct-0.1-abliterated-Q5_K_M-GGUF --hf-file t-lite-instruct-0.1-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Alex01837178373/T-lite-instruct-0.1-abliterated-Q5_K_M-GGUF --hf-file t-lite-instruct-0.1-abliterated-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Alex01837178373/T-lite-instruct-0.1-abliterated-Q5_K_M-GGUF --hf-file t-lite-instruct-0.1-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Alex01837178373/T-lite-instruct-0.1-abliterated-Q5_K_M-GGUF --hf-file t-lite-instruct-0.1-abliterated-q5_k_m.gguf -c 2048 ```
antonymanoraj/vijay
antonymanoraj
"2025-01-27T23:02:44Z"
57
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-01-27T22:30:18Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: vijay --- # Vijay <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `vijay` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('antonymanoraj/vijay', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
harish907/RCU_Test2_16_gguf
harish907
"2025-02-23T11:22:58Z"
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-02-23T11:19:10Z"
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** harish907 - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
John6666/ebara-pony-v21-sdxl-spo
John6666
"2024-06-21T23:22:17Z"
3752
4
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "SPO", "merged", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-21T23:17:31Z"
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony - SPO - merged --- This repository is for testing [SPO-SDXL LoRA](https://huggingface.co/SPO-Diffusion-Models/SPO-SDXL_4k-p_10ep_LoRA). Simply applying it at its default weight of 1.0 helps produce a high-definition image. The LoRA seems slightly prone to artifacts on Pony-type models, but this can mostly be avoided by setting "clip skip=2" in your environment.
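A minimal diffusers sketch of the setup described above (assuming the merged checkpoint loads as a standard SDXL pipeline; the prompt is a placeholder):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/ebara-pony-v21-sdxl-spo", torch_dtype=torch.float16
).to("cuda")
# To test the SPO LoRA on another Pony-type model instead, load it at its default weight:
# pipe.load_lora_weights("SPO-Diffusion-Models/SPO-SDXL_4k-p_10ep_LoRA")
image = pipe("1girl, looking at viewer, masterpiece", clip_skip=2).images[0]  # clip skip = 2, as recommended
image.save("sample.png")
```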
tigermeat/tiger-rvc
tigermeat
"2023-07-29T20:14:34Z"
0
0
null
[ "RVC", "text-to-speech", "en", "ja", "license:other", "region:us" ]
text-to-speech
"2023-07-29T19:21:07Z"
--- license: other language: - en - ja pipeline_tag: text-to-speech tags: - RVC --- ### Tiger RVC Models All publicly available models for "Tiger", a vocal synth character by me! ### Licensing Please read the "LICENSE.md" file included with all downloads, as it outlines the license! It is essentially an MIT license, with one difference: commercial use requires written permission from me.
Sagicc/speecht5_finetuned_rs
Sagicc
"2024-02-13T08:57:12Z"
91
0
transformers
[ "transformers", "safetensors", "speecht5", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-to-audio
"2024-02-13T08:54:27Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
unsloth/Mixtral-8x7B-Instruct-v0.1-bnb-4bit
unsloth
"2025-03-14T12:38:50Z"
0
0
null
[ "safetensors", "mixtral", "fr", "it", "de", "es", "en", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "base_model:quantized:mistralai/Mixtral-8x7B-Instruct-v0.1", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
"2025-03-14T11:24:49Z"
--- base_model: mistralai/Mixtral-8x7B-Instruct-v0.1 language: - fr - it - de - es - en license: apache-2.0 inference: parameters: temperature: 0.5 widget: - messages: - role: user content: What is your favorite condiment? extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. --- # Model Card for Mixtral-8x7B ### Tokenization with `mistral-common` ```py from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from mistral_common.protocol.instruct.messages import UserMessage from mistral_common.protocol.instruct.request import ChatCompletionRequest mistral_models_path = "MISTRAL_MODELS_PATH" tokenizer = MistralTokenizer.v1() completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")]) tokens = tokenizer.encode_chat_completion(completion_request).tokens ``` ## Inference with `mistral_inference` ```py from mistral_inference.transformer import Transformer from mistral_inference.generate import generate model = Transformer.from_folder(mistral_models_path) out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id) result = tokenizer.decode(out_tokens[0]) print(result) ``` ## Inference with Hugging Face `transformers` ```py import torch from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1") model.to("cuda") # batch the token ids produced by mistral-common above into a tensor input_ids = torch.tensor([tokens]).to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=1000, do_sample=True) # decode with mistral tokenizer result = tokenizer.decode(generated_ids[0].tolist()) print(result) ``` > [!TIP] > PRs to correct the transformers tokenizer so that it gives 1-to-1 the same results as the mistral-common reference implementation are very welcome! --- The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested. For full details of this model, please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/). ## Warning This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF. ## Instruction format This format must be strictly respected, otherwise the model will generate sub-optimal outputs. The template used to build a prompt for the Instruct model is defined as follows: ``` <s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST] ``` Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings.
As a reference, here is the pseudo-code used to tokenize instructions during fine-tuning: ```python def tokenize(text): return tok.encode(text, add_special_tokens=False) [BOS_ID] + tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") + tokenize(BOT_MESSAGE_1) + [EOS_ID] + … tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") + tokenize(BOT_MESSAGE_N) + [EOS_ID] ``` In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space. In the Transformers library, one can use [chat templates](https://huggingface.co/docs/transformers/main/en/chat_templating) which make sure the right format is applied. ## Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto") messages = [ {"role": "user", "content": "What is your favourite condiment?"}, {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"}, {"role": "user", "content": "Do you have mayonnaise recipes?"} ] inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda") outputs = model.generate(inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` By default, transformers will load the model in full precision. Therefore, you might be interested in further reducing the memory requirements of running the model through the optimizations offered in the HF ecosystem: ### In half-precision Note that `float16` precision only works on GPU devices. <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto") messages = [ {"role": "user", "content": "What is your favourite condiment?"}, {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"}, {"role": "user", "content": "Do you have mayonnaise recipes?"} ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda") outputs = model.generate(input_ids, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ### Lower precision (8-bit & 4-bit) using `bitsandbytes` <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, device_map="auto") messages = [ {"role": "user", "content": "What is your favourite condiment?"}, {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. 
It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"}, {"role": "user", "content": "Do you have mayonnaise recipes?"} ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda") outputs = model.generate(input_ids, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ### Load the model with Flash Attention 2 <details> <summary> Click to expand </summary> ```diff + import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) + model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True, device_map="auto") messages = [ {"role": "user", "content": "What is your favourite condiment?"}, {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"}, {"role": "user", "content": "Do you have mayonnaise recipes?"} ] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda") outputs = model.generate(input_ids, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` </details> ## Limitations The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. # The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
isspek/bert-base-cased_ebola_mistral_3_2e-5_16_weight
isspek
"2025-02-23T22:13:26Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-02-23T22:13:13Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JuniperChinenye/missu4
JuniperChinenye
"2025-01-14T11:18:02Z"
64
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-01-14T11:13:57Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
diaenra/a75324e7-384a-4200-be2e-e5235465f323
diaenra
"2025-01-16T11:58:02Z"
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-Coder-1.5B-Instruct", "license:apache-2.0", "region:us" ]
null
"2025-01-16T11:33:20Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: a75324e7-384a-4200-be2e-e5235465f323 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 882551bf31b1c386_train_data.json ds_type: json format: custom path: /workspace/input_data/882551bf31b1c386_train_data.json type: field_input: mt_text field_instruction: src_text field_output: pe_text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_steps: null eval_table_size: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: diaenra/a75324e7-384a-4200-be2e-e5235465f323 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 5e-5 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_modules_to_save: - embed_tokens - lm_head lora_r: 32 lora_target_linear: true lora_target_modules: - gate_proj - down_proj - up_proj - q_proj - v_proj - k_proj - o_proj lr_scheduler: cosine max_memory: 0: 70GB micro_batch_size: 2 mlflow_experiment_name: /tmp/882551bf31b1c386_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_torch output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 239 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: diaenra-tao-miner wandb_mode: online wandb_name: fd21683e-d9f5-4409-ad86-74038599ad40 wandb_project: tao wandb_run: diaenra wandb_runid: fd21683e-d9f5-4409-ad86-74038599ad40 warmup_steps: 100 weight_decay: 0.0 xformers_attention: true ``` </details><br> # a75324e7-384a-4200-be2e-e5235465f323 This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.9997 | 938 | nan | | 0.0 | 1.9995 | 1876 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
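As a loading sketch (standard PEFT usage, assumed rather than taken from the card; given the NaN validation loss above, outputs should be sanity-checked):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-Coder-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
# Attach the LoRA adapter (embed_tokens and lm_head were also saved, per the config above).
model = PeftModel.from_pretrained(base, "diaenra/a75324e7-384a-4200-be2e-e5235465f323")
```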
PKU-Alignment/ProgressGym-HistLlama3-70B-C013-pretrain-v0.1
PKU-Alignment
"2024-08-10T02:52:42Z"
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "alignment", "value alignment", "AI safety", "safety", "LLM", "history", "conversational", "dataset:PKU-Alignment/ProgressGym-HistText", "arxiv:2406.20087", "base_model:meta-llama/Meta-Llama-3-70B", "base_model:finetune:meta-llama/Meta-Llama-3-70B", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-11T13:00:08Z"
--- license: cc-by-4.0 tags: - alignment - value alignment - AI safety - safety - LLM - history datasets: - PKU-Alignment/ProgressGym-HistText base_model: - meta-llama/Meta-Llama-3-70B --- # ProgressGym-HistLlama3-70B-C013-pretrain ## Overview #### The ProgressGym Framework ![Framework Diagram](./readme-assets/main-diagram.png) **ProgressGym-HistLlama3-70B-C013-pretrain** is part of the **ProgressGym** framework for research and experimentation on *progress alignment* - the emulation of moral progress in AI alignment algorithms, as a measure to prevent risks of societal value lock-in. To quote the paper [*ProgressGym: Alignment with a Millennium of Moral Progress*](https://arxiv.org/abs/2406.20087): > Frontier AI systems, including large language models (LLMs), hold increasing influence over the epistemology of human users. Such influence can reinforce prevailing societal values, potentially contributing to the lock-in of misguided moral beliefs and, consequently, the perpetuation of problematic moral practices on a broad scale. > > We introduce *progress alignment* as a technical solution to mitigate this imminent risk. Progress alignment algorithms learn to emulate the mechanics of human moral progress, thereby addressing the susceptibility of existing alignment methods to contemporary moral blindspots. #### ProgressGym-HistLlama3-70B-C013-pretrain ProgressGym-HistLlama3-70B-C013-pretrain is one of the **36 historical language models** in the ProgressGym framework. It is a pretrained model without instruction-tuning. For the instruction-tuned version, see [ProgressGym-HistLlama3-70B-C013-instruct](https://huggingface.co/PKU-Alignment/ProgressGym-HistLlama3-70B-C013-instruct). **ProgressGym-HistLlama3-70B-C013-pretrain is under continual iteration.** New versions that improve upon the current one are being trained to reflect historical moral tendencies in ever more comprehensive ways. **ProgressGym-HistLlama3-70B-C013-pretrain is a 13th-century historical language model.** Based on [Meta-Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B), it underwent continued pretraining on the 13th-century text data from [ProgressGym-HistText](https://huggingface.co/datasets/PKU-Alignment/ProgressGym-HistText), using the following hyperparameters: - learning_rate: 3e-06 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: polynomial - lr_scheduler_warmup_ratio: 0.075 - num_epochs: 4.0 - mixed_precision_training: Native AMP ... with the following training results: | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.8776 | 0.2090 | 7 | 0.7902 | | 0.8473 | 0.4179 | 14 | 0.7703 | | 0.8293 | 0.6269 | 21 | 0.7603 | | 0.8173 | 0.8358 | 28 | 0.7481 | | 0.7415 | 1.0448 | 35 | 0.7402 | | 0.6794 | 1.2537 | 42 | 0.7419 | | 0.6688 | 1.4627 | 49 | 0.7392 | | 0.6498 | 1.6716 | 56 | 0.7367 | | 0.6701 | 1.8806 | 63 | 0.7358 | | 0.664 | 2.0896 | 70 | 0.7355 | | 0.6447 | 2.2985 | 77 | 0.7361 | | 0.6412 | 2.5075 | 84 | 0.7373 | | 0.6458 | 2.7164 | 91 | 0.7383 | | 0.6356 | 2.9254 | 98 | 0.7387 | | 0.6398 | 3.1343 | 105 | 0.7387 | | 0.6228 | 3.3433 | 112 | 0.7391 | | 0.6139 | 3.5522 | 119 | 0.7395 | | 0.591 | 3.7612 | 126 | 0.7398 | Note that the training data volume for the continued pretraining stage is capped at 3GB. 
When the corresponding century's corpus exceeds this volume, the training data is randomly sampled to fit the volume. ## Links - **[Paper Preprint]** [ProgressGym: Alignment with a Millennium of Moral Progress](https://arxiv.org/abs/2406.20087) - **[Leaderboard & Interactive Playground]** [PKU-Alignment/ProgressGym-LeaderBoard](https://huggingface.co/spaces/PKU-Alignment/ProgressGym-LeaderBoard) - **[Huggingface Data & Model Collection]** [PKU-Alignment/ProgressGym](https://huggingface.co/collections/PKU-Alignment/progressgym-666735fcf3e4efa276226eaa) - **[Github Codebase]** [PKU-Alignment/ProgressGym](https://github.com/PKU-Alignment/ProgressGym) - **[Documentation]** [ProgressGym Documentation](https://pku-alignment.github.io/ProgressGym/) - **[PyPI Package]** *(coming soon - [stay tuned](https://forms.gle/1TWFLL4ZCLeYTD5N6)!)* ## Citation If the datasets, models, or framework of ProgressGym help you in your project, please cite ProgressGym using the bibtex entry below. ```text @article{progressgym, title={ProgressGym: Alignment with a Millennium of Moral Progress}, author={Tianyi Qiu and Yang Zhang and Xuchuan Huang and Jasmine Xinze Li and Jiaming Ji and Yaodong Yang}, journal={arXiv preprint arXiv:2406.20087}, eprint={2406.20087}, eprinttype = {arXiv}, year={2024} } ``` ## Ethics Statement - **Copyright information of historical text data sources**: - Project Gutenberg, one of our four sources of historical text data, consists only of texts in the public domain. - For the text we draw from the Internet Archive, we include only items uploaded by the *Library of Congress*, which are texts freely released online by the U.S. Library of Congress for research and public use. - The text data from Early English Books Online are, according to their publisher, "freely available to the public" and "available for access, distribution, use, or reuse by anyone". - The last remaining source of our historical text data, the Pile of Law dataset, is released under a Creative Commons license, which we adhere to in our use. - **Reproducibility**: To ensure reproducibility, we open-source all the code involved in the production of our main results (including the entire pipeline starting from data collection and model training), as well as the supporting infrastructure (the ProgressGym framework), making replication as easy as running a few simple script files. - **Misuse Prevention**: In order to prevent potential misuse of progress alignment algorithms, we have carefully formulated progress alignment as strictly value-neutral, without *a priori* assumptions on the direction of progress. In the event of potential misuse of our dataset, we condemn any misuse attempt to the strongest degree possible, and will work with the research community on whistleblowing for such attempts. - **Open-Sourcing**: We confirm that our code, data, and models are to be open-sourced under a CC-BY 4.0 license. We will continue to maintain and update our open-source repositories and models.
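As a loading sketch (standard `transformers` usage, assumed rather than taken from the card; at 70B parameters, multiple GPUs or CPU offloading are required):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PKU-Alignment/ProgressGym-HistLlama3-70B-C013-pretrain"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16)
# Prompt with 13th-century-flavored text; this is a base (non-instruct) model.
inputs = tokenizer("In the year of our Lord 1215,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```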
timjwhite/Reinforce-Pixelcopter-PLE-v0
timjwhite
"2023-06-19T00:27:45Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2023-06-18T04:07:28Z"
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 26.90 +/- 16.23
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
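For readers who want the gist of the algorithm without opening the course, the sketch below shows the core REINFORCE (Monte-Carlo policy gradient) update in PyTorch. It is a minimal illustration under assumed network sizes and hyperparameters, not the exact script that produced this checkpoint.

```python
# Minimal sketch of the REINFORCE update used by this family of agents
# (illustrative only; not the exact script that trained this checkpoint).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state):
        # Action probabilities for a discrete action space.
        return F.softmax(self.net(state), dim=-1)

def reinforce_update(policy, optimizer, log_probs, rewards, gamma=0.99):
    # Discounted returns, computed backwards over one episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    # Normalizing returns is a common variance-reduction trick.
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    # Policy-gradient loss: maximize expected discounted return.
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```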
KHEW/LClora
KHEW
"2023-06-03T16:12:48Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2023-06-03T16:11:35Z"
--- license: creativeml-openrail-m ---
Keltezaa/CIM
Keltezaa
"2025-02-27T08:36:56Z"
35
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:cc-by-nc-nd-4.0", "region:us" ]
text-to-image
"2025-02-25T19:18:52Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora base_model: black-forest-labs/FLUX.1-dev instance_prompt: cum in mouth license: cc-by-nc-nd-4.0 --- # CIM <Gallery /> ## Trigger words You should use `cum in mouth` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Keltezaa/CIM/tree/main) them in the Files & versions tab.
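## Usage sketch

As a usage illustration, the sketch below shows the standard `diffusers` pattern for applying a LoRA such as this one on top of its FLUX.1-dev base model. This is an editor-added example, not an official recipe from the author: the dtype, inference settings, and the prompt placeholder are all assumptions.

```python
# Minimal LoRA usage sketch with diffusers (settings are assumptions).
import torch
from diffusers import FluxPipeline

# Load the base model this LoRA was trained against.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Apply the LoRA weights from this repository.
pipe.load_lora_weights("Keltezaa/CIM")

# Compose the prompt around the trigger words listed above (placeholder here).
prompt = "<trigger words>, additional scene description"
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("output.png")
```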
chinhnt19/fall_4K_villa13B_llama8B_con_per10
chinhnt19
"2025-03-23T03:39:19Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2_vl", "trl", "en", "base_model:unsloth/Qwen2-VL-2B-Instruct", "base_model:finetune:unsloth/Qwen2-VL-2B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-03-23T03:37:35Z"
---
base_model: unsloth/Qwen2-VL-2B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** chinhnt19
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2-VL-2B-Instruct

This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
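Since the card does not include usage code, here is a minimal inference sketch using the standard `transformers` Qwen2-VL classes. It is an editor-added illustration: the image path and generation settings are placeholders, and the snippet has not been tested against this specific checkpoint.

```python
# Minimal Qwen2-VL inference sketch (assumes the standard transformers API;
# the image path and generation settings below are placeholders).
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "chinhnt19/fall_4K_villa13B_llama8B_con_per10"  # this repository
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder input image
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```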
EleutherAI/Meta-Llama-3-8B-capitals-random-many-random-names
EleutherAI
"2024-06-19T04:02:38Z"
10
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-19T03:08:21Z"
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Weni/WeniGPT-2.3.3-Zephyr-7B-LLM_Base_2.0.3_SFT_reduction_variation
Weni
"2024-02-02T14:15:46Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-01-31T18:47:27Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cleanrl/Tutankham-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed2
cleanrl
"2023-03-09T23:07:34Z"
0
0
cleanrl
[ "cleanrl", "tensorboard", "Tutankham-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-03-09T23:07:33Z"
--- tags: - Tutankham-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Tutankham-v5 type: Tutankham-v5 metrics: - type: mean_reward value: 247.30 +/- 5.16 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Tutankham-v5** This is a trained model of a PPO agent playing Tutankham-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_machado_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_machado_atari_wrapper --env-id Tutankham-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Tutankham-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed2/raw/main/cleanba_ppo_envpool_machado_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Tutankham-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Tutankham-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_ppo_envpool_machado_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Tutankham-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 15360, 'capture_video': False, 'clip_coef': 0.1, 'concurrency': True, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'Tutankham-v5', 'exp_name': 'cleanba_ppo_envpool_machado_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:2', 'gpu:3', 'gpu:5', 'gpu:6', 'gpu:7'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3], 'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'], 'learning_rate': 0.00025, 'local_batch_size': 7680, 'local_minibatch_size': 1920, 'local_num_envs': 60, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 3840, 'norm_adv': True, 'num_envs': 120, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 3255, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 2} ```