| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
syndi-models/roberta-base-squad2
|
syndi-models
| 2023-03-24T14:20:45Z
| 9
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-09T19:12:36Z
|
---
language: en
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/roberta-base-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exact_match
value: 79.9309
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDhhNjg5YzNiZGQ1YTIyYTAwZGUwOWEzZTRiYzdjM2QzYjA3ZTUxNDM1NjE1MTUyMjE1MGY1YzEzMjRjYzVjYiIsInZlcnNpb24iOjF9.EH5JJo8EEFwU7osPz3s7qanw_tigeCFhCXjSfyN0Y1nWVnSfulSxIk_DbAEI5iE80V4EKLyp5-mYFodWvL2KDA
- type: f1
value: 82.9501
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjk5ZDYwOGQyNjNkMWI0OTE4YzRmOTlkY2JjNjQ0YTZkNTMzMzNkYTA0MDFmNmI3NjA3NjNlMjhiMDQ2ZjJjNSIsInZlcnNpb24iOjF9.DDm0LNTkdLbGsue58bg1aH_s67KfbcmkvL-6ZiI2s8IoxhHJMSf29H_uV2YLyevwx900t-MwTVOW3qfFnMMEAQ
- type: total
value: 11869
name: total
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGFkMmI2ODM0NmY5NGNkNmUxYWViOWYxZDNkY2EzYWFmOWI4N2VhYzY5MGEzMTVhOTU4Zjc4YWViOGNjOWJjMCIsInZlcnNpb24iOjF9.fexrU1icJK5_MiifBtZWkeUvpmFISqBLDXSQJ8E6UnrRof-7cU0s4tX_dIsauHWtUpIHMPZCf5dlMWQKXZuAAA
---
# roberta-base for QA
This is the [roberta-base](https://huggingface.co/roberta-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 96
n_epochs = 2
base_LM_model = "roberta-base"
max_seq_len = 386
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Using a distilled model instead
Please note that we have also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has a comparable prediction quality and runs at twice the speed of the base model.
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import FARMReader, TransformersReader

reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2", tokenizer="deepset/roberta-base-squad2")
```
For a complete example of ``roberta-base-squad2`` being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system)
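As a quick illustration (not from the original card), a reader loaded as above can also be queried directly; the `Document` wrapper and `predict` call below follow the Haystack 1.x API and may differ in newer releases:
```python
from haystack import Document

docs = [Document(content="The option to convert models between FARM and transformers gives freedom to the user.")]
prediction = reader.predict(query="Why is model conversion important?", documents=docs, top_k=1)
print(prediction["answers"][0].answer)
```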
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/roberta-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
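Part b) above only loads the model and tokenizer. As a minimal sketch (not part of the original card), they can be used for inference without the pipeline, reusing `QA_input` from the block above:
```python
import torch

inputs = tokenizer(QA_input['question'], QA_input['context'], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Pick the most likely start/end token positions and decode that span as the answer
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax()) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```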
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.87029394424324,
"f1": 82.91251169582613,
"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
## Authors
**Branden Chan:** [email protected]
**Timo Möller:** [email protected]
**Malte Pietsch:** [email protected]
**Tanay Soni:** [email protected]
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems that use question answering, summarization, ranking, and more.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
nuoo/lora_sks_dogs
|
nuoo
| 2023-03-24T14:03:45Z
| 0
| 0
| null |
[
"stable-diffusion",
"stable-diffusion-ppdiffusers",
"text-to-image",
"ppdiffusers",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-24T14:03:40Z
|
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog in a bucket
tags:
- stable-diffusion
- stable-diffusion-ppdiffusers
- text-to-image
- ppdiffusers
- lora
inference: false
---
# LoRA DreamBooth - nuoo/lora_sks_dogs
The LoRA weights in this repository were trained from runwayml/stable-diffusion-v1-5 using the [DreamBooth](https://dreambooth.github.io/) technique with the instance prompt "a photo of sks dog in a bucket". Below are some images generated during training.
|
huggingtweets/bbc
|
huggingtweets
| 2023-03-24T14:02:27Z
| 105
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-24T14:02:19Z
|
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1450714008097595395/1NBbHxgg_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">BBC</div>
<div style="text-align: center; font-size: 14px;">@bbc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from BBC.
| Data | BBC |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 1741 |
| Short tweets | 16 |
| Tweets kept | 1492 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ugl7rhcq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bbc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3cyaakms) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3cyaakms/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bbc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
emmuzoo/Reinforce-Pixelcopter-PLE-v0
|
emmuzoo
| 2023-03-24T13:56:59Z
| 0
| 0
| null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T13:56:55Z
|
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 23.80 +/- 15.86
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
beomi/KoAlpaca-13B-LoRA
|
beomi
| 2023-03-24T13:54:58Z
| 0
| 7
| null |
[
"alpaca",
"llama",
"KoAlpaca",
"ko",
"en",
"license:mit",
"region:us"
] | null | 2023-03-22T13:30:22Z
|
---
license: mit
language:
- ko
- en
tags:
- alpaca
- llama
- KoAlpaca
---
# KoAlpaca: Korean Alpaca Model based on Stanford Alpaca (feat. LLAMA and Polyglot-ko)
- More information at https://github.com/Beomi/KoAlpaca
- This repository contains fine-tuned (LoRA) KoAlpaca model weights based on the LLaMA 13B model.
- Note: this repo contains only the LoRA weights (see the loading sketch below).
- Both Korean and English datasets were used to train the model.
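Since only the LoRA weights are provided here, a rough loading sketch (not part of the original card) would attach the adapter to a separately obtained LLaMA 13B base model with PEFT; the base model path below is a placeholder:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_path = "path/to/llama-13b"  # placeholder: supply your own LLaMA 13B weights
tokenizer = AutoTokenizer.from_pretrained(base_model_path)
model = AutoModelForCausalLM.from_pretrained(base_model_path, device_map="auto")
# Attach the KoAlpaca LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, "beomi/KoAlpaca-13B-LoRA")
```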
|
thu-coai/roberta-base-cdconv
|
thu-coai
| 2023-03-24T13:25:32Z
| 24
| 1
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"zh",
"Conversational",
"arxiv:2210.08511",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-02T07:36:49Z
|
---
language:
- zh
tags:
- pytorch
- zh
- Conversational
---
[hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) first pre-trained on CMNLI and OCNLI and then fine-tuned on the [CDConv dataset](https://github.com/thu-coai/cdconv). It supports 2-class classification for 2-turn dialogue contradiction detection. Usage example:
```python
import torch
from transformers.models.bert import BertTokenizer, BertForSequenceClassification
tokenizer = BertTokenizer.from_pretrained('thu-coai/roberta-base-cdconv')
model = BertForSequenceClassification.from_pretrained('thu-coai/roberta-base-cdconv')
model.eval()
turn1 = [
"嗯嗯,你喜欢钓鱼吗?", # user
"喜欢啊,钓鱼很好玩的", # bot
]
turn2 = [
"你喜欢钓鱼吗?", # user
"不喜欢,我喜欢看别人钓鱼", # bot, we want to identify whether this utterance makes a contradiction
] # turn1 and turn2 are not required to be two consecutive turns
text1 = "[SEP]".join(turn1 + turn2[:1])
text2 = turn2[1]
model_input = tokenizer(text1, text2, return_tensors='pt', return_token_type_ids=True, return_attention_mask=True)
model_output = model(**model_input, return_dict=False)
prediction = torch.argmax(model_output[0].cpu(), dim=-1)[0].item()
print(prediction) # output 1. 0 for non-contradiction, 1 for contradiction
```
This fine-tuned model obtains 75.7 accuracy and 72.3 macro-F1 on the test set.
Please kindly cite the [original paper](https://arxiv.org/abs/2210.08511) if you use this model.
```bib
@inproceedings{zheng-etal-2022-cdconv,
title={CDConv: A Benchmark for Contradiction Detection in Chinese Conversations},
author={Zheng, Chujie and
Zhou, Jinfeng and
Zheng, Yinhe and
Peng, Libiao and
Guo, Zhen and
Wu, Wenquan and
Niu, Zhengyu and
Wu, Hua and
Huang, Minlie},
booktitle={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing},
year={2022}
}
```
|
AshtonIsNotHere/GatorTron-OG-bc-ctr-nli
|
AshtonIsNotHere
| 2023-03-24T13:23:26Z
| 92
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"megatron-bert",
"text-classification",
"generated_from_trainer",
"medical",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-23T13:50:21Z
|
---
tags:
- generated_from_trainer
- medical
model-index:
- name: GatorTron-OG-bc-ctr-nli
results: []
language:
- en
widget:
- text: "[CLS]Patients in NCT02953860 receive less mg of Enzalutamide than Fulvestrant on a weekly basis. [SEP] Fulvestrant with Enzalutamide: 500mg of Fulvestrant will be given IM on days 1, 15, 28, then every 4 weeks as per standard of care (SOC) and 160mg of Enzalutamide will be given PO daily. Patients will receive a tumor biopsy at the start of treatment and 4 weeks after the start of treatment, with an optional 3rd biopsy at the end treatment.[SEP]"
example_title: "Contradiction Example 1"
---
# GatorTron-OG-bc-ctr-nli
## Model description
[GatorTron](https://huggingface.co/AshtonIsNotHere/GatorTron-OG-breast-cancer) model domain adapted on breast cancer studies and fine-tuned for [SemEval-2023 Task7: NLI4CT](https://sites.google.com/view/nli4ct/home), Subtask 1. Takes hypothesis and premise statements as input and outputs the entailment relationship (`entailment` or `contradiction`).
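As an illustrative sketch (not from the original card), the checkpoint can be queried with the text-classification pipeline using the same input format as the widget example above; the exact label names returned are an assumption based on the card's description:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="AshtonIsNotHere/GatorTron-OG-bc-ctr-nli")
hypothesis = "Patients in NCT02953860 receive less mg of Enzalutamide than Fulvestrant on a weekly basis."
premise = "Fulvestrant with Enzalutamide: 500mg of Fulvestrant will be given IM on days 1, 15, 28, then every 4 weeks."
# Mirrors the widget's "[CLS]hypothesis [SEP] premise[SEP]" format
print(clf(f"[CLS]{hypothesis} [SEP] {premise}[SEP]"))
```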
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.11.0
|
SirVeggie/salutemix
|
SirVeggie
| 2023-03-24T13:22:21Z
| 0
| 8
| null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-13T19:51:37Z
|
---
license: creativeml-openrail-m
---
# SaluteMix model
SaluteMix is yet another semi-realistic mix. The name comes from the 99% success rate when using the salute tag. All previews are pure txt2img.
I highly recommend the `EasyNegative` embedding, or `(low quality, worst quality:1.4), (bad anatomy), extra digit, fewer digits, (extra arms:1.2), bad hands, by (bad-artist:0.6), bad-image-v2-39000`, as the negative prompt.
It should be fairly competent at NSFW content.
CivitAI page: https://civitai.com/models/19238/salutemix
**Negative embeddings:** \
https://huggingface.co/datasets/gsdf/EasyNegative \
https://huggingface.co/nick-x-hacker/bad-artist \
https://huggingface.co/Xynon/models/tree/main/experimentals/TI
## Recipe
```
animebrush3 = custom mix with wlop style (details missing)
cn-any = Counterfeit-V2.5 + (nixeu-any - anythingV3) @1.0
cn-f = Counterfeit-V2.5 + (nixeu-f - wd1.3) @1.0
cn-flo = Counterfeit-V2.5 + (floydian_nixeu - sd1.4) @1.0
cn-temp = cn-any + cn-f @0.4
cn-full = cn-temp + cn-flo @0.6
temp1 = AOM2_nsfw + 7th_anime_v3_C @0.5
cn-mix = cn-full + temp1 @0.5
step1 = animebrush3 + 2dn_1 @0.5
temp2 = chilloutmix_ni + grapefruitv4 @0.3
step2 = step1 + temp2 @0.25
SaluteMix = step2 + cn-mix @0.2
```
## Links to models
https://civitai.com/models/4807/2dn \
https://civitai.com/models/6424/chilloutmix \
https://civitai.com/models/2583/grapefruit-hentai-model \
Floydian's nixeu: https://huggingface.co/FloydianSound/Nixeu_Diffusion_v1-5 \
Orange mixes: https://huggingface.co/WarriorMama777/OrangeMixs \
7th_anime: https://huggingface.co/syaimu/7th_Layer \
Counterfeit: https://huggingface.co/gsdf/Counterfeit-V2.5 \
Nixeu models: https://huggingface.co/SirVeggie/nixeu \
https://huggingface.co/SirVeggie/wlop
|
basboot/ppo-LunarLander-v2
|
basboot
| 2023-03-24T13:22:14Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T12:18:06Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 293.01 +/- 16.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
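One possible completion of the placeholder above (a sketch, not the author's code; the checkpoint filename follows the usual huggingface_sb3 naming convention and should be verified against the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption -- check the files in the repo.
checkpoint = load_from_hub(repo_id="basboot/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)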
|
ofields/violet-v1
|
ofields
| 2023-03-24T13:10:37Z
| 6
| 0
|
transformers
|
[
"transformers",
"yolos",
"object-detection",
"vision",
"arxiv:1910.09700",
"license:mit",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-03-23T19:44:04Z
|
---
license: mit
tags:
- object-detection
- vision
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
abuchane/wav2vec2-xlsr-amharic-speech-emotion-recognition-arabic-model
|
abuchane
| 2023-03-24T13:04:13Z
| 32
| 0
|
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-03-24T12:55:41Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xlsr-amharic-speech-emotion-recognition-arabic-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-amharic-speech-emotion-recognition-arabic-model
This model is a fine-tuned version of [elgeish/wav2vec2-large-xlsr-53-arabic](https://huggingface.co/elgeish/wav2vec2-large-xlsr-53-arabic) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.10.2.dev0
- Tokenizers 0.13.2
|
butchland/round2-a2c-PandaReachDense-v2
|
butchland
| 2023-03-24T12:54:35Z
| 5
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T12:29:42Z
|
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -8.47 +/- 0.90
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
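One possible completion of the placeholder above (a sketch, not the author's code; the checkpoint filename is an assumption, and evaluating in the PandaReachDense-v2 environment additionally requires `panda-gym`):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is an assumption -- check the files in the repo.
checkpoint = load_from_hub(repo_id="butchland/round2-a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)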
|
marbonora/my_awesome_billsum_model
|
marbonora
| 2023-03-24T12:46:11Z
| 105
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-24T12:28:52Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1399
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5717
- Rouge1: 0.1399
- Rouge2: 0.0461
- Rougel: 0.1159
- Rougelsum: 0.1157
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8733 | 0.1274 | 0.0331 | 0.1049 | 0.1049 | 19.0 |
| No log | 2.0 | 124 | 2.6521 | 0.1381 | 0.0436 | 0.1123 | 0.1121 | 19.0 |
| No log | 3.0 | 186 | 2.5899 | 0.1365 | 0.0432 | 0.1123 | 0.1122 | 19.0 |
| No log | 4.0 | 248 | 2.5717 | 0.1399 | 0.0461 | 0.1159 | 0.1157 | 19.0 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Rafamele/sd-butterflies-32-rafa
|
Rafamele
| 2023-03-24T12:45:39Z
| 31
| 0
|
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-03-24T12:44:37Z
|
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Rafamele/sd-butterflies-32-rafa')
image = pipeline().images[0]
image
```
|
somosnlp-hackathon-2023/bertin-gpt-j-6B-es-finetuned-salpaca
|
somosnlp-hackathon-2023
| 2023-03-24T12:45:27Z
| 0
| 15
| null |
[
"es",
"dataset:bertin-project/alpaca-spanish",
"license:apache-2.0",
"region:us"
] | null | 2023-03-22T19:51:23Z
|
---
datasets:
- bertin-project/alpaca-spanish
language:
- es
license: apache-2.0
---
<div style="text-align:center;width:350px;height:350px;">
<img src="https://huggingface.co/hackathon-somos-nlp-2023/bertin-gpt-j-6B-es-finetuned-salpaca/resolve/main/Alpaca.png" alt="SAlpaca logo">
</div>
# SAlpaca: Spanish + Alpaca (WIP)
## Adapter Description
This adapter was created with the [PEFT](https://github.com/huggingface/peft) library, which allowed the base model [bertin-project/bertin-gpt-j-6B](https://huggingface.co/bertin-project/bertin-gpt-j-6B) to be fine-tuned on the [Spanish Alpaca Dataset](https://huggingface.co/datasets/bertin-project/alpaca-spanish) using the *LoRA* method.
## How to use
```py
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "hackathon-somos-nlp-2023/bertin-gpt-j-6B-es-finetuned-salpaca"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
# tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)
def gen_conversation(text):
    text = "<SC>instruction: " + text + "\n "
    batch = tokenizer(text, return_tensors='pt')
    with torch.cuda.amp.autocast():
        output_tokens = model.generate(**batch, max_new_tokens=256, eos_token_id=50258, early_stopping=True, temperature=.9)
    print('\n\n', tokenizer.decode(output_tokens[0], skip_special_tokens=False))

text = "hola"
gen_conversation(text)
```
## Resources used
Google Colab machine with the following specifications
<div style="text-align:center;width:550px;height:550px;">
<img src="https://huggingface.co/hackathon-somos-nlp-2023/bertin-gpt-j-6B-es-finetuned-salpaca/resolve/main/resource.jpeg" alt="Resource logo">
</div>
## Citation
```
@misc {hackathon-somos-nlp-2023,
author = { {Edison Bejarano, Leonardo Bolaños, Alberto Ceballos, Santiago Pineda, Nicolay Potes} },
title = { SAlpaca },
year = 2023,
url = { https://huggingface.co/hackathon-somos-nlp-2023/bertin-gpt-j-6B-es-finetuned-salpaca },
publisher = { Hugging Face }
}
```
|
boobmoom/hello_world
|
boobmoom
| 2023-03-24T12:40:16Z
| 0
| 0
| null |
[
"en",
"dataset:squad",
"license:apache-2.0",
"region:us"
] | null | 2023-03-24T12:30:20Z
|
---
license: apache-2.0
datasets:
- squad
language:
- en
---
|
SAL83/a2c-PandaReachDense-v2
|
SAL83
| 2023-03-24T12:39:57Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T10:28:13Z
|
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.25 +/- 0.60
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
EIStakovskii/french_toxicity_classifier_plus_v2
|
EIStakovskii
| 2023-03-24T12:19:14Z
| 32
| 3
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"camembert",
"text-classification",
"fr",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-02T08:02:52Z
|
---
language: fr
widget:
- text: "J'aime ta coiffure"
example_title: "NOT TOXIC 1"
- text: "Va te faire foutre"
example_title: "TOXIC 1"
- text: "Quel mauvais temps, n'est-ce pas ?"
example_title: "NOT TOXIC 2"
- text: "J'espère que tu vas mourir, connard !"
example_title: "TOXIC 2"
- text: "j'aime beaucoup ta veste"
example_title: "NOT TOXIC 3"
license: other
---
## Description
NB: this version of the model is the improved version of [EIStakovskii/french_toxicity_classifier_plus](https://huggingface.co/EIStakovskii/french_toxicity_classifier_plus).
To see the training source code and the data, please follow [the github link](https://github.com/eistakovskii/NLP_projects/tree/main/TEXT_CLASSIFICATION/data/Toxicity_Classifiers/DE_FR).
This model was trained for toxicity labeling.
The model was fine-tuned from [the CamemBERT language model](https://huggingface.co/camembert-base).
To use the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model = 'EIStakovskii/french_toxicity_classifier_plus_v2')
print(classifier("Foutez le camp d'ici!"))
```
## Metrics (at validation):
epoch|step|eval_accuracy|eval_f1|eval_loss
-|-|-|-|-
1.16|1600|0.9015412511332729|0.8968269048071442|0.3014959990978241
## Comparison against Perspective
This model was compared against the Google's [Perspective API](https://developers.perspectiveapi.com/s/?language=en_US) that similarly detects toxicity.
Both models were tested on two datasets: one of [200 sentences](https://github.com/eistakovskii/NLP_projects/blob/main/TEXT_CLASSIFICATION/data/Toxicity_Classifiers/DE_FR/test/test_fr_200.csv) and one of [400 sentences](https://github.com/eistakovskii/NLP_projects/blob/main/TEXT_CLASSIFICATION/data/Toxicity_Classifiers/DE_FR/test/test_fr_400.csv).
The first one (arguably harder) was collected from the sentences of the [JigSaw](https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification/data) and [DeTox](https://github.com/hdaSprachtechnologie/detox) datasets.
The second one (easier) was collected from the combination of sources: both from JigSaw and DeTox as well as [Paradetox](https://github.com/s-nlp/multilingual_detox/tree/main/data) translations and sentences extracted from [Reverso Context](https://context.reverso.net/translation/) by keywords.
# french_toxicity_classifier_plus_v2
size|accuracy|f1
-|-|-
200|0.783|0.803
400|0.890|0.879
# Perspective
size|accuracy|f1
-|-|-
200|0.826|0.795
**400|0.632|0.418
**I suspect that Perspective has such a low score in the case of the FR dataset (400) because it refuses to trigger on the words "merde" and "putain" and some more rarer words in French like "cul" and so on.
|
nimblesquirrel/rugpt3small_based_on_gpt2-math_model
|
nimblesquirrel
| 2023-03-24T12:07:05Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-24T11:54:44Z
|
---
tags:
- generated_from_trainer
model-index:
- name: rugpt3small_based_on_gpt2-math_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rugpt3small_based_on_gpt2-math_model
This model is a fine-tuned version of [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1830
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 433 | 2.2824 |
| 2.4993 | 2.0 | 866 | 2.2044 |
| 2.2173 | 3.0 | 1299 | 2.1830 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Khushnur/t5-end2end-questions-generation_v4
|
Khushnur
| 2023-03-24T12:05:23Z
| 159
| 0
|
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:eli5_subset_modified_for_t5_qg_v2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-24T10:18:14Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5_subset_modified_for_t5_qg_v2
model-index:
- name: t5-end2end-questions-generation_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation_v4
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the eli5_subset_modified_for_t5_qg_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2704 | 0.64 | 100 | 2.9474 |
| 3.0764 | 1.28 | 200 | 2.9067 |
| 3.0189 | 1.92 | 300 | 2.8866 |
| 2.9797 | 2.56 | 400 | 2.8814 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
andreaparker/flan-t5-base-samsum
|
andreaparker
| 2023-03-24T12:03:38Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-31T22:53:25Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan-t5-base-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 47.4145
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-samsum
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3772
- Rouge1: 47.4145
- Rouge2: 23.9579
- Rougel: 40.0508
- Rougelsum: 43.7144
- Gen Len: 17.3162
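As an illustration (not part of the auto-generated card), the fine-tuned checkpoint can be used for dialogue summarization with the standard pipeline:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="andreaparker/flan-t5-base-samsum")
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue)[0]["summary_text"])
```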
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4264 | 1.0 | 1842 | 1.3829 | 46.4916 | 23.1227 | 39.444 | 42.9025 | 17.0977 |
| 1.3527 | 2.0 | 3684 | 1.3732 | 47.0694 | 23.4769 | 39.5942 | 43.2226 | 17.4554 |
| 1.2554 | 3.0 | 5526 | 1.3709 | 46.8801 | 23.3161 | 39.5423 | 43.1581 | 17.2027 |
| 1.2503 | 4.0 | 7368 | 1.3736 | 47.4138 | 23.7437 | 40.0016 | 43.6108 | 17.2198 |
| 1.1675 | 5.0 | 9210 | 1.3772 | 47.4145 | 23.9579 | 40.0508 | 43.7144 | 17.3162 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
jarvisx17/medicine-ner
|
jarvisx17
| 2023-03-24T11:46:10Z
| 11
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:jxner",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-24T11:19:36Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- jxner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: medicine-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: jxner
type: jxner
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.0
- name: Recall
type: recall
value: 0.0
- name: F1
type: f1
value: 0.0
- name: Accuracy
type: accuracy
value: 0.859375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medicine-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the jxner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7996
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8594
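As an illustrative sketch (not part of the auto-generated card; the entity label names depend on the model's config and are not documented here), the checkpoint can be queried with the token-classification pipeline:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="jarvisx17/medicine-ner", aggregation_strategy="simple")
print(ner("Patient was prescribed 500mg amoxicillin twice daily."))
```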
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 1 | 0.8644 | 0.0 | 0.0 | 0.0 | 0.8594 |
| No log | 2.0 | 2 | 0.7996 | 0.0 | 0.0 | 0.0 | 0.8594 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
hoanglongvn/rl_course_vizdoom_health_gathering_supreme
|
hoanglongvn
| 2023-03-24T11:40:58Z
| 0
| 0
|
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T11:40:49Z
|
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.29 +/- 5.57
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r hoanglongvn/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy_module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train_module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
|
TestZee/distilbert-base-cased-distilled-squad-finetuned-squad
|
TestZee
| 2023-03-24T11:36:48Z
| 61
| 0
|
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-03-24T11:35:24Z
|
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TestZee/distilbert-base-cased-distilled-squad-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TestZee/distilbert-base-cased-distilled-squad-finetuned-squad
This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8156
- Train End Logits Accuracy: 0.75
- Train Start Logits Accuracy: 0.7556
- Validation Loss: 0.6593
- Validation End Logits Accuracy: 0.7531
- Validation Start Logits Accuracy: 0.7531
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 90, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.8156 | 0.75 | 0.7556 | 0.6593 | 0.7531 | 0.7531 | 0 |
### Framework versions
- Transformers 4.27.3
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
NbAiLab/nb-bert-base-mnli
|
NbAiLab
| 2023-03-24T11:32:00Z
| 79
| 9
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"text-classification",
"nb-bert",
"zero-shot-classification",
"tensorflow",
"norwegian",
"no",
"dataset:mnli",
"dataset:multi_nli",
"dataset:xnli",
"arxiv:1909.00161",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-02T23:29:04Z
|
---
language: no
license: cc-by-4.0
thumbnail: https://raw.githubusercontent.com/NBAiLab/notram/master/images/nblogo_2.png
pipeline_tag: zero-shot-classification
tags:
- nb-bert
- zero-shot-classification
- pytorch
- tensorflow
- norwegian
- bert
datasets:
- mnli
- multi_nli
- xnli
widget:
- example_title: Nyhetsartikkel om FHI
text: Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september.
candidate_labels: helse, politikk, sport, religion
---
**Release 1.0** (March 11, 2021)
# NB-Bert base model finetuned on Norwegian machine translated MNLI
## Description
The most effective way of creating a good classifier is to fine-tune a pre-trained model for the specific task at hand. However, in many cases this is simply impossible.
[Yin et al.](https://arxiv.org/abs/1909.00161) proposed a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers. The method works by reformulating the question as an MNLI hypothesis. If we want to figure out if a text is about "sport", we simply state that "This text is about sport" ("Denne teksten handler om sport").
When the model is fine-tuned on the 400k-example MNLI task, it is in many cases able to solve such classification tasks. There is no MNLI set of this size in Norwegian, but we have trained the model on a machine-translated version of the original MNLI set.
## Testing the model
For testing the model, we recommend the [NbAiLab Colab Notebook](https://colab.research.google.com/gist/peregilk/769b5150a2f807219ab8f15dd11ea449/nbailab-mnli-norwegian-demo.ipynb)
## Hugging Face zero-shot-classification pipeline
The easiest way to try this out is by using the Hugging Face pipeline. Please note that you will get better results when using a Norwegian hypothesis template instead of the default English one.
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="NbAiLab/nb-bert-base-mnli")
```
You can then use this pipeline to classify sequences into any of the class names you specify.
```python
sequence_to_classify = 'Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september.'
candidate_labels = ['politikk', 'helse', 'sport', 'religion']
hypothesis_template = 'Dette eksempelet er {}.'
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template, multi_class=True)
# {'labels': ['helse', 'politikk', 'sport', 'religion'],
# 'scores': [0.4210019111633301, 0.0674605593085289, 0.000840459018945694, 0.0007541406666859984],
# 'sequence': 'Folkehelseinstituttets mest optimistiske anslag er at alle over 18 år er ferdigvaksinert innen midten av september.'}
```
## More information
For more information on the model, see
https://github.com/NBAiLab/notram
Here you will also find a Colab explaining more in details how to use the zero-shot-classification pipeline.
|
ToluClassics/extractive_reader_afroxlmr_fquad
|
ToluClassics
| 2023-03-24T11:29:07Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:fquad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-03-23T14:49:58Z
|
---
license: mit
tags:
- generated_from_trainer
datasets:
- fquad
model-index:
- name: extractive_reader_afroxlmr_fquad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# extractive_reader_afroxlmr_fquad
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the fquad dataset.
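As an illustration (not part of the auto-generated card), since FQuAD is a French extractive QA dataset, the model can be queried with the question-answering pipeline; the example question and context below are made up:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="ToluClassics/extractive_reader_afroxlmr_fquad")
result = qa(
    question="Où se trouve la tour Eiffel ?",
    context="La tour Eiffel est une tour de fer puddlé située à Paris, en France.",
)
print(result["answer"])
```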
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.27.2
- Pytorch 1.9.1+cu111
- Datasets 2.10.2.dev0
- Tokenizers 0.13.2
|
lipee/a2c-PandaReachDense-v2
|
lipee
| 2023-03-24T11:22:55Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T13:07:14Z
|
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.06 +/- 0.32
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
lgrobol/flaubert-minuscule
|
lgrobol
| 2023-03-24T11:18:53Z
| 715
| 0
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"flaubert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z
|
FlauBERT-minuscule
==================
A ridiculously small model for testing purposes.
|
fergusq/finbert-finnsentiment
|
fergusq
| 2023-03-24T11:14:28Z
| 10,870
| 2
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"fi",
"arxiv:2012.02613",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z
|
---
language: fi
license: cc-by-4.0
---
# FinBERT fine-tuned with the FinnSentiment dataset
This is a FinBERT model fine-tuned with the [FinnSentiment dataset](https://arxiv.org/pdf/2012.02613.pdf). 90% of sentences were used for training and 10% for evaluation.
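As a usage sketch (not part of the original card; the sentiment label names come from the model's config and are not documented here), the classifier can be called through the standard pipeline:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fergusq/finbert-finnsentiment")
print(classifier("Tämä elokuva oli todella hyvä!"))  # "This movie was really good!"
```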
## Evaluation results
|Metric|Score|
|--|--|
|Accuracy|0.8639028475711893|
|F1-score|0.8643024701696561|
|Precision|0.8653866541244811|
|Recall|0.8639028475711893|
|Matthews|0.6764924917164834|

## License
FinBERT-FinnSentiment is licensed under the [CC BY 4.0 License](https://creativecommons.org/licenses/by/4.0/deed.en) (same as FinBERT and the FinnSentiment dataset).
|
Dc26/distilbert-base-uncased-finetuned-cola
|
Dc26
| 2023-03-24T10:43:57Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-24T10:23:21Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5171064406591647
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5631
- Matthews Correlation: 0.5171
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5268 | 1.0 | 535 | 0.5265 | 0.4033 |
| 0.3473 | 2.0 | 1070 | 0.4938 | 0.5017 |
| 0.2313 | 3.0 | 1605 | 0.5631 | 0.5171 |
| 0.1754 | 4.0 | 2140 | 0.8034 | 0.5022 |
| 0.1306 | 5.0 | 2675 | 0.8480 | 0.5093 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Isaac009/Reinforce-CartPole-v1
|
Isaac009
| 2023-03-24T10:31:05Z
| 0
| 0
| null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T10:30:56Z
|
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
belgadreamsbig/arabic-poetry-generator
|
belgadreamsbig
| 2023-03-24T10:24:48Z
| 12
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"ar",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-19T16:07:38Z
|
---
license: mit
language:
- ar
library_name: transformers
---
|
stucksam/q-Taxi-v3
|
stucksam
| 2023-03-24T10:19:03Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T10:15:17Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook;
# it downloads and unpickles the saved model dictionary.
model = load_from_hub(repo_id="stucksam/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
marimurta/dqn-SpaceInvadersNoFrameskip-v4
|
marimurta
| 2023-03-24T10:08:14Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T10:07:22Z
|
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 659.00 +/- 317.02
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga marimurta -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga marimurta -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga marimurta
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
GraydientPlatformAPI/model_106
|
GraydientPlatformAPI
| 2023-03-24T10:01:01Z
| 30
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-24T09:50:43Z
|
---
license: openrail
library_name: diffusers
pipeline_tag: text-to-image
---
|
lora-library/22jenniferl22
|
lora-library
| 2023-03-24T10:00:34Z
| 3
| 0
|
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-03-24T10:00:27Z
|
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: 22JenniferL22
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - 22jenniferl22
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "22JenniferL22" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
Test prompt: 22JenniferL22




|
me2140733/whisper-small-hi
|
me2140733
| 2023-03-24T10:00:15Z
| 77
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-03-23T10:49:52Z
|
---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 53.24219080673834
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4297
- Wer: 53.2422
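A minimal transcription sketch with the `transformers` automatic-speech-recognition pipeline; the audio file name is a placeholder:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="me2140733/whisper-small-hi")

# Path to a local Hindi audio clip (placeholder file name)
print(asr("sample_hindi_clip.wav")["text"])
```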
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0876 | 2.44 | 1000 | 0.2914 | 34.9107 |
| 0.0203 | 4.89 | 2000 | 0.3453 | 40.8702 |
| 0.0016 | 7.33 | 3000 | 0.4042 | 46.0298 |
| 0.0005 | 9.78 | 4000 | 0.4297 | 53.2422 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
zhuohao/ppo-LunarLander-v2
|
zhuohao
| 2023-03-24T09:59:38Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T23:12:02Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.91 +/- 67.23
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub("zhuohao/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mrm8488/bert-tiny-finetuned-squadv2
|
mrm8488
| 2023-03-24T09:46:52Z
| 5,621
| 1
|
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"question-answering",
"QA",
"en",
"arxiv:1908.08962",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail:
tags:
- QA
---
# BERT-Tiny fine-tuned on SQuAD v2
[BERT-Tiny](https://github.com/google-research/bert/) created by [Google Research](https://github.com/google-research) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task.
**Model size** (after training): **16.74 MB**
## Details of BERT-Tiny and its 'family' (from their documentation)
Released on March 11th, 2020
This model is one of 24 smaller BERT models (English only, uncased, trained with WordPiece masking) referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962).
The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
## Details of the downstream task (Q&A) - Dataset
[SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD2.0 | train | 130k |
| SQuAD2.0 | eval | 12.3k |
## Model training
The model was trained on a Tesla P100 GPU and 25GB of RAM.
The script for fine-tuning can be found [here](https://github.com/huggingface/transformers/tree/main/examples/legacy/question-answering).
## Results:
| Metric | # Value |
| ------ | --------- |
| **EM** | **48.60** |
| **F1** | **49.73** |
| Model | EM | F1 score | SIZE (MB) |
| ----------------------------------------------------------------------------------------- | --------- | --------- | --------- |
| [bert-tiny-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2) | 48.60 | 49.73 | **16.74** |
| [bert-tiny-5-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-5-finetuned-squadv2) | **57.12** | **60.86** | 24.34 |
## Model in action
Fast usage with **pipelines**:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="mrm8488/bert-tiny-finetuned-squadv2",
tokenizer="mrm8488/bert-tiny-finetuned-squadv2"
)
qa_pipeline({
'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately",
'question': "Who has been working hard for hugginface/transformers lately?"
})
# Output:
```
```json
{
"answer": "Manuel Romero",
"end": 13,
"score": 0.05684709993458714,
"start": 0
}
```
### Yes! That was easy 🎉 Let's try with another example
```python
qa_pipeline({
'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately",
'question': "For which company has worked Manuel Romero?"
})
# Output:
```
```json
{
"answer": "hugginface/transformers",
"end": 79,
"score": 0.11613431826808274,
"start": 56
}
```
### It works!! 🎉 🎉 🎉
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
startlightquyet/sd-class-butterflies-32
|
startlightquyet
| 2023-03-24T09:41:02Z
| 5
| 0
|
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-03-24T09:40:33Z
|
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('startlightquyet/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
levalencia/FineTunedHateSpeechDistilBert
|
levalencia
| 2023-03-24T09:38:42Z
| 0
| 0
|
transformers
|
[
"transformers",
"text-classification",
"en",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-24T07:55:05Z
|
---
license: cc0-1.0
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
---
# Model Card for levalencia/FineTunedHateSpeechDistilBert
## Model Details
### Model Description
Hate speech detection model: it classifies text as Hate Speech (0), Offensive (1), or Neither (2).
- **Developed by:** Luis Valencia
- **Language(s) (NLP):** English
- **License:** CC0
- **Finetuned from model [optional]:** DistilBERT
### Model Sources
- **Repository:** https://github.com/levalencia/DataScience-Portfolio/tree/main/FineTuningDistilbert
- **Blog Post [optional]:** https://medium.com/python-in-plain-english/fine-tuning-distilbert-with-your-own-dataset-for-multi-classification-task-69f944189648
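A minimal inference sketch, assuming the checkpoint loads with the standard `transformers` sequence-classification classes; the index-to-label mapping follows the description above:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "levalencia/FineTunedHateSpeechDistilBert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Index-to-name mapping taken from the description above
labels = {0: "Hate Speech", 1: "Offensive", 2: "Neither"}

inputs = tokenizer("I really enjoyed this movie", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[int(logits.argmax(dim=-1))])
```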
|
google/music-spectrogram-diffusion
|
google
| 2023-03-24T09:33:19Z
| 25
| 31
|
diffusers
|
[
"diffusers",
"onnx",
"pytorch",
"arxiv:2206.05408",
"license:apache-2.0",
"diffusers:SpectrogramDiffusionPipeline",
"region:us"
] | null | 2023-03-21T13:01:46Z
|
---
license: apache-2.0
tags:
- pytorch
- diffusers
duplicated_from: kashif/music-spectrogram-diffusion
---
# Multi-instrument Music Synthesis with Spectrogram Diffusion
[Spectrogram Diffusion](https://arxiv.org/abs/2206.05408) by Curtis Hawthorne, Ian Simon, Adam Roberts, Neil Zeghidour, Josh Gardner, Ethan Manilow, and Jesse Engel.
## Abstract
An ideal music synthesizer should be both interactive and expressive, generating high-fidelity audio in realtime for arbitrary combinations of instruments and notes. Recent neural synthesizers have exhibited a tradeoff between domain-specific models that offer detailed control of only specific instruments, or raw waveform models that can train on any music but with minimal control and slow generation. In this work, we focus on a middle ground of neural synthesizers that can generate audio from MIDI sequences with arbitrary combinations of instruments in realtime. This enables training on a wide range of transcription datasets with a single model, which in turn offers note-level control of composition and instrumentation across a wide range of instruments. We use a simple two-stage process: MIDI to spectrograms with an encoder-decoder Transformer, then spectrograms to audio with a generative adversarial network (GAN) spectrogram inverter. We compare training the decoder as an autoregressive model and as a Denoising Diffusion Probabilistic Model (DDPM) and find that the DDPM approach is superior both qualitatively and as measured by audio reconstruction and Fréchet distance metrics. Given the interactivity and generality of this approach, we find this to be a promising first step towards interactive and expressive neural synthesis for arbitrary combinations of instruments and notes.
<img src="https://storage.googleapis.com/music-synthesis-with-spectrogram-diffusion/architecture.png" alt="Architecture diagram">
## Model
As depicted above, the model takes a MIDI file as input and tokenizes it into a sequence of 5-second intervals. Each tokenized interval, together with positional encodings, is passed through the Note Encoder, and its representation is concatenated with the previous window's generated spectrogram representation obtained via the Context Encoder. For the initial 5-second window this is set to zero. The resulting context is then used as conditioning to sample the denoised spectrogram for the MIDI window; this spectrogram is concatenated to the final output and also serves as the context for the next MIDI window. The process repeats until all the MIDI inputs have been processed. Finally, a MelGAN decoder converts the potentially long spectrogram to audio, which is the final result of this pipeline.
## Example usage
```python
from diffusers import SpectrogramDiffusionPipeline, MidiProcessor
pipe = SpectrogramDiffusionPipeline.from_pretrained("google/music-spectrogram-diffusion")
pipe = pipe.to("cuda")
processor = MidiProcessor()
# Download MIDI from: wget http://www.piano-midi.de/midis/beethoven/beethoven_hammerklavier_2.mid
output = pipe(processor("beethoven_hammerklavier_2.mid"))
audio = output.audios[0]
```
|
SAL83/a2c-v0
|
SAL83
| 2023-03-24T09:20:08Z
| 4
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T09:18:57Z
|
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1277.13 +/- 94.35
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub("SAL83/a2c-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
vocabtrimmer/mt5-small-trimmed-fr-15000-frquad-qg
|
vocabtrimmer
| 2023-03-24T09:15:59Z
| 105
| 0
|
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question generation",
"fr",
"dataset:lmqg/qg_frquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-19T02:31:49Z
|
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: fr
datasets:
- lmqg/qg_frquad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême Berger <hl> » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc."
example_title: "Question Generation Example 1"
- text: "Ce black dog peut être lié à des évènements traumatisants issus du monde extérieur, tels que son renvoi de l'Amirauté après la catastrophe des Dardanelles, lors de la <hl> Grande Guerre <hl> de 14-18, ou son rejet par l'électorat en juillet 1945."
example_title: "Question Generation Example 2"
- text: "contre <hl> Normie Smith <hl> et 15 000 dollars le 28 novembre 1938."
example_title: "Question Generation Example 3"
model-index:
- name: vocabtrimmer/mt5-small-trimmed-fr-15000-frquad-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_frquad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 7.37
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 27.58
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 16.88
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 79.53
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 55.71
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-fr-15000-frquad-qg`
This model is a fine-tuned version of [ckpts/mt5-small-trimmed-fr-15000](https://huggingface.co/ckpts/mt5-small-trimmed-fr-15000) for the question generation task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [ckpts/mt5-small-trimmed-fr-15000](https://huggingface.co/ckpts/mt5-small-trimmed-fr-15000)
- **Language:** fr
- **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="fr", model="vocabtrimmer/mt5-small-trimmed-fr-15000-frquad-qg")
# model prediction
questions = model.generate_q(list_context="Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.", list_answer="le Suprême Berger")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-fr-15000-frquad-qg")
output = pipe("Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême Berger <hl> » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-15000-frquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 79.53 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_1 | 27.68 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_2 | 16.15 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_3 | 10.74 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_4 | 7.37 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| METEOR | 16.88 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| MoverScore | 55.71 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| ROUGE_L | 27.58 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_frquad
- dataset_name: default
- input_types: paragraph_answer
- output_types: question
- prefix_types: None
- model: ckpts/mt5-small-trimmed-fr-15000
- max_length: 512
- max_length_output: 32
- epoch: 15
- batch: 16
- lr: 0.001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-15000-frquad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
laol777/tst_resnet
|
laol777
| 2023-03-24T09:15:28Z
| 5
| 0
|
generic
|
[
"generic",
"text-classification",
"endpoints-template",
"optimum",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-24T08:51:13Z
|
---
tags:
- text-classification
- endpoints-template
- optimum
library_name: generic
---
# Optimized and Quantized DistilBERT with a custom pipeline with handler.py
> NOTE: Blog post coming soon
This is a template repository for Text Classification using Optimum and onnxruntime to support generic inference with Hugging Face Hub generic Inference API. There are two required steps:
1. Specify the requirements by defining a `requirements.txt` file.
2. Implement the `handler.py` `__init__` and `__call__` methods. These methods are called by the Inference API. The `__init__` method should load the model and preload the optimum model and tokenizers as well as the `text-classification` pipeline needed for inference. This is only called once. The `__call__` method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work.
Add
```
library_name: generic
```
to the README.
_Note: the `generic` community image currently only supports `inputs` as a parameter and no additional parameters._
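A minimal sketch of the `handler.py` contract described in step 2 above; the class name, the ONNX model class, and the return format are assumptions, not the repository's actual code:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline


class EndpointHandler:
    def __init__(self, path: str = ""):
        # Called once at startup: load the optimized ONNX model and tokenizer,
        # then build the text-classification pipeline used for inference.
        model = ORTModelForSequenceClassification.from_pretrained(path)
        tokenizer = AutoTokenizer.from_pretrained(path)
        self.pipeline = pipeline("text-classification", model=model, tokenizer=tokenizer)

    def __call__(self, data: dict) -> list:
        # Called per request: the generic image passes a dict with an "inputs" key.
        return self.pipeline(data["inputs"])
```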
|
laol777/resnet50
|
laol777
| 2023-03-24T09:11:25Z
| 4
| 0
|
generic
|
[
"generic",
"text-classification",
"endpoints-template",
"optimum",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-24T07:42:37Z
|
---
tags:
- text-classification
- endpoints-template
- optimum
library_name: generic
---
# Optimized and Quantized DistilBERT with a custom pipeline with handler.py
> NOTE: Blog post coming soon
This is a template repository for Text Classification using Optimum and onnxruntime to support generic inference with Hugging Face Hub generic Inference API. There are two required steps:
1. Specify the requirements by defining a `requirements.txt` file.
2. Implement the `handler.py` `__init__` and `__call__` methods. These methods are called by the Inference API. The `__init__` method should load the model and preload the optimum model and tokenizers as well as the `text-classification` pipeline needed for inference. This is only called once. The `__call__` method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work.
Add
```
library_name: generic
```
to the README.
_Note: the `generic` community image currently only supports `inputs` as a parameter and no additional parameters._
|
karolill/distilmbert_LR3e-05_WR0.1_OPTIMadamw_hf_WD0.01
|
karolill
| 2023-03-24T09:10:03Z
| 104
| 1
|
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-24T09:06:56Z
|
---
license: mit
---
This is a [distilled multilingual BERT](https://huggingface.co/distilbert-base-multilingual-cased) model fine-tuned on 4000 examples of the
[NoReC dataset](https://github.com/ltgoslo/norec) where examples with score 1/2 were marked as negative and 5/6 were marked as positive.
The model was fine-tuned for 3 epochs with the following parameters:
- learning_rate = 3e-05
- warmup_ratio = 0.1
- optim = 'adamw_hf'
- weight_decay = 0.01
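A minimal usage sketch with the `transformers` pipeline; the label names it prints depend on how the classification head was configured, which is not documented here:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="karolill/distilmbert_LR3e-05_WR0.1_OPTIMadamw_hf_WD0.01",
)

# Norwegian review snippet: "A fantastic film from start to finish"
print(classifier("En fantastisk film fra start til slutt"))
```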
|
Fgenerberry/sd-class-butterflies-32
|
Fgenerberry
| 2023-03-24T08:21:00Z
| 4
| 0
|
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-03-23T07:38:54Z
|
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('Fgenerberry/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Xianbing/distilbert-base-uncased-finetuned-mnli-mm
|
Xianbing
| 2023-03-24T08:01:59Z
| 106
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-24T03:32:22Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-mnli-mm
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mnli
split: validation_mismatched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8235353946297803
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mnli-mm
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4709
- Accuracy: 0.8235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5218 | 1.0 | 24544 | 0.4663 | 0.8162 |
| 0.3848 | 2.0 | 49088 | 0.4709 | 0.8235 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
McCheng/ppo-LunarLander-v2-Unit8
|
McCheng
| 2023-03-24T08:01:02Z
| 0
| 0
| null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T08:00:31Z
|
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -115.54 +/- 54.00
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'McCheng/ppo-LunarLander-v2-Unit8'
'batch_size': 512
'minibatch_size': 128}
```
|
whybeyoung/yolo
|
whybeyoung
| 2023-03-24T08:00:35Z
| 2
| 0
|
transformers
|
[
"transformers",
"exbert",
"text-classification",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-13T09:22:00Z
|
---
language: en
do_predict: true
pipeline_tag: text-classification
tags:
- exbert
license: apache-2.0
---
# GPT-2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
This is the **smallest** version of GPT-2, with 124M parameters.
**Related Models:** [GPT-Large](https://huggingface.co/gpt2-large), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
Ifenna/dbert-3epoch
|
Ifenna
| 2023-03-24T07:58:08Z
| 27
| 0
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"question-answering",
"en",
"dataset:squad_v2",
"dataset:wiki_qa",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z
|
---
datasets:
- squad_v2
- wiki_qa
language:
- en
metrics:
- accuracy
pipeline_tag: question-answering
---
A DistilBERT model fine-tuned for question answering.
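A minimal usage sketch with the `transformers` question-answering pipeline (the example question and context are illustrative only):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Ifenna/dbert-3epoch")

result = qa(
    question="Which datasets was the model trained on?",
    context="This DistilBERT model was fine-tuned on SQuAD v2 and WikiQA for question answering.",
)
print(result["answer"])
```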
|
linoyts/ffashion-dress
|
linoyts
| 2023-03-24T07:49:46Z
| 30
| 1
|
diffusers
|
[
"diffusers",
"pytorch",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"dreambooth-hackathon",
"wildcard",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-24T07:47:13Z
|
---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- wildcard
widget:
- text: a photo of ffashion dress in the Acropolis
---
# DreamBooth model for the ffashion concept trained by LinoyTsaban on the LinoyTsaban/dreambooth-hackathon-images dataset.
This is a Stable Diffusion model fine-tuned on the ffashion concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of ffashion dress**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `dress` images for the wildcard theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('LinoyTsaban/ffashion-dress')
image = pipeline().images[0]
image
```
|
GraydientPlatformAPI/model_105
|
GraydientPlatformAPI
| 2023-03-24T07:48:11Z
| 29
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-24T07:34:22Z
|
---
license: openrail
library_name: diffusers
pipeline_tag: text-to-image
---
|
Toadette/Blast_mixes
|
Toadette
| 2023-03-24T07:34:33Z
| 0
| 0
| null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-03-24T07:32:35Z
|
---
license: cc-by-nc-4.0
---
License for all my models listed here:
https://civitai.com/models/19466/blaest-mix
https://civitai.com/models/23668/blaestive-mix
|
GillesEverling/q-FrozenLake-v1-8x8-Slippery
|
GillesEverling
| 2023-03-24T07:33:00Z
| 0
| 0
| null |
[
"FrozenLake-v1-8x8",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T07:32:56Z
|
---
tags:
- FrozenLake-v1-8x8
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8
type: FrozenLake-v1-8x8
metrics:
- type: mean_reward
value: 0.53 +/- 0.50
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook;
# it downloads and unpickles the saved model dictionary.
model = load_from_hub(repo_id="GillesEverling/q-FrozenLake-v1-8x8-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Jit/drjit-nlp-model-qa
|
Jit
| 2023-03-24T07:29:42Z
| 61
| 0
|
transformers
|
[
"transformers",
"tf",
"roberta",
"question-answering",
"generated_from_keras_callback",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-03-24T07:18:04Z
|
---
license: cc-by-4.0
tags:
- generated_from_keras_callback
model-index:
- name: drjit-nlp-model-qa
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# drjit-nlp-model-qa
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 288, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
charmquark/LunarLander-v2
|
charmquark
| 2023-03-24T07:09:16Z
| 0
| 0
| null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T06:30:16Z
|
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -49.15 +/- 25.65
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 1000000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 1024
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'charmquark/LunarLander-v2'
'batch_size': 4096
'minibatch_size': 1024}
```
|
glory20h/lunar_lander
|
glory20h
| 2023-03-24T07:08:11Z
| 4
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T07:07:49Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.69 +/- 20.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub("glory20h/lunar_lander", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
starcatmeow/autotrain-cybersecurity-summarization-pegasus-x-book-43369110299
|
starcatmeow
| 2023-03-24T07:06:52Z
| 11
| 1
|
transformers
|
[
"transformers",
"pytorch",
"pegasus_x",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:starcatmeow/autotrain-data-cybersecurity-summarization-pegasus-x-book",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-03-24T06:30:20Z
|
---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- starcatmeow/autotrain-data-cybersecurity-summarization-pegasus-x-book
co2_eq_emissions:
emissions: 13.98857715454734
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 43369110299
- CO2 Emissions (in grams): 13.9886
## Validation Metrics
- Loss: 2.950
- Rouge1: 37.860
- Rouge2: 20.146
- RougeL: 34.340
- RougeLsum: 34.254
- Gen Len: 13.848
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/starcatmeow/autotrain-cybersecurity-summarization-pegasus-x-book-43369110299
```
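The model can also be called from Python; a minimal sketch with the `transformers` summarization pipeline (generation arguments are illustrative):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="starcatmeow/autotrain-cybersecurity-summarization-pegasus-x-book-43369110299",
)

text = "I love AutoTrain"  # replace with the report or article to summarize
print(summarizer(text, max_length=64, min_length=5)[0]["summary_text"])
```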
|
Tritkoman/GermantoNorthFrisianV1
|
Tritkoman
| 2023-03-24T06:31:11Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain",
"translation",
"unk",
"dataset:Tritkoman/autotrain-data-germantonorthfrisian",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-03-24T06:22:08Z
|
---
tags:
- autotrain
- translation
language:
- unk
- unk
datasets:
- Tritkoman/autotrain-data-germantonorthfrisian
co2_eq_emissions:
emissions: 3.4297994633139433
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 43368110298
- CO2 Emissions (in grams): 3.4298
## Validation Metrics
- Loss: 1.137
- SacreBLEU: 50.890
- Gen len: 13.543
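No usage example is included; below is a minimal sketch with the `transformers` translation pipeline, assuming the checkpoint loads as a standard Marian seq2seq model:
```python
from transformers import pipeline

translator = pipeline("translation", model="Tritkoman/GermantoNorthFrisianV1")

# German source sentence: "The weather is nice today."
print(translator("Das Wetter ist heute schön.")[0]["translation_text"])
```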
|
shanmukhchaitu/ppo-LunarLander-v2
|
shanmukhchaitu
| 2023-03-24T06:25:43Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T06:25:22Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.55 +/- 22.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub("shanmukhchaitu/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
topskychen/rl-Taxi-v3
|
topskychen
| 2023-03-24T05:57:57Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T05:57:53Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: rl-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook;
# it downloads and unpickles the saved model dictionary.
model = load_from_hub(repo_id="topskychen/rl-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
thanhnguyenvn/distilbert-base-uncased-finetuned-ner
|
thanhnguyenvn
| 2023-03-24T05:52:28Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-03-24T05:25:39Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9252181597260577
- name: Recall
type: recall
value: 0.9370175634858485
- name: F1
type: f1
value: 0.9310804802134283
- name: Accuracy
type: accuracy
value: 0.9834305050280394
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0616
- Precision: 0.9252
- Recall: 0.9370
- F1: 0.9311
- Accuracy: 0.9834
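A minimal usage sketch with the `transformers` token-classification pipeline; entity labels follow the CoNLL-2003 scheme (PER, ORG, LOC, MISC):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="thanhnguyenvn/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("Hugging Face is based in New York City"))
```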
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2425 | 1.0 | 878 | 0.0698 | 0.9149 | 0.9203 | 0.9176 | 0.9811 |
| 0.0551 | 2.0 | 1756 | 0.0625 | 0.9188 | 0.9340 | 0.9263 | 0.9825 |
| 0.0298 | 3.0 | 2634 | 0.0616 | 0.9252 | 0.9370 | 0.9311 | 0.9834 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
|
charmquark/a2c-PandaReachDense-v2
|
charmquark
| 2023-03-24T05:48:49Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-21T09:34:28Z
|
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.47 +/- 0.24
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
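One way to fill in the template above; the checkpoint filename is an assumption based on the usual `<algo>-<env_id>.zip` naming used when pushing SB3 models to the Hub, so check the repository files if it differs.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and restore the trained policy.
checkpoint = load_from_hub(
    repo_id="charmquark/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```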
|
Fraisier/distilbert-base-uncased-finetuned-emotion
|
Fraisier
| 2023-03-24T05:33:17Z
| 105
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-24T04:23:39Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9325
- name: F1
type: f1
value: 0.9328486852494083
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1545
- Accuracy: 0.9325
- F1: 0.9328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1669 | 1.0 | 250 | 0.1628 | 0.9285 | 0.9281 |
| 0.1107 | 2.0 | 500 | 0.1545 | 0.9325 | 0.9328 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
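A minimal inference sketch with the `transformers` pipeline API; `top_k=None` returns a score for every emotion label rather than only the top class.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Fraisier/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for all emotion labels
)
print(classifier("I can't believe how lucky I am today!"))
```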
|
shreyansjain/Reinforce-CartPole-v1
|
shreyansjain
| 2023-03-24T05:28:19Z
| 0
| 0
| null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl=class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T04:58:27Z
|
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl=class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Falah/iraqi-cafes
|
Falah
| 2023-03-24T05:15:52Z
| 9
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-11T07:55:17Z
|
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### iraqi-cafes Dreambooth model trained by Falah with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
SEVUNX/PEPSYBLUE-MIX-RED
|
SEVUNX
| 2023-03-24T05:07:19Z
| 0
| 0
| null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-20T02:43:26Z
|
---
license: creativeml-openrail-m
---
|
arb9p4/a2c-PandaReachDense-v2
|
arb9p4
| 2023-03-24T05:06:26Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T18:41:13Z
|
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.33 +/- 0.19
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
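A possible completion of the template above: load the checkpoint and re-evaluate it. The filename is an assumption based on the usual `<algo>-<env_id>.zip` naming, and `panda_gym` must be installed to register the environment.
```python
import gym
import panda_gym  # noqa: F401 - registers the PandaReachDense-v2 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="arb9p4/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumed filename
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```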
|
ozcur/alpaca-native-4bit
|
ozcur
| 2023-03-24T04:59:36Z
| 19
| 58
|
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-20T03:19:09Z
|
This is 4-bit quantization of [chavinlo/alpaca-native](https://huggingface.co/chavinlo/alpaca-native) (`cecc16d`) via [qwopqwop200/GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) (`5cdfad2`).
Quantization invoked as such:
`llama.py /output/path c4 --wbits 4 --groupsize 128 --save alpaca7b-4bit.pt`
Inference example from the GPTQ repo and commit referenced above:
```
(gptq) [root@gpu03 GPTQ-for-LLaMa]# CUDA_VISIBLE_DEVICES=0 python llama_inference.py /root/alpaca-native-4bit --wbits 4 --groupsize 128 --load /root/alpaca-native-4bit/alpaca7b-4bit.pt --max_length 300 --text "$(cat test_prompt.txt)"
Loading model ...
Done.
### Instruction: What is an alpaca? How is it different from a llama?
### Response: Alpacas are soft and gentle, while llamas are stubborn and independent.</s>
(gptq) [root@gpu03 GPTQ-for-LLaMa]# CUDA_VISIBLE_DEVICES=0 python llama_inference.py /root/alpaca-native-4bit --wbits 4 --groupsize 128 --load /root/alpaca-native-4bit/alpaca7b-4bit.pt --max_length 300 --text "$(cat test_prompt.txt)"
Loading model ...
Done.
### Instruction: What is an alpaca? How is it different from a llama?
### Response: An alpaca is a small, domesticated species of livestock from the Andes region of South America. It is typically kept as a pet, and its fibers can be used for various purposes, such as making clothing and crafts. Alpacas are typically brown or black, and their ears and tails are often moved.
Although it is different from a llama, the two animals are often compared to when referring to their behavior.</s>
(gptq) [root@gpu03 GPTQ-for-LLaMa]# md5sum /root/alpaca-native-4bit/alpaca7b-4bit.pt
74849953cc54e313b972d2cc9a05c24b /root/alpaca-native-4bit/alpaca7b-4bit.pt
(gptq) [root@gpu03 GPTQ-for-LLaMa]#
```
|
sanak/dqn-SpaceInvadersNoFrameskip-v4
|
sanak
| 2023-03-24T04:38:06Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T04:37:27Z
|
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 439.00 +/- 150.86
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sanak -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sanak -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sanak
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Absie/Reinforce-Pixelcopter-PLE-v0
|
Absie
| 2023-03-24T04:27:49Z
| 0
| 0
| null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T04:27:19Z
|
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 116.80 +/- 109.11
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ThePianist/poca-SoccerTwos
|
ThePianist
| 2023-03-24T04:08:23Z
| 6
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-03-24T04:08:16Z
|
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: ThePianist/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kikijiki/dqn-SpaceInvadersNoFrameskip-v4
|
kikijiki
| 2023-03-24T03:51:11Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T03:48:41Z
|
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 628.00 +/- 144.28
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kikijiki -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kikijiki -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kikijiki
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
huolongguo10/CDial-GPT2-LCCC-Base-copy
|
huolongguo10
| 2023-03-24T03:38:48Z
| 10
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"conversational",
"dataset:silver/lccc",
"arxiv:1901.08149",
"arxiv:2008.03946",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-03-05T10:56:40Z
|
---
license: mit
tags:
- conversational
datasets: silver/lccc
---
## Chinese pre-trained dialogue model (CDial-GPT)
This project provides a large-scale Chinese GPT model pre-trained on the dataset [LCCC](https://huggingface.co/datasets/silver/lccc).
We present a series of Chinese GPT models that are first pre-trained on a Chinese novel dataset and then post-trained on our LCCC dataset.
Similar to [TransferTransfo](https://arxiv.org/abs/1901.08149), we concatenate all dialogue histories into one context sentence, and use this sentence to predict the response. The input of our model consists of word embedding, speaker embedding, and positional embedding of each word.
Paper: [A Large-Scale Chinese Short-Text Conversation Dataset](https://arxiv.org/pdf/2008.03946.pdf)
### How to use
```python
from transformers import OpenAIGPTLMHeadModel, GPT2LMHeadModel, BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained("thu-coai/CDial-GPT2_LCCC-base")
model = GPT2LMHeadModel.from_pretrained("thu-coai/CDial-GPT2_LCCC-base")
```
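A minimal generation sketch; it skips the speaker/segment embedding setup described above (see the official CDial-GPT repo for proper dialogue-history formatting), so treat it as a quick smoke test rather than the intended usage.
```python
from transformers import BertTokenizer, GPT2LMHeadModel

tokenizer = BertTokenizer.from_pretrained("thu-coai/CDial-GPT2_LCCC-base")
model = GPT2LMHeadModel.from_pretrained("thu-coai/CDial-GPT2_LCCC-base")

# Encode a single utterance and sample a continuation (no speaker embeddings here).
input_ids = tokenizer.encode("今天天气怎么样?", return_tensors="pt")
output_ids = model.generate(
    input_ids,
    max_length=64,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.pad_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```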
For more details, please refer to our [repo](https://github.com/thu-coai/CDial-GPT) on GitHub.
|
joe-hug/q-FrozenLake-v1-4x4-noSlippery
|
joe-hug
| 2023-03-24T03:21:35Z
| 0
| 0
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T03:21:33Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # `load_from_hub` is the helper provided in the Deep RL course notebook

model = load_from_hub(repo_id="joe-hug/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
stale2000/sd-dnditem
|
stale2000
| 2023-03-24T03:09:14Z
| 32
| 20
|
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-12-17T21:11:08Z
|
---
inference: true
language:
- en
tags:
- stable-diffusion
- text-to-image
license: other
---
dnditem
---
Examples | Examples
:-------------------------:|:-------------------------:
<img src="https://i.imgur.com/XCg4JmW.png" width="50%"/> | <img src="https://i.imgur.com/HRoKRlY.png" width="50%"/>
<img src="https://i.imgur.com/9KTpaIZ.png" width="50%"/> |
<img src="https://i.imgur.com/rZOJMQD.jpg" width="50%"/> |
MORE results here! Hundreds of images!! https://imgur.com/a/HvhOOjJ
This is a model (dnditem) for creating magic items, for the game Dungeons and Dragons! It was trained to be very similar to the official results that are available here: https://www.dndbeyond.com/magic-items
The model was trained in a pretty specific way though, and requires a specific way of prompting to get good results.
## Prompting
---
The keyword is "dnditem", and the prompts should be done in the following way:
"dnditem, [item type], [item style], [background]"
So, for example, a prompt could look like:
"dnditem, a pair of boots, spellguard style, light red circle inner background with white outer background".
or
"dnditem, a shield, shooting star style, light blue stripe inner background with white outer background".
## Item type
---
Currently the model supports and was trained on the following types:
"a pair of boots", "a cloak", "a pair of gloves", "a helmet", "a necklace", "a ring", "a robe", "a rod", "a shield", "a staff", "a sword", "a wand"
## Item styles
---
The item styles, or abilities, can be found in the itemstyles.txt file. There are over 100 of them, of all sorts of different types of dnditems.
Some cool ones to check out are "ultimate evil style", "blue and green transparent animated style", and "spell storing style".
## Background
---
Backgrounds should be prompted with an inner and an outer background, as well as a "shape" that is either "circle" or "stripe".
So something like "light blue circle inner background with white outer background".
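A minimal text-to-image sketch with diffusers, assuming the repository loads as a standard `StableDiffusionPipeline` (its tags indicate it does); adjust the dtype/device for your hardware.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stale2000/sd-dnditem",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Prompt follows the "dnditem, [item type], [item style], [background]" pattern described above.
prompt = "dnditem, a shield, shooting star style, light blue stripe inner background with white outer background"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("dnditem_shield.png")
```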
|
mrm8488/flan-t5-small-finetuned-samsum
|
mrm8488
| 2023-03-24T03:03:48Z
| 15
| 1
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"license:wtfpl",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-12-31T10:51:11Z
|
---
license: wtfpl
lang:
- en
widget:
- text: "Sid: Wanna catch a movie?\nAnnie: sure what do you have in mind?\nSid; the Aquaman? :D\nAnnie: haha isn't it a bit childish\nSid: noooooo I mean yes but it's the highest grossing movie this week\nAnnie: seriously?\nSid: yeah?\nAnnie: okay let's see what the fuss is all about"
---
# Flan-T5 (small) fine-tuned on SAMSUM for conversation summarization
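A minimal usage sketch, assuming the standard `summarization` pipeline works for this T5 checkpoint; the dialogue below is the widget example from the card metadata.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mrm8488/flan-t5-small-finetuned-samsum")

dialogue = """Sid: Wanna catch a movie?
Annie: sure what do you have in mind?
Sid: the Aquaman? :D
Annie: haha isn't it a bit childish
Sid: noooooo I mean yes but it's the highest grossing movie this week
Annie: seriously?
Sid: yeah?
Annie: okay let's see what the fuss is all about"""

print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```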
|
mrm8488/codebert-base-finetuned-code-ner
|
mrm8488
| 2023-03-24T03:03:35Z
| 20
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-21T15:20:01Z
|
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: codebert-base-finetuned-code-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codebert-base-finetuned-code-ner
This model is a fine-tuned version of [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3522
- Precision: 0.6297
- Recall: 0.6417
- F1: 0.6356
- Accuracy: 0.9185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 191 | 0.4601 | 0.4861 | 0.4578 | 0.4715 | 0.8853 |
| No log | 2.0 | 382 | 0.3989 | 0.5806 | 0.5243 | 0.5510 | 0.8996 |
| 0.5081 | 3.0 | 573 | 0.3547 | 0.5723 | 0.6017 | 0.5866 | 0.9059 |
| 0.5081 | 4.0 | 764 | 0.3507 | 0.6161 | 0.6115 | 0.6138 | 0.9135 |
| 0.5081 | 5.0 | 955 | 0.3412 | 0.6299 | 0.6252 | 0.6276 | 0.9161 |
| 0.2299 | 6.0 | 1146 | 0.3418 | 0.6162 | 0.6465 | 0.6310 | 0.9175 |
| 0.2299 | 7.0 | 1337 | 0.3497 | 0.6288 | 0.6287 | 0.6287 | 0.9175 |
| 0.1618 | 8.0 | 1528 | 0.3474 | 0.6340 | 0.6397 | 0.6368 | 0.9189 |
| 0.1618 | 9.0 | 1719 | 0.3501 | 0.6262 | 0.6432 | 0.6346 | 0.9179 |
| 0.1618 | 10.0 | 1910 | 0.3522 | 0.6297 | 0.6417 | 0.6356 | 0.9185 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
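A minimal inference sketch with the `transformers` pipeline API; the entity label set depends on the (unspecified) training data, so inspect the returned `entity_group` values for this checkpoint.
```python
from transformers import pipeline

code_ner = pipeline(
    "token-classification",
    model="mrm8488/codebert-base-finetuned-code-ner",
    aggregation_strategy="simple",
)

snippet = "def fetch_user(session, user_id): return session.query(User).get(user_id)"
print(code_ner(snippet))
```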
|
Rooshan/Rooshan-mbart-large50-1_finetuned_it_es
|
Rooshan
| 2023-03-24T02:50:34Z
| 104
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-24T02:34:29Z
|
---
tags:
- generated_from_trainer
model-index:
- name: Rooshan-mbart-large50-1_finetuned_it_es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Rooshan-mbart-large50-1_finetuned_it_es
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 50 | 1.4498 | 31.7771 | 29.77 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
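The model name suggests an Italian-to-Spanish fine-tune; the sketch below assumes that direction and the standard mBART-50 language codes (`it_IT`, `es_XX`), so verify both against the actual training setup.
```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "Rooshan/Rooshan-mbart-large50-1_finetuned_it_es"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="it_IT")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("Il tempo oggi è bellissimo.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["es_XX"],  # assumed target language
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```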
|
valooo/22222
|
valooo
| 2023-03-24T02:29:57Z
| 0
| 0
| null |
[
"zh",
"dataset:fka/awesome-chatgpt-prompts",
"license:openrail",
"region:us"
] | null | 2023-03-24T02:29:29Z
|
---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
language:
- zh
---
|
loluvulol/sd-1-5-jorocca
|
loluvulol
| 2023-03-24T02:24:27Z
| 9
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-24T01:44:27Z
|
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### SD-1-5-jorocca Dreambooth model trained by loluvulol with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook
Build your own using the [AI Avatar project](https://buildspace.so/builds/ai-avatar)!
To get started head over to the [project dashboard](https://buildspace.so/p/build-ai-avatars).
Sample pictures of this concept:

|
nhouben/ppo-LunarLander-v2
|
nhouben
| 2023-03-24T01:44:12Z
| 4
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-24T01:43:51Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.60 +/- 19.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
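A sketch of how the template above might be completed: load the checkpoint and re-evaluate it. The filename is an assumption based on the usual `<algo>-<env_id>.zip` naming, and LunarLander requires the Box2D extra (`pip install gym[box2d]`).
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="nhouben/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```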
|
xinyixiuxiu/albert-xxlarge-v2-SST2-finetuned-try
|
xinyixiuxiu
| 2023-03-24T01:43:28Z
| 60
| 0
|
transformers
|
[
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-24T01:12:46Z
|
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: xinyixiuxiu/albert-xxlarge-v2-SST2-finetuned-try
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xinyixiuxiu/albert-xxlarge-v2-SST2-finetuned-try
This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3689
- Train Accuracy: 0.8560
- Validation Loss: 0.3286
- Validation Accuracy: 0.8899
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6923 | 0.5660 | 0.6693 | 0.5814 | 0 |
| 0.6081 | 0.6550 | 0.5309 | 0.7431 | 1 |
| 0.3689 | 0.8560 | 0.3286 | 0.8899 | 2 |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.7.0
- Datasets 2.10.1
- Tokenizers 0.12.1
|
pszemraj/distilgpt2-magicprompt-SD
|
pszemraj
| 2023-03-24T01:08:44Z
| 22
| 3
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"stable diffusion",
"diffusion",
"text2image",
"prompt augment",
"prompt engineering",
"dataset:Gustavosta/Stable-Diffusion-Prompts",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-09T10:14:37Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
- stable diffusion
- diffusion
- text2image
- prompt augment
- prompt engineering
datasets:
- Gustavosta/Stable-Diffusion-Prompts
model-index:
- name: distilgpt2-magicprompt-SD
results: []
thumbnail: https://i.ibb.co/WkmTnZD/image.png
widget:
- text: "morning sun over Jakarta"
example_title: "morning sun"
- text: "WARNING: pip is"
example_title: "pip"
- text: "sentient cheese"
example_title: "sentient cheese"
- text: "cheeps are"
example_title: "cheeps"
- text: "avocado armchair"
example_title: "creative prompt"
- text: "Landscape of"
example_title: "landscape"
parameters:
min_length: 16
max_new_tokens: 24
no_repeat_ngram_size: 1
do_sample: True
---
# distilgpt2-magicprompt-SD
[](https://colab.research.google.com/gist/pszemraj/bdddf9c3fe92d1ac2654730016d64c80/demo-distilgpt2-magicprompt.ipynb)
Generate/augment your prompt, stable diffusion style.
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the Gustavosta/Stable-Diffusion-Prompts dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3089
- eval_steps_per_second = 17.201
- perplexity = 3.7022
## example
Results in (_DALL-E, but you get the idea_):

<br>
this `distilgpt2` version is probably small/fast enough to be used locally on CPU!
## basic usage
install transformers as needed:
```bash
pip install -U transformers
```
load and query through a `pipeline` object:
```python
from transformers import pipeline
model_tag = "pszemraj/distilgpt2-magicprompt-SD"
generator = pipeline(
"text-generation",
model=model_tag,
)
prompt = "The Answer to Why"
result = generator(
prompt,
max_new_tokens=24,
) # generate, adjust/add kwargs as needed
print(result[0]["generated_text"])
```
## Training and evaluation data
refer to the `Gustavosta/Stable-Diffusion-Prompts` dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7061 | 0.99 | 33 | 2.5859 |
| 2.08 | 1.99 | 66 | 1.9965 |
| 1.7623 | 2.99 | 99 | 1.7248 |
| 1.5408 | 3.99 | 132 | 1.5449 |
| 1.4147 | 4.99 | 165 | 1.4437 |
| 1.3593 | 5.99 | 198 | 1.3768 |
| 1.2703 | 6.99 | 231 | 1.3362 |
| 1.2528 | 7.99 | 264 | 1.3175 |
| 1.1981 | 8.99 | 297 | 1.3091 |
| 1.2117 | 9.99 | 330 | 1.3089 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
selcukkubur/rafadan-hayri
|
selcukkubur
| 2023-03-24T01:08:20Z
| 5
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-24T01:02:25Z
|
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### rafadan-hayri Dreambooth model trained by selcukkubur with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
BM-K/KoMiniLM-68M
|
BM-K
| 2023-03-24T00:47:51Z
| 13
| 2
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"arxiv:2002.10957",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-18T07:20:19Z
|
# KoMiniLM
🐣 Korean mini language model
## Overview
Current language models usually consist of hundreds of millions of parameters, which brings challenges for fine-tuning and online serving in real-life applications due to latency and capacity constraints. In this project, we release a lightweight Korean language model to address the aforementioned shortcomings of existing language models.
## Quick tour
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("BM-K/KoMiniLM-68M") # 68M model
model = AutoModel.from_pretrained("BM-K/KoMiniLM-68M")
inputs = tokenizer("안녕 세상아!", return_tensors="pt")
outputs = model(**inputs)
```
## Update history
** Updates on 2022.06.20 **
- Release KoMiniLM-bert-68M
** Updates on 2022.05.24 **
- Release KoMiniLM-bert-23M
## Pre-training
`Teacher Model`: [KLUE-BERT(base)](https://github.com/KLUE-benchmark/KLUE)
### Object
Self-Attention Distribution and Self-Attention Value-Relation [[Wang et al., 2020]](https://arxiv.org/abs/2002.10957) were distilled from each discrete layer of the teacher model to the student model. Wang et al. distilled only from the last layer of the transformer, but that was not the case in this project.
### Data sets
|Data|News comments|News article|
|:----:|:----:|:----:|
|size|10G|10G|
### Config
- **KoMiniLM-68M**
```json
{
"architectures": [
"BertForPreTraining"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 6,
"output_attentions": true,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"return_dict": false,
"torch_dtype": "float32",
"transformers_version": "4.13.0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 32000
}
```
### Performance on subtasks
- The results of our fine-tuning experiments are an average of 3 runs for each task.
```
cd KoMiniLM-Finetune
bash scripts/run_all_kominilm.sh
```
|| #Param | Average | NSMC<br>(Acc) | Naver NER<br>(F1) | PAWS<br>(Acc) | KorNLI<br>(Acc) | KorSTS<br>(Spearman) | Question Pair<br>(Acc) | KorQuaD<br>(Dev)<br>(EM/F1) |
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|KoBERT(KLUE)| 110M | 86.84 | 90.20±0.07 | 87.11±0.05 | 81.36±0.21 | 81.06±0.33 | 82.47±0.14 | 95.03±0.44 | 84.43±0.18 / <br>93.05±0.04 |
|KcBERT| 108M | 78.94 | 89.60±0.10 | 84.34±0.13 | 67.02±0.42| 74.17±0.52 | 76.57±0.51 | 93.97±0.27 | 60.87±0.27 / <br>85.01±0.14 |
|KoBERT(SKT)| 92M | 79.73 | 89.28±0.42 | 87.54±0.04 | 80.93±0.91 | 78.18±0.45 | 75.98±2.81 | 94.37±0.31 | 51.94±0.60 / <br>79.69±0.66 |
|DistilKoBERT| 28M | 74.73 | 88.39±0.08 | 84.22±0.01 | 61.74±0.45 | 70.22±0.14 | 72.11±0.27 | 92.65±0.16 | 52.52±0.48 / <br>76.00±0.71 |
| | | | | | | | | |
|**KoMiniLM<sup>†</sup>**| **68M** | 85.90 | 89.84±0.02 | 85.98±0.09 | 80.78±0.30 | 79.28±0.17 | 81.00±0.07 | 94.89±0.37 | 83.27±0.08 / <br>92.08±0.06 |
|**KoMiniLM<sup>†</sup>**| **23M** | 84.79 | 89.67±0.03 | 84.79±0.09 | 78.67±0.45 | 78.10±0.07 | 78.90±0.11 | 94.81±0.12 | 82.11±0.42 / <br>91.21±0.29 |
- [NSMC](https://github.com/e9t/nsmc) (Naver Sentiment Movie Corpus)
- [Naver NER](https://github.com/naver/nlp-challenge) (NER task on Naver NLP Challenge 2018)
- [PAWS](https://github.com/google-research-datasets/paws) (Korean Paraphrase Adversaries from Word Scrambling)
- [KorNLI/KorSTS](https://github.com/kakaobrain/KorNLUDatasets) (Korean Natural Language Understanding)
- [Question Pair](https://github.com/songys/Question_pair) (Paired Question)
- [KorQuAD](https://korquad.github.io/) (The Korean Question Answering Dataset)
<img src = "https://user-images.githubusercontent.com/55969260/174229747-279122dc-9d27-4da9-a6e7-f9f1fe1651f7.png"> <br>
### User Contributed Examples
-
## Reference
- [KLUE BERT](https://github.com/KLUE-benchmark/KLUE)
- [KcBERT](https://github.com/Beomi/KcBERT)
- [SKT KoBERT](https://github.com/SKTBrain/KoBERT)
- [DistilKoBERT](https://github.com/monologg/DistilKoBERT)
- [lassl](https://github.com/lassl/lassl)
|
huggingtweets/twitter
|
huggingtweets
| 2023-03-24T00:31:21Z
| 4
| 2
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-21T13:07:38Z
|
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1488548719062654976/u6qfBBkF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Twitter</div>
<div style="text-align: center; font-size: 14px;">@twitter</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Twitter.
| Data | Twitter |
| --- | --- |
| Tweets downloaded | 3181 |
| Retweets | 42 |
| Short tweets | 626 |
| Tweets kept | 2513 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/tzi87fkr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @twitter's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/jcixm01r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/jcixm01r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/twitter')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
bucktrends/dummy-model
|
bucktrends
| 2023-03-23T23:57:55Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-03-23T23:23:47Z
|
---
license: mit
datasets:
- oscar
language:
- fr
---
|
amankishore/stable-diffusion-v1-5-plan
|
amankishore
| 2023-03-23T23:50:06Z
| 0
| 0
| null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-23T23:35:22Z
|
---
license: creativeml-openrail-m
---
|
SAL83/Pixelcopter-PLE-v0
|
SAL83
| 2023-03-23T23:47:03Z
| 0
| 0
| null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-22T22:27:28Z
|
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 7.10 +/- 4.01
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
SAL83/Pyramids
|
SAL83
| 2023-03-23T23:43:01Z
| 0
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-03-23T23:42:07Z
|
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: SAL83/Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
consciousAI/question-generation-auto-hints-t5-v1-base-s-q
|
consciousAI
| 2023-03-23T23:34:41Z
| 19
| 1
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"Question(s) Generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-23T01:33:34Z
|
---
tags:
- Question(s) Generation
metrics:
- rouge
model-index:
- name: consciousAI/question-generation-auto-hints-t5-v1-base-s-q
results: []
---
# Auto Question Generation
The model is intended for automatic and/or hint-enabled question generation. It is expected to produce one or possibly more than one question from the provided context.
[Live Demo: Question Generation](https://huggingface.co/spaces/consciousAI/question_generation)
Including this one, there are five models trained on different training sets; the demo provides a comparison of all of them in one go. However, you can reach the individual projects at the links below:
[Auto Question Generation v1](https://huggingface.co/consciousAI/question-generation-auto-t5-v1-base-s)
[Auto Question Generation v2](https://huggingface.co/consciousAI/question-generation-auto-t5-v1-base-s-q)
[Auto Question Generation v3](https://huggingface.co/consciousAI/question-generation-auto-t5-v1-base-s-q-c)
[Auto/Hints based Question Generation v2](https://huggingface.co/consciousAI/question-generation-auto-hints-t5-v1-base-s-q-c)
This model can be used as below:
```python
import torch
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer
)

device = "cuda" if torch.cuda.is_available() else "cpu"

model_checkpoint = "consciousAI/question-generation-auto-hints-t5-v1-base-s-q"
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
## Input with prompt
context="question_context: <context>"
encodings = tokenizer.encode(context, return_tensors='pt', truncation=True, padding='max_length').to(device)
## You can play with many hyperparams to condition the output, look at demo
output = model.generate(encodings,
#max_length=300,
#min_length=20,
#length_penalty=2.0,
num_beams=4,
#early_stopping=True,
#do_sample=True,
#temperature=1.1
)
## Multiple questions are expected to be delimited by '?' You can write a small wrapper to elegantly format. Look at the demo.
questions = [tokenizer.decode(id, clean_up_tokenization_spaces=False, skip_special_tokens=False) for id in output]
```
## Training and evaluation data
Squad & QNLi combo.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|
| 1.8298 | 1.0 | 14515 | 1.7529 | 0.3535 | 0.1825 | 0.3251 | 0.3294 |
| 1.4931 | 2.0 | 29030 | 1.7132 | 0.3558 | 0.1881 | 0.3267 | 0.3308 |
| 1.2756 | 3.0 | 43545 | 1.7579 | 0.3604 | 0.1901 | 0.3307 | 0.3345 |
| 1.0936 | 4.0 | 58060 | 1.8173 | 0.36 | 0.1901 | 0.3295 | 0.3334 |
| 0.955 | 5.0 | 72575 | 1.9204 | 0.3611 | 0.1884 | 0.3295 | 0.3336 |
| 0.8117 | 6.0 | 87090 | 2.0183 | 0.355 | 0.1836 | 0.3241 | 0.3282 |
| 0.6949 | 7.0 | 101605 | 2.1347 | 0.3556 | 0.1836 | 0.3242 | 0.3282 |
| 0.636 | 8.0 | 116120 | 2.2567 | 0.3568 | 0.1855 | 0.3248 | 0.3286 |
| 0.591 | 9.0 | 130635 | 2.3598 | 0.3563 | 0.1844 | 0.3238 | 0.3281 |
| 0.5417 | 10.0 | 145150 | 2.4725 | 0.3556 | 0.1828 | 0.3229 | 0.3269 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.0
|
LozanoJohan/Reinforce_0
|
LozanoJohan
| 2023-03-23T23:31:20Z
| 0
| 0
| null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-23T23:31:08Z
|
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ElementBrawlerAI/Reinforce-PixelCopter-v0
|
ElementBrawlerAI
| 2023-03-23T23:28:05Z
| 0
| 0
| null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-22T05:09:20Z
|
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 8.80 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Chattiori/VelvetMix
|
Chattiori
| 2023-03-23T23:27:40Z
| 0
| 4
| null |
[
"en",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-23T13:55:56Z
|
---
license: creativeml-openrail-m
language:
- en
---
VelvetMix is checkpoint merge model of El Zipang, LOFI, RealDosMix, Erotic Vision and Perfect World.
|
Ellipsoul/ppo-Pyramids
|
Ellipsoul
| 2023-03-23T23:21:06Z
| 20
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-03-23T23:08:12Z
|
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: Ellipsoul/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
FaroukFaiz/test
|
FaroukFaiz
| 2023-03-23T23:09:27Z
| 0
| 0
|
mlconsole
|
[
"mlconsole",
"tabular-regression",
"dataset:house_price_prediction",
"license:unknown",
"model-index",
"region:us"
] |
tabular-regression
| 2023-03-23T23:09:24Z
|
---
license: unknown
inference: false
tags:
- mlconsole
- tabular-regression
library_name: mlconsole
metrics:
- mae
- loss
datasets:
- house_price_prediction
model-index:
- name: house_price_prediction_2
results:
- task:
type: tabular-regression
name: tabular-regression
dataset:
type: house_price_prediction
name: house_price_prediction
metrics:
- type: mae
name: Mean absolute error
value: 5.793356418609619
- type: loss
name: Model loss
value: 60.74188995361328
---
# regression model trained on "house_price_prediction"
🤖 [Load and use this model](https://mlconsole.com/model/hf/FaroukFaiz/house_price_prediction_2) in one click.
🧑💻 [Train your own model](https://mlconsole.com) on ML Console.
|
YoanG/ppo-SnowballTarget
|
YoanG
| 2023-03-23T22:59:23Z
| 0
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-23T22:59:17Z
|
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: YoanG/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|